#+title: Simulated Sense of Hearing
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Simulating multiple listeners and the sense of hearing in jMonkeyEngine3
#+keywords: simulated hearing, openal, clojure, jMonkeyEngine3, LWJGL, AI
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+BABEL: :exports both :noweb yes :cache no :mkdirp yes

* Hearing

I want to be able to place ears in a similar manner to how I place
the eyes. I want to be able to place ears in a unique spatial
position, and receive as output at every tick the F.F.T. of whatever
signals are happening at that point.

Hearing is one of the more difficult senses to simulate, because there
is less support for obtaining the actual sound data that is processed
by jMonkeyEngine3.

jMonkeyEngine's sound system works as follows:

 - jMonkeyEngine uses the =AppSettings= for the particular application
   to determine what sort of =AudioRenderer= should be used.
 - Although some support is provided for multiple =AudioRenderer=
   backends, jMonkeyEngine at the time of this writing will either
   pick no AudioRenderer at all or the =LwjglAudioRenderer=.
 - jMonkeyEngine tries to figure out what sort of system you're
   running and extracts the appropriate native libraries.
 - The =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (LightWeight Java Game
   Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]].
 - =OpenAL= calculates the 3D sound localization and feeds a stream of
   sound to any of various sound output devices with which it knows
   how to communicate.

A consequence of this is that there's no way to access the actual
sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
one /listener/, which normally isn't a problem for games, but becomes
a problem when trying to make multiple AI creatures that can each hear
the world from a different perspective.

To make many AI creatures in jMonkeyEngine that can each hear the
world from their own perspective, it is necessary to go all the way
back to =OpenAL= and implement support for simulated hearing there.

* Extending =OpenAL=
** =OpenAL= Devices

=OpenAL= goes to great lengths to support many different systems, all
with different sound capabilities and interfaces. It accomplishes this
difficult task by providing code for many different sound backends in
pseudo-objects called /Devices/. There's a device for the Linux Open
Sound System and the Advanced Linux Sound Architecture, there's one
for Direct Sound on Windows, and there's even one for Solaris. =OpenAL=
solves the problem of platform independence by providing all these
Devices.

Wrapper libraries such as LWJGL are free to examine the system on
which they are running and then select an appropriate device for that
system.

There are also a few "special" devices that don't interface with any
particular system.
These include the Null Device, which doesn't do
anything, and the Wave Device, which writes whatever sound it receives
to a file, if everything has been set up correctly when configuring
=OpenAL=.

Actual mixing of the sound data happens in the Devices, and they are
the only point in the sound rendering process where this data is
available.

Therefore, in order to support multiple listeners, and get the sound
data in a form that the AIs can use, it is necessary to create a new
Device which supports these features.

** The Send Device
Adding a device to OpenAL is rather tricky -- there are five separate
files in the =OpenAL= source tree that must be modified to do so. I've
documented this process [[./add-new-device.org][here]] for anyone who is interested.

Onward to the actual Device!

Again, my objectives are:

 - Support multiple listeners from jMonkeyEngine3.
 - Get access to the rendered sound data for further processing from
   clojure.

** =send.c=

** Header
#+name: send-header
#+begin_src C
#include "config.h"
#include <stdlib.h>
#include "alMain.h"
#include "AL/al.h"
#include "AL/alc.h"
#include "alSource.h"
#include <jni.h>

//////////////////// Summary

struct send_data;
struct context_data;

static void addContext(ALCdevice *, ALCcontext *);
static void syncContexts(ALCcontext *master, ALCcontext *slave);
static void syncSources(ALsource *master, ALsource *slave,
                        ALCcontext *masterCtx, ALCcontext *slaveCtx);

static void syncSourcei(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSourcef(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSource3i(ALuint master, ALuint slave,
                         ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSource3f(ALuint master, ALuint slave,
                         ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);

static void swapInContext(ALCdevice *, struct context_data *);
static void saveContext(ALCdevice *, struct context_data *);
static void limitContext(ALCdevice *, ALCcontext *);
static void unLimitContext(ALCdevice *);

static void init(ALCdevice *);
static void renderData(ALCdevice *, int samples);

#define UNUSED(x) (void)(x)
#+end_src

The main idea behind the Send device is to take advantage of the fact
that LWJGL only manages one /context/ when using OpenAL. A /context/
is like a container that holds samples and keeps track of where the
listener is. In order to support multiple listeners, the Send device
identifies the LWJGL context as the master context, and creates any
number of slave contexts to represent additional listeners. Every
time the device renders sound, it synchronizes every source from the
master LWJGL context to the slave contexts. Then, it renders each
context separately, using a different listener for each one. The
rendered sound is made available via JNI to jMonkeyEngine.

To recap, the process is:
 - Set the LWJGL context as "master" in the =init()= method.
 - Create any number of additional contexts via =addContext()=.
 - At every call to =renderData()=, sync the master context with the
   slave contexts via =syncContexts()=.
 - =syncContexts()= calls =syncSources()= to sync all the sources
   which are in the master context.
 - =limitContext()= and =unLimitContext()= make it possible to render
   only one context at a time.
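
Condensed into code, one render pass looks something like the
following sketch. The real =renderData()=, defined below, does exactly
this, plus swapping per-context device state in and out:

#+begin_src C
/* Sketch of one render pass.  The real renderData() below also
   saves and restores per-context device state around aluMixData(). */
static void render_pass(ALCdevice *dev, send_data *data, int samples){
  ALuint i;
  // bring every slave context up to date with the master (index 0)
  for (i = 1; i < data->numContexts; i++)
    syncContexts(data->contexts[0]->ctx, data->contexts[i]->ctx);
  // render each context by itself, into its own buffer
  for (i = 0; i < data->numContexts; i++){
    limitContext(dev, data->contexts[i]->ctx);
    aluMixData(dev, data->contexts[i]->renderBuffer, samples);
    unLimitContext(dev);
  }
}
#+end_src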
** Necessary State
#+name: send-state
#+begin_src C
//////////////////// State

typedef struct context_data {
  ALfloat ClickRemoval[MAXCHANNELS];
  ALfloat PendingClicks[MAXCHANNELS];
  ALvoid *renderBuffer;
  ALCcontext *ctx;
} context_data;

typedef struct send_data {
  ALuint size;
  context_data **contexts;
  ALuint numContexts;
  ALuint maxContexts;
} send_data;
#+end_src

Switching between contexts is not the normal operation of a Device,
and one of the problems with doing so is that a Device normally keeps
around a few pieces of state, such as the =ClickRemoval= array above,
which will become corrupted if the contexts are not rendered in
parallel. The solution is to create a copy of this normally global
device state for each context, and copy it back and forth into and out
of the actual device state whenever a context is rendered.

** Synchronization Macros
#+name: sync-macros
#+begin_src C
//////////////////// Context Creation / Synchronization

#define _MAKE_SYNC(NAME, INIT_EXPR, GET_EXPR, SET_EXPR)  \
  void NAME (ALuint sourceID1, ALuint sourceID2,         \
             ALCcontext *ctx1, ALCcontext *ctx2,         \
             ALenum param){                              \
    INIT_EXPR;                                           \
    ALCcontext *current = alcGetCurrentContext();        \
    alcMakeContextCurrent(ctx1);                         \
    GET_EXPR;                                            \
    alcMakeContextCurrent(ctx2);                         \
    SET_EXPR;                                            \
    alcMakeContextCurrent(current);                      \
  }

#define MAKE_SYNC(NAME, TYPE, GET, SET)                  \
  _MAKE_SYNC(NAME,                                       \
             TYPE value,                                 \
             GET(sourceID1, param, &value),              \
             SET(sourceID2, param, value))

#define MAKE_SYNC3(NAME, TYPE, GET, SET)                        \
  _MAKE_SYNC(NAME,                                              \
             TYPE value1; TYPE value2; TYPE value3;,            \
             GET(sourceID1, param, &value1, &value2, &value3),  \
             SET(sourceID2, param, value1, value2, value3))

MAKE_SYNC( syncSourcei,  ALint,   alGetSourcei,  alSourcei);
MAKE_SYNC( syncSourcef,  ALfloat, alGetSourcef,  alSourcef);
MAKE_SYNC3(syncSource3i, ALint,   alGetSource3i, alSource3i);
MAKE_SYNC3(syncSource3f, ALfloat, alGetSource3f, alSource3f);

#+end_src

Setting the state of an =OpenAL= source is done with the =alSourcei=,
=alSourcef=, =alSource3i=, and =alSource3f= functions. In order to
completely synchronize two sources, it is necessary to use all of
them. These macros help to condense the otherwise repetitive
synchronization code involving these similar low-level =OpenAL= functions.
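
For example, =MAKE_SYNC(syncSourcef, ALfloat, alGetSourcef, alSourcef)=
expands to the following (hand-expanded and reformatted for clarity):

#+begin_src C
// Hand-expanded form of MAKE_SYNC(syncSourcef, ALfloat, alGetSourcef, alSourcef):
void syncSourcef(ALuint sourceID1, ALuint sourceID2,
                 ALCcontext *ctx1, ALCcontext *ctx2,
                 ALenum param){
  ALfloat value;                             // INIT_EXPR
  ALCcontext *current = alcGetCurrentContext();
  alcMakeContextCurrent(ctx1);
  alGetSourcef(sourceID1, param, &value);    // GET_EXPR: read from master
  alcMakeContextCurrent(ctx2);
  alSourcef(sourceID2, param, value);        // SET_EXPR: write to slave
  alcMakeContextCurrent(current);
}
#+end_src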

** Source Synchronization
#+name: sync-sources
#+begin_src C
void syncSources(ALsource *masterSource, ALsource *slaveSource,
                 ALCcontext *masterCtx, ALCcontext *slaveCtx){
  ALuint master = masterSource->source;
  ALuint slave = slaveSource->source;
  ALCcontext *current = alcGetCurrentContext();

  syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);

  syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
  syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
  syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);

  syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
  syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);

  alcMakeContextCurrent(masterCtx);
  ALint source_type;
  alGetSourcei(master, AL_SOURCE_TYPE, &source_type);

  // Only static sources are currently synchronized!
  if (AL_STATIC == source_type){
    ALint master_buffer;
    ALint slave_buffer;
    alGetSourcei(master, AL_BUFFER, &master_buffer);
    alcMakeContextCurrent(slaveCtx);
    alGetSourcei(slave, AL_BUFFER, &slave_buffer);
    if (master_buffer != slave_buffer){
      alSourcei(slave, AL_BUFFER, master_buffer);
    }
  }

  // Synchronize the state of the two sources.
  alcMakeContextCurrent(masterCtx);
  ALint masterState;
  ALint slaveState;

  alGetSourcei(master, AL_SOURCE_STATE, &masterState);
  alcMakeContextCurrent(slaveCtx);
  alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);

  if (masterState != slaveState){
    switch (masterState){
    case AL_INITIAL : alSourceRewind(slave); break;
    case AL_PLAYING : alSourcePlay(slave);   break;
    case AL_PAUSED  : alSourcePause(slave);  break;
    case AL_STOPPED : alSourceStop(slave);   break;
    }
  }
  // Restore whatever context was previously active.
  alcMakeContextCurrent(current);
}
#+end_src
This function is long because it has to exhaustively go through all the
possible state that a source can have and make sure that it is the
same between the master and slave sources. I'd like to take this
moment to salute the [[http://connect.creativelabs.com/openal/Documentation/Forms/AllItems.aspx][=OpenAL= Reference Manual]], which provides a very
good description of =OpenAL='s internals.
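
Since only =AL_STATIC= sources are mirrored, extending =syncSources()=
to streaming sources would mean mirroring their buffer queues as
well. An untested sketch of the bookkeeping involved, using the
standard =AL_BUFFERS_QUEUED= and =AL_BUFFERS_PROCESSED= queries (the
actual duplication of queued buffers is left out):

#+begin_src C
/* Untested sketch: what mirroring a streaming source would involve.
   A full implementation would also have to duplicate the queued
   buffers themselves in the slave context. */
static void syncStreamingSource(ALuint master, ALCcontext *masterCtx){
  ALint queued, processed;
  alcMakeContextCurrent(masterCtx);
  alGetSourcei(master, AL_BUFFERS_QUEUED,    &queued);
  alGetSourcei(master, AL_BUFFERS_PROCESSED, &processed);
  /* (queued - processed) buffers are still pending on the master;
     each would have to be queued, in order, on the slave source. */
  UNUSED(queued); UNUSED(processed);
}
#+end_src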

** Context Synchronization
#+name: sync-contexts
#+begin_src C
void syncContexts(ALCcontext *master, ALCcontext *slave){
  /* If there aren't sufficient sources in slave to mirror
     the sources in master, create them. */
  ALCcontext *current = alcGetCurrentContext();

  UIntMap *masterSourceMap = &(master->SourceMap);
  UIntMap *slaveSourceMap = &(slave->SourceMap);
  ALuint numMasterSources = masterSourceMap->size;
  ALuint numSlaveSources = slaveSourceMap->size;

  alcMakeContextCurrent(slave);
  if (numSlaveSources < numMasterSources){
    ALuint numMissingSources = numMasterSources - numSlaveSources;
    ALuint newSources[numMissingSources];
    alGenSources(numMissingSources, newSources);
  }

  /* Now, slave is guaranteed to have at least as many sources
     as master. Sync each source from master to the corresponding
     source in slave. */
  int i;
  for(i = 0; i < masterSourceMap->size; i++){
    syncSources((ALsource*)masterSourceMap->array[i].value,
                (ALsource*)slaveSourceMap->array[i].value,
                master, slave);
  }
  alcMakeContextCurrent(current);
}
#+end_src

Most of the hard work in context synchronization is done in
=syncSources()=. The only thing that =syncContexts()= has to worry
about is automatically creating new sources whenever a slave context
does not have the same number of sources as the master context.

** Context Creation
#+name: context-creation
#+begin_src C
static void addContext(ALCdevice *Device, ALCcontext *context){
  send_data *data = (send_data*)Device->ExtraData;
  // expand the context array if necessary
  if (data->numContexts >= data->maxContexts){
    ALuint newMaxContexts = data->maxContexts*2 + 1;
    // the array holds pointers, so size it by sizeof(context_data*)
    data->contexts =
      realloc(data->contexts, newMaxContexts*sizeof(context_data*));
    data->maxContexts = newMaxContexts;
  }
  // create a context_data and add it to the main array
  context_data *ctxData;
  ctxData = (context_data*)calloc(1, sizeof(*ctxData));
  ctxData->renderBuffer =
    malloc(BytesFromDevFmt(Device->FmtType) *
           Device->NumChan * Device->UpdateSize);
  ctxData->ctx = context;

  data->contexts[data->numContexts] = ctxData;
  data->numContexts++;
}
#+end_src

Here, the slave context is registered, and its data is stored in the
device-wide =ExtraData= structure. The =renderBuffer= that is created
here is where the rendered sound samples for this slave context will
eventually go.
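
The render buffer size is just sample width times channel count times
update length. For example, assuming a 16-bit (=DevFmtShort=) stereo
device updating in 1024-frame chunks (illustrative values, not
mandated by the code):

#+begin_src C
/* Example: how big is one context's renderBuffer? */
ALuint exampleBufferBytes(void){
  ALuint bytesPerSample = 2;    /* BytesFromDevFmt(DevFmtShort) */
  ALuint channels       = 2;    /* Device->NumChan for stereo   */
  ALuint updateFrames   = 1024; /* Device->UpdateSize           */
  return bytesPerSample * channels * updateFrames; /* = 4096 bytes */
}
#+end_src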
** Context Switching
#+name: context-switching
#+begin_src C
//////////////////// Context Switching

/* A device brings along with it two pieces of state
 * which have to be swapped in and out with each context.
 */
static void swapInContext(ALCdevice *Device, context_data *ctxData){
  memcpy(Device->ClickRemoval, ctxData->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
  memcpy(Device->PendingClicks, ctxData->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
}

static void saveContext(ALCdevice *Device, context_data *ctxData){
  memcpy(ctxData->ClickRemoval, Device->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
  memcpy(ctxData->PendingClicks, Device->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
}

static ALCcontext **currentContext;
static ALuint currentNumContext;
static ALCcontext *soloContext;

/* By default, all contexts are rendered at once for each call to aluMixData.
 * This function uses the internals of the ALCdevice struct to temporarily
 * cause aluMixData to only render the chosen context.
 */
static void limitContext(ALCdevice *Device, ALCcontext *ctx){
  currentContext = Device->Contexts;
  currentNumContext = Device->NumContexts;
  // store ctx in a static so that Device->Contexts does not point
  // into this function's stack frame after it returns
  soloContext = ctx;
  Device->Contexts = &soloContext;
  Device->NumContexts = 1;
}

static void unLimitContext(ALCdevice *Device){
  Device->Contexts = currentContext;
  Device->NumContexts = currentNumContext;
}
#+end_src

=OpenAL= normally renders all contexts in parallel, outputting the
whole result to the buffer. It does this by iterating over the
=Device->Contexts= array and rendering each context to the buffer in
turn. By temporarily setting =Device->NumContexts= to 1 and adjusting
the Device's context list to put the desired context-to-be-rendered
into position 0, we can trick =OpenAL= into rendering each slave
context separately from all the others.

** Main Device Loop
#+name: main-loop
#+begin_src C
//////////////////// Main Device Loop

/* Establish the LWJGL context as the master context, which will
 * be synchronized to all the slave contexts.
 */
static void init(ALCdevice *Device){
  ALCcontext *masterContext = alcGetCurrentContext();
  addContext(Device, masterContext);
}


static void renderData(ALCdevice *Device, int samples){
  if(!Device->Connected){return;}
  send_data *data = (send_data*)Device->ExtraData;
  ALCcontext *current = alcGetCurrentContext();

  ALuint i;
  for (i = 1; i < data->numContexts; i++){
    syncContexts(data->contexts[0]->ctx, data->contexts[i]->ctx);
  }

  if ((ALuint) samples > Device->UpdateSize){
    printf("exceeding internal buffer size; dropping samples\n");
    printf("requested %d; available %d\n", samples, (int)Device->UpdateSize);
    samples = (int) Device->UpdateSize;
  }

  for (i = 0; i < data->numContexts; i++){
    context_data *ctxData = data->contexts[i];
    ALCcontext *ctx = ctxData->ctx;
    alcMakeContextCurrent(ctx);
    limitContext(Device, ctx);
    swapInContext(Device, ctxData);
    aluMixData(Device, ctxData->renderBuffer, samples);
    saveContext(Device, ctxData);
    unLimitContext(Device);
  }
  alcMakeContextCurrent(current);
}
#+end_src

The main loop synchronizes the master LWJGL context with all the slave
contexts, then walks each context, rendering just that context to its
audio-sample storage buffer.

** JNI Methods

At this point, we have the ability to create multiple listeners by
using the master/slave context trick, and the rendered audio data is
waiting patiently in internal buffers, one for each listener. We need
a way to transport this information to Java, and also a way to drive
this device from Java. The following JNI interface code is inspired
by the way LWJGL interfaces with =OpenAL=.

*** step
#+name: jni-step
#+begin_src C
//////////////////// JNI Methods

#include "com_aurellem_send_AudioSend.h"

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nstep
 * Signature: (JI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nstep
(JNIEnv *env, jclass clazz, jlong device, jint samples){
  UNUSED(env);UNUSED(clazz);
  renderData((ALCdevice*)((intptr_t)device), samples);
}
#+end_src
This device, unlike most of the other devices in =OpenAL=, does not
render sound unless asked. This enables the system to slow down or
speed up depending on the needs of the AIs who are using it to
listen. If the device tried to render samples in real time, a
complicated AI whose mind takes 100 seconds of computer time to
simulate 1 second of AI-time would miss almost all of the sound in
its environment.
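
For example, a driver that wants exactly one simulation tick's worth
of sound per step might call =renderData()= like this. This is only a
sketch: the sample rate and tick rate are assumed, and
=simulationStep= is a hypothetical caller, not part of the device:

#+begin_src C
/* Hypothetical driver: advance the sound world by exactly one
   simulation tick per call, no matter how long the AI takes to
   think in wall-clock time.  The constants are illustrative. */
enum { SAMPLE_RATE = 44100,      /* typically Device->Frequency */
       TICKS_PER_SECOND = 60 };  /* assumed simulation rate     */

static void simulationStep(ALCdevice *device){
  int samplesPerTick = SAMPLE_RATE / TICKS_PER_SECOND;  /* = 735 */
  renderData(device, samplesPerTick);
  /* ...each context's renderBuffer now holds this tick's sound... */
}
#+end_src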

*** getSamples
#+name: jni-get-samples
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetSamples
 * Signature: (JLjava/nio/ByteBuffer;III)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ngetSamples
(JNIEnv *env, jclass clazz, jlong device, jobject buffer, jint position,
 jint samples, jint n){
  UNUSED(clazz);

  ALvoid *buffer_address =
    ((ALbyte *)(((char*)(*env)->GetDirectBufferAddress(env, buffer)) + position));
  ALCdevice *recorder = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)recorder->ExtraData;
  // reject out-of-range context indices (valid ones are 0..numContexts-1)
  if ((ALuint)n >= data->numContexts){return;}
  memcpy(buffer_address, data->contexts[n]->renderBuffer,
         BytesFromDevFmt(recorder->FmtType) * recorder->NumChan * samples);
}
#+end_src

This is the transport layer between C and Java that will eventually
allow us to access rendered sound data from clojure.

*** Listener Management

=addListener=, =setNthListenerf=, and =setNthListener3f= are
necessary to change the properties of any listener other than the
master one, since only the listener of the currently active context is
affected by the normal =OpenAL= listener calls.
#+name: listener-manage
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    naddListener
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_naddListener
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(env); UNUSED(clazz);
  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  ALCcontext *new = alcCreateContext(Device, NULL);
  addContext(Device, new);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListener3f
 * Signature: (IFFFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListener3f
  (JNIEnv *env, jclass clazz, jint param,
   jfloat v1, jfloat v2, jfloat v3, jlong device, jint contextNum){
  UNUSED(env);UNUSED(clazz);

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)Device->ExtraData;

  ALCcontext *current = alcGetCurrentContext();
  if ((ALuint)contextNum >= data->numContexts){return;}
  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
  alListener3f(param, v1, v2, v3);
  alcMakeContextCurrent(current);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListenerf
 * Signature: (IFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListenerf
(JNIEnv *env, jclass clazz, jint param, jfloat v1, jlong device,
 jint contextNum){

  UNUSED(env);UNUSED(clazz);

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)Device->ExtraData;

  ALCcontext *current = alcGetCurrentContext();
  if ((ALuint)contextNum >= data->numContexts){return;}
  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
  alListenerf(param, v1);
  alcMakeContextCurrent(current);
}
#+end_src

*** Initialization
=initDevice= is called from the Java side after LWJGL has created its
context, and before any calls to =addListener=. It establishes the
LWJGL context as the master context.

=getAudioFormat= is a convenience function that uses JNI to build up a
=javax.sound.sampled.AudioFormat= object from data in the Device. This
way, there is no ambiguity about what the bits created by =step= and
returned by =getSamples= mean.
#+name: jni-init
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ninitDevice
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ninitDevice
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(env);UNUSED(clazz);
  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  init(Device);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetAudioFormat
 * Signature: (J)Ljavax/sound/sampled/AudioFormat;
 */
JNIEXPORT jobject JNICALL Java_com_aurellem_send_AudioSend_ngetAudioFormat
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(clazz);
  jclass AudioFormatClass =
    (*env)->FindClass(env, "javax/sound/sampled/AudioFormat");
  jmethodID AudioFormatConstructor =
    (*env)->GetMethodID(env, AudioFormatClass, "<init>", "(FIIZZ)V");

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  int isSigned;
  switch (Device->FmtType)
    {
    case DevFmtUByte:
    case DevFmtUShort: isSigned = 0; break;
    default : isSigned = 1;
    }
  float frequency = Device->Frequency;
  int bitsPerFrame = (8 * BytesFromDevFmt(Device->FmtType));
  int channels = Device->NumChan;
  jobject format = (*env)->
    NewObject(
              env,AudioFormatClass,AudioFormatConstructor,
              frequency,
              bitsPerFrame,
              channels,
              isSigned,
              0);   // bigEndian = false
  return format;
}
#+end_src

** Boring Device management stuff
This code is more-or-less copied verbatim from the other =OpenAL=
backends. It's the basis for =OpenAL='s primitive object system.
#+name: device-init
#+begin_src C
//////////////////// Device Initialization / Management

static const ALCchar sendDevice[] = "Multiple Audio Send";

static ALCboolean send_open_playback(ALCdevice *device,
                                     const ALCchar *deviceName)
{
  send_data *data;
  // stop any buffering for stdout, so that I can
  // see the printf statements in my terminal immediately
  setbuf(stdout, NULL);

  if(!deviceName)
    deviceName = sendDevice;
  else if(strcmp(deviceName, sendDevice) != 0)
    return ALC_FALSE;
  data = (send_data*)calloc(1, sizeof(*data));
  device->szDeviceName = strdup(deviceName);
  device->ExtraData = data;
  return ALC_TRUE;
}

static void send_close_playback(ALCdevice *device)
{
  send_data *data = (send_data*)device->ExtraData;
  alcMakeContextCurrent(NULL);
  ALuint i;
  // Destroy all slave contexts. LWJGL will take care of
  // its own context.
  for (i = 1; i < data->numContexts; i++){
    context_data *ctxData = data->contexts[i];
    alcDestroyContext(ctxData->ctx);
    free(ctxData->renderBuffer);
    free(ctxData);
  }
  free(data);
  device->ExtraData = NULL;
}

static ALCboolean send_reset_playback(ALCdevice *device)
{
  SetDefaultWFXChannelOrder(device);
  return ALC_TRUE;
}

static void send_stop_playback(ALCdevice *Device){
  UNUSED(Device);
}

static const BackendFuncs send_funcs = {
  send_open_playback,
  send_close_playback,
  send_reset_playback,
  send_stop_playback,
  NULL,
  NULL, /* These would be filled with functions to   */
  NULL, /* handle capturing audio if we were into    */
  NULL, /* that sort of thing...                     */
  NULL,
  NULL
};

ALCboolean alc_send_init(BackendFuncs *func_list){
  *func_list = send_funcs;
  return ALC_TRUE;
}

void alc_send_deinit(void){}

void alc_send_probe(enum DevProbe type)
{
  switch(type)
    {
    case DEVICE_PROBE:
      AppendDeviceList(sendDevice);
      break;
    case ALL_DEVICE_PROBE:
      AppendAllDeviceList(sendDevice);
      break;
    case CAPTURE_DEVICE_PROBE:
      break;
    }
}
#+end_src
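
With these registration hooks in place, the Send backend can be
opened by name like any other =OpenAL= device. A minimal sketch using
only the standard ALC API (in practice LWJGL performs the equivalent
steps, and its context becomes the master):

#+begin_src C
/* Minimal sketch: opening the Send backend by the name it
   registers above, using the standard ALC API. */
#include <stdio.h>
#include "AL/alc.h"

int main(void){
  ALCdevice *device = alcOpenDevice("Multiple Audio Send");
  if (!device){
    fprintf(stderr, "Send device not available\n");
    return 1;
  }
  ALCcontext *ctx = alcCreateContext(device, NULL);
  alcMakeContextCurrent(ctx);   /* this context will become the master */
  /* ... play sources, call the JNI layer to step and read samples ... */
  alcMakeContextCurrent(NULL);
  alcDestroyContext(ctx);
  alcCloseDevice(device);
  return 0;
}
#+end_src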

* The Java interface, =AudioSend=

The Java interface to the Send Device follows naturally from the JNI
definitions. It is included here for completeness. The only thing of
note is the =deviceID=. This is available from LWJGL, but the only
way to get it is via reflection. Unfortunately, there is no other way
to control the Send device than to obtain a pointer to it.

#+include: "../java/src/com/aurellem/send/AudioSend.java" src java :exports code

* Finally, Ears in clojure!

Now that the infrastructure is complete (modulo a few patches to
jMonkeyEngine3 to support accessing this modified version of =OpenAL=
that are not worth discussing), the clojure ear abstraction is rather
simple. Just as there were =SceneProcessors= for vision, there are
now =SoundProcessors= for hearing.

#+include: "../../jmeCapture/src/com/aurellem/capture/audio/SoundProcessor.java" src java


Ears work the same way as vision.

=(hearing! creature)= returns a sequence of hearing functions, one for
each ear. The first time a hearing function is called with the world,
it registers a =SoundProcessor= on the incoming sound data; on every
call thereafter it returns the latest auditory data from that ear.


#+name: ears
#+begin_src clojure
(ns cortex.hearing
  "Simulate the sense of hearing in jMonkeyEngine3. Enables multiple
   listeners at different positions in the same world. Automatically
   reads ear-nodes from specially prepared blender files and
   instantiates them in the world as actual ears."
  {:author "Robert McIntyre"}
  (:use (cortex world util sense))
  (:use clojure.contrib.def)
  (:import java.nio.ByteBuffer)
  (:import org.tritonus.share.sampled.FloatSampleTools)
  (:import (com.aurellem.capture.audio
            SoundProcessor AudioSendRenderer))
  (:import javax.sound.sampled.AudioFormat)
  (:import (com.jme3.scene Spatial Node))
  (:import com.jme3.audio.Listener)
  (:import com.jme3.app.Application)
  (:import com.jme3.scene.control.AbstractControl))

(defn sound-processor
  "Deals with converting ByteBuffers into Vectors of floats so that
   the continuation functions can be defined in terms of immutable
   stuff."
  [continuation]
  (proxy [SoundProcessor] []
    (cleanup [])
    (process
      [#^ByteBuffer audioSamples numSamples #^AudioFormat audioFormat]
      (let [bytes (byte-array numSamples)
            num-floats (/ numSamples (.getFrameSize audioFormat))
            floats (float-array num-floats)]
        (.get audioSamples bytes 0 numSamples)
        (FloatSampleTools/byte2floatInterleaved
         bytes 0 floats 0 num-floats audioFormat)
        (continuation
         (vec floats))))))

(defvar
  ^{:arglists '([creature])}
  ears
  (sense-nodes "ears")
  "Return the children of the creature's \"ears\" node.")

(defn update-listener-velocity!
  "Update the listener's velocity every update loop."
  [#^Spatial obj #^Listener lis]
  (let [old-position (atom (.getLocation lis))]
    (.addControl
     obj
     (proxy [AbstractControl] []
       (controlUpdate [tpf]
         (let [new-position (.getLocation lis)]
           (.setVelocity
            lis
            (.mult (.subtract new-position @old-position)
                   (float (/ tpf))))
           (reset! old-position new-position)))
       (controlRender [_ _])))))

(defn create-listener!
  "Create a Listener centered on the current position of 'ear
   which follows the closest physical node in 'creature and
   sends sound data to 'continuation."
  [#^Application world #^Node creature #^Spatial ear continuation]
  (let [target (closest-node creature ear)
        lis (Listener.)
        audio-renderer (.getAudioRenderer world)
        sp (sound-processor continuation)]
    (.setLocation lis (.getWorldTranslation ear))
    (.setRotation lis (.getWorldRotation ear))
    (bind-sense target lis)
    (update-listener-velocity! target lis)
    (.addListener audio-renderer lis)
    (.registerSoundProcessor audio-renderer lis sp)))

(defn hearing-fn
  "Returns a function which returns auditory sensory data when called
   inside a running simulation."
  [#^Node creature #^Spatial ear]
  (let [hearing-data (atom [])
        register-listener!
        (runonce
         (fn [#^Application world]
           (create-listener!
            world creature ear
            (fn [data]
              (reset! hearing-data (vec data))))))]
    (fn [#^Application world]
      (register-listener! world)
      (let [data @hearing-data
            topology
            (vec (map #(vector % 0) (range 0 (count data))))
            scaled-data
            (vec
             (map
              #(rem (int (* 255 (/ (+ 1 %) 2))) 256)
              data))]
        [topology scaled-data]))))

(defn hearing!
  "Endow the creature in a particular world with the sense of
   hearing. Will return a sequence of functions, one for each ear,
   which when called will return the auditory data from that ear."
  [#^Node creature]
  (for [ear (ears creature)]
    (hearing-fn creature ear)))


#+end_src
Will return a sequence of functions, one for each ear, 1.867 + which when called will return the auditory data from that ear." 1.868 + [#^Node creature] 1.869 + (for [ear (ears creature)] 1.870 + (hearing-fn creature ear))) 1.871 + 1.872 + 1.873 +#+end_src 1.874 + 1.875 +* Example 1.876 + 1.877 +#+name: test-hearing 1.878 +#+begin_src clojure :results silent 1.879 +(ns cortex.test.hearing 1.880 + (:use (cortex world util hearing)) 1.881 + (:import (com.jme3.audio AudioNode Listener)) 1.882 + (:import com.jme3.scene.Node 1.883 + com.jme3.system.AppSettings)) 1.884 + 1.885 +(defn setup-fn [world] 1.886 + (let [listener (Listener.)] 1.887 + (add-ear world listener #(println-repl (nth % 0))))) 1.888 + 1.889 +(defn play-sound [node world value] 1.890 + (if (not value) 1.891 + (do 1.892 + (.playSource (.getAudioRenderer world) node)))) 1.893 + 1.894 +(defn test-basic-hearing [] 1.895 + (let [node1 (AudioNode. (asset-manager) "Sounds/pure.wav" false false)] 1.896 + (world 1.897 + (Node.) 1.898 + {"key-space" (partial play-sound node1)} 1.899 + setup-fn 1.900 + no-op))) 1.901 + 1.902 +(defn test-advanced-hearing 1.903 + "Testing hearing: 1.904 + You should see a blue sphere flying around several 1.905 + cubes. As the sphere approaches each cube, it turns 1.906 + green." 1.907 + [] 1.908 + (doto (com.aurellem.capture.examples.Advanced.) 1.909 + (.setSettings 1.910 + (doto (AppSettings. true) 1.911 + (.setAudioRenderer "Send"))) 1.912 + (.setShowSettings false) 1.913 + (.setPauseOnLostFocus false))) 1.914 + 1.915 +#+end_src 1.916 + 1.917 +This extremely basic program prints out the first sample it encounters 1.918 +at every time stamp. You can see the rendered sound being printed at 1.919 +the REPL. 1.920 + 1.921 + - As a bonus, this method of capturing audio for AI can also be used 1.922 + to capture perfect audio from a jMonkeyEngine application, for use 1.923 + in demos and the like. 1.924 + 1.925 + 1.926 +* COMMENT Code Generation 1.927 + 1.928 +#+begin_src clojure :tangle ../src/cortex/hearing.clj 1.929 +<<ears>> 1.930 +#+end_src 1.931 + 1.932 +#+begin_src clojure :tangle ../src/cortex/test/hearing.clj 1.933 +<<test-hearing>> 1.934 +#+end_src 1.935 + 1.936 +#+begin_src C :tangle ../../audio-send/Alc/backends/send.c 1.937 +<<send-header>> 1.938 +<<send-state>> 1.939 +<<sync-macros>> 1.940 +<<sync-sources>> 1.941 +<<sync-contexts>> 1.942 +<<context-creation>> 1.943 +<<context-switching>> 1.944 +<<main-loop>> 1.945 +<<jni-step>> 1.946 +<<jni-get-samples>> 1.947 +<<listener-manage>> 1.948 +<<jni-init>> 1.949 +<<device-init>> 1.950 +#+end_src 1.951 + 1.952 +