Mercurial > cortex

changeset 166:e4c2cc79a171
renamed ear.org to hearing.org

| author   | Robert McIntyre <rlm@mit.edu>                         |
|----------|-------------------------------------------------------|
| date     | Sat, 04 Feb 2012 03:38:05 -0700                       |
| parents  | 362bc30a3d41                                          |
| children | 9e6a30b8c99a                                          |
| files    | org/ear.org org/hearing.org                           |
| diffstat | 2 files changed, 949 insertions(+), 949 deletions(-)  |
--- a/org/ear.org	Sat Feb 04 03:36:48 2012 -0700
+++ b/org/hearing.org	Sat Feb 04 03:38:05 2012 -0700

#+title: Simulated Sense of Hearing
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Simulating multiple listeners and the sense of hearing in jMonkeyEngine3
#+keywords: simulated hearing, openal, clojure, jMonkeyEngine3, LWJGL, AI
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+BABEL: :exports both :noweb yes :cache no :mkdirp yes

* Hearing

I want to be able to place ears in a similar manner to how I place
the eyes: each ear should occupy a unique spatial position, and
report as output at every tick the F.F.T. of whatever signals are
arriving at that point.

Hearing is one of the more difficult senses to simulate, because there
is less support for obtaining the actual sound data that is processed
by jMonkeyEngine3.

jMonkeyEngine's sound system works as follows:

 - jMonkeyEngine uses the =AppSettings= for the particular application
   to determine what sort of =AudioRenderer= should be used.
 - Although some support is provided for multiple AudioRenderer
   backends, jMonkeyEngine at the time of this writing will either
   pick no AudioRenderer at all, or the =LwjglAudioRenderer=.
 - jMonkeyEngine tries to figure out what sort of system you're
   running and extracts the appropriate native libraries.
 - The =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (LightWeight Java Game
   Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]].
 - =OpenAL= calculates the 3D sound localization and feeds a stream of
   sound to any of various sound output devices with which it knows
   how to communicate.

A consequence of this is that there's no way to access the actual
sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
one /listener/, which normally isn't a problem for games, but becomes
a problem when trying to make multiple AI creatures that can each hear
the world from a different perspective.

To make many AI creatures in jMonkeyEngine that can each hear the
world from their own perspective, it is necessary to go all the way
back to =OpenAL= and implement support for simulated hearing there.

* Extending =OpenAL=
** =OpenAL= Devices

=OpenAL= goes to great lengths to support many different systems, all
with different sound capabilities and interfaces. It accomplishes this
difficult task by providing code for many different sound backends in
pseudo-objects called /Devices/. There's a device for the Linux Open
Sound System and the Advanced Linux Sound Architecture, there's one
for Direct Sound on Windows, and there's even one for Solaris. =OpenAL=
solves the problem of platform independence by providing all these
Devices.

Wrapper libraries such as LWJGL are free to examine the system on
which they are running and then select an appropriate device for that
system.

There are also a few "special" devices that don't interface with any
particular system.
These include the Null Device, which doesn't do
anything, and the Wave Device, which writes whatever sound it receives
to a file, if everything has been set up correctly when configuring
=OpenAL=.

Actual mixing of the sound data happens in the Devices, and they are
the only point in the sound rendering process where this data is
available.

Therefore, in order to support multiple listeners, and get the sound
data in a form that the AIs can use, it is necessary to create a new
Device which supports these features.

** The Send Device
Adding a device to OpenAL is rather tricky -- there are five separate
files in the =OpenAL= source tree that must be modified to do so. I've
documented this process [[./add-new-device.org][here]] for anyone who is interested.


Onward to the actual Device!

Again, my objectives are:

 - Support multiple listeners from jMonkeyEngine3.
 - Get access to the rendered sound data for further processing from
   clojure.

** =send.c=

** Header
#+name: send-header
#+begin_src C
#include "config.h"
#include <stdlib.h>
#include "alMain.h"
#include "AL/al.h"
#include "AL/alc.h"
#include "alSource.h"
#include <jni.h>

//////////////////// Summary

struct send_data;
struct context_data;

static void addContext(ALCdevice *, ALCcontext *);
static void syncContexts(ALCcontext *master, ALCcontext *slave);
static void syncSources(ALsource *master, ALsource *slave,
                        ALCcontext *masterCtx, ALCcontext *slaveCtx);

static void syncSourcei(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSourcef(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSource3f(ALuint master, ALuint slave,
                         ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);

static void swapInContext(ALCdevice *, struct context_data *);
static void saveContext(ALCdevice *, struct context_data *);
static void limitContext(ALCdevice *, ALCcontext *);
static void unLimitContext(ALCdevice *);

static void init(ALCdevice *);
static void renderData(ALCdevice *, int samples);

#define UNUSED(x) (void)(x)
#+end_src

The main idea behind the Send device is to take advantage of the fact
that LWJGL only manages one /context/ when using OpenAL. A /context/
is like a container that holds samples and keeps track of where the
listener is. In order to support multiple listeners, the Send device
identifies the LWJGL context as the master context, and creates any
number of slave contexts to represent additional listeners. Every
time the device renders sound, it synchronizes every source from the
master LWJGL context to the slave contexts. Then, it renders each
context separately, using a different listener for each one. The
rendered sound is made available via JNI to jMonkeyEngine.

To recap, the process is:
 - Set the LWJGL context as "master" in the =init()= method.
 - Create any number of additional contexts via =addContext()=.
 - At every call to =renderData()=, sync the master context with the
   slave contexts via =syncContexts()=.
 - =syncContexts()= calls =syncSources()= to sync all the sources
   which are in the master context.
 - =limitContext()= and =unLimitContext()= make it possible to render
   only one context at a time.

** Necessary State
#+name: send-state
#+begin_src C
//////////////////// State

typedef struct context_data {
    ALfloat ClickRemoval[MAXCHANNELS];
    ALfloat PendingClicks[MAXCHANNELS];
    ALvoid *renderBuffer;
    ALCcontext *ctx;
} context_data;

typedef struct send_data {
    ALuint size;
    context_data **contexts;
    ALuint numContexts;
    ALuint maxContexts;
} send_data;
#+end_src

Switching between contexts is not the normal operation of a Device,
and one of the problems with doing so is that a Device normally keeps
around a few pieces of state, such as the =ClickRemoval= array above,
which will become corrupted if the contexts are not rendered in
parallel. The solution is to create a copy of this normally global
device state for each context, and to copy it back and forth into and
out of the actual device state whenever a context is rendered.

** Synchronization Macros
#+name: sync-macros
#+begin_src C
//////////////////// Context Creation / Synchronization

#define _MAKE_SYNC(NAME, INIT_EXPR, GET_EXPR, SET_EXPR)      \
    void NAME (ALuint sourceID1, ALuint sourceID2,           \
               ALCcontext *ctx1, ALCcontext *ctx2,           \
               ALenum param){                                \
        INIT_EXPR;                                           \
        ALCcontext *current = alcGetCurrentContext();        \
        alcMakeContextCurrent(ctx1);                         \
        GET_EXPR;                                            \
        alcMakeContextCurrent(ctx2);                         \
        SET_EXPR;                                            \
        alcMakeContextCurrent(current);                      \
    }

#define MAKE_SYNC(NAME, TYPE, GET, SET)                      \
    _MAKE_SYNC(NAME,                                         \
               TYPE value,                                   \
               GET(sourceID1, param, &value),                \
               SET(sourceID2, param, value))

#define MAKE_SYNC3(NAME, TYPE, GET, SET)                        \
    _MAKE_SYNC(NAME,                                            \
               TYPE value1; TYPE value2; TYPE value3;,          \
               GET(sourceID1, param, &value1, &value2, &value3),\
               SET(sourceID2, param, value1, value2, value3))

MAKE_SYNC( syncSourcei,  ALint,   alGetSourcei,  alSourcei);
MAKE_SYNC( syncSourcef,  ALfloat, alGetSourcef,  alSourcef);
MAKE_SYNC3(syncSource3i, ALint,   alGetSource3i, alSource3i);
MAKE_SYNC3(syncSource3f, ALfloat, alGetSource3f, alSource3f);

#+end_src

Setting the state of an =OpenAL= source is done with the =alSourcei=,
=alSourcef=, =alSource3i=, and =alSource3f= functions. In order to
completely synchronize two sources, it is necessary to use all of
them. These macros help to condense the otherwise repetitive
synchronization code involving these similar low-level =OpenAL= functions.
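
For concreteness, here is what =MAKE_SYNC(syncSourcef, ALfloat,
alGetSourcef, alSourcef)= expands to after preprocessing (reindented
by hand):

#+begin_src C
// Hand-expanded form of MAKE_SYNC(syncSourcef, ALfloat, alGetSourcef, alSourcef).
// It reads the parameter from the source in ctx1, writes it to the
// source in ctx2, and restores whichever context was current before.
void syncSourcef(ALuint sourceID1, ALuint sourceID2,
                 ALCcontext *ctx1, ALCcontext *ctx2,
                 ALenum param){
    ALfloat value;
    ALCcontext *current = alcGetCurrentContext();
    alcMakeContextCurrent(ctx1);
    alGetSourcef(sourceID1, param, &value);
    alcMakeContextCurrent(ctx2);
    alSourcef(sourceID2, param, value);
    alcMakeContextCurrent(current);
}
#+end_src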

** Source Synchronization
#+name: sync-sources
#+begin_src C
void syncSources(ALsource *masterSource, ALsource *slaveSource,
                 ALCcontext *masterCtx, ALCcontext *slaveCtx){
    ALuint master = masterSource->source;
    ALuint slave = slaveSource->source;
    ALCcontext *current = alcGetCurrentContext();

    syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
    syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
    syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
    syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
    syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
    syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
    syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
    syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
    syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
    syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
    syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
    syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
    syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);

    syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
    syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
    syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);

    syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
    syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);

    alcMakeContextCurrent(masterCtx);
    ALint source_type;
    alGetSourcei(master, AL_SOURCE_TYPE, &source_type);

    // Only static sources are currently synchronized!
    if (AL_STATIC == source_type){
        ALint master_buffer;
        ALint slave_buffer;
        alGetSourcei(master, AL_BUFFER, &master_buffer);
        alcMakeContextCurrent(slaveCtx);
        alGetSourcei(slave, AL_BUFFER, &slave_buffer);
        if (master_buffer != slave_buffer){
            alSourcei(slave, AL_BUFFER, master_buffer);
        }
    }

    // Synchronize the state of the two sources.
    alcMakeContextCurrent(masterCtx);
    ALint masterState;
    ALint slaveState;

    alGetSourcei(master, AL_SOURCE_STATE, &masterState);
    alcMakeContextCurrent(slaveCtx);
    alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);

    if (masterState != slaveState){
        switch (masterState){
        case AL_INITIAL : alSourceRewind(slave); break;
        case AL_PLAYING : alSourcePlay(slave);   break;
        case AL_PAUSED  : alSourcePause(slave);  break;
        case AL_STOPPED : alSourceStop(slave);   break;
        }
    }
    // Restore whatever context was previously active.
    alcMakeContextCurrent(current);
}
#+end_src
This function is long because it has to exhaustively go through all the
possible state that a source can have and make sure that it is the
same between the master and slave sources. I'd like to take this
moment to salute the [[http://connect.creativelabs.com/openal/Documentation/Forms/AllItems.aspx][=OpenAL= Reference Manual]], which provides a very
good description of =OpenAL='s internals.

** Context Synchronization
#+name: sync-contexts
#+begin_src C
void syncContexts(ALCcontext *master, ALCcontext *slave){
    /* If there aren't sufficient sources in slave to mirror
       the sources in master, create them. */
    ALCcontext *current = alcGetCurrentContext();

    UIntMap *masterSourceMap = &(master->SourceMap);
    UIntMap *slaveSourceMap = &(slave->SourceMap);
    ALuint numMasterSources = masterSourceMap->size;
    ALuint numSlaveSources = slaveSourceMap->size;

    alcMakeContextCurrent(slave);
    if (numSlaveSources < numMasterSources){
        ALuint numMissingSources = numMasterSources - numSlaveSources;
        ALuint newSources[numMissingSources];
        alGenSources(numMissingSources, newSources);
    }

    /* Now, slave is guaranteed to have at least as many sources
       as master. Sync each source from master to the corresponding
       source in slave. */
    int i;
    for(i = 0; i < masterSourceMap->size; i++){
        syncSources((ALsource*)masterSourceMap->array[i].value,
                    (ALsource*)slaveSourceMap->array[i].value,
                    master, slave);
    }
    alcMakeContextCurrent(current);
}
#+end_src

Most of the hard work in Context Synchronization is done in
=syncSources()=. The only thing that =syncContexts()= has to worry
about is automatically creating new sources whenever a slave context
does not have the same number of sources as the master context.

** Context Creation
#+name: context-creation
#+begin_src C
static void addContext(ALCdevice *Device, ALCcontext *context){
    send_data *data = (send_data*)Device->ExtraData;
    // expand the array of contexts if necessary
    if (data->numContexts >= data->maxContexts){
        ALuint newMaxContexts = data->maxContexts*2 + 1;
        // the array holds pointers, so allocate sizeof(context_data*)
        data->contexts =
            realloc(data->contexts, newMaxContexts*sizeof(context_data*));
        data->maxContexts = newMaxContexts;
    }
    // create a context_data and add it to the main array
    context_data *ctxData;
    ctxData = (context_data*)calloc(1, sizeof(*ctxData));
    ctxData->renderBuffer =
        malloc(BytesFromDevFmt(Device->FmtType) *
               Device->NumChan * Device->UpdateSize);
    ctxData->ctx = context;

    data->contexts[data->numContexts] = ctxData;
    data->numContexts++;
}
#+end_src

Here, the slave context is created, and its data is stored in the
device-wide =ExtraData= structure. The =renderBuffer= that is created
here is where the rendered sound samples for this slave context will
eventually go.

** Context Switching
#+name: context-switching
#+begin_src C
//////////////////// Context Switching

/* A device brings along with it two pieces of state
 * which have to be swapped in and out with each context.
 */
static void swapInContext(ALCdevice *Device, context_data *ctxData){
    memcpy(Device->ClickRemoval, ctxData->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
    memcpy(Device->PendingClicks, ctxData->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
}

static void saveContext(ALCdevice *Device, context_data *ctxData){
    memcpy(ctxData->ClickRemoval, Device->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
    memcpy(ctxData->PendingClicks, Device->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
}

static ALCcontext **currentContext;
static ALuint currentNumContext;

/* By default, all contexts are rendered at once for each call to aluMixData.
 * This function uses the internals of the ALCdevice struct to temporarily
 * cause aluMixData to only render the chosen context.
 */
static void limitContext(ALCdevice *Device, ALCcontext *ctx){
    currentContext = Device->Contexts;
    currentNumContext = Device->NumContexts;
    Device->Contexts = &ctx;
    Device->NumContexts = 1;
}

static void unLimitContext(ALCdevice *Device){
    Device->Contexts = currentContext;
    Device->NumContexts = currentNumContext;
}
#+end_src

=OpenAL= normally renders all Contexts in parallel, outputting the
whole result to the buffer. It does this by iterating over the
Device->Contexts array and rendering each context to the buffer in
turn. By temporarily setting Device->NumContexts to 1 and adjusting
the Device's context list to put the desired context-to-be-rendered
into position 0, we can trick =OpenAL= into rendering each slave
context separately from all the others.

** Main Device Loop
#+name: main-loop
#+begin_src C
//////////////////// Main Device Loop

/* Establish the LWJGL context as the master context, which will
 * be synchronized to all the slave contexts.
 */
static void init(ALCdevice *Device){
    ALCcontext *masterContext = alcGetCurrentContext();
    addContext(Device, masterContext);
}


static void renderData(ALCdevice *Device, int samples){
    if(!Device->Connected){return;}
    send_data *data = (send_data*)Device->ExtraData;
    ALCcontext *current = alcGetCurrentContext();

    ALuint i;
    // First, sync the master context (contexts[0]) to every slave.
    for (i = 1; i < data->numContexts; i++){
        syncContexts(data->contexts[0]->ctx , data->contexts[i]->ctx);
    }

    if ((ALuint) samples > Device->UpdateSize){
        printf("exceeding internal buffer size; dropping samples\n");
        printf("requested %d; available %u\n", samples, Device->UpdateSize);
        samples = (int) Device->UpdateSize;
    }

    // Then render each context in isolation into its own buffer.
    for (i = 0; i < data->numContexts; i++){
        context_data *ctxData = data->contexts[i];
        ALCcontext *ctx = ctxData->ctx;
        alcMakeContextCurrent(ctx);
        limitContext(Device, ctx);
        swapInContext(Device, ctxData);
        aluMixData(Device, ctxData->renderBuffer, samples);
        saveContext(Device, ctxData);
        unLimitContext(Device);
    }
    alcMakeContextCurrent(current);
}
#+end_src

The main loop synchronizes the master LWJGL context with all the slave
contexts, then walks each context, rendering just that context to its
audio-sample storage buffer.
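
As an illustration of what ends up in those buffers, here is a small
sketch (not part of the device) of how one might read an individual
sample back out of a context's =renderBuffer=. It assumes a 16-bit
signed device format and interleaved channels, which matches the
=BytesFromDevFmt(FmtType) * NumChan * UpdateSize= allocation in
=addContext()=:

#+begin_src C
/* Illustrative sketch only: index into a context's renderBuffer.
 * Assumes Device->FmtType is a 16-bit signed format and that
 * aluMixData writes frames with interleaved channels. */
static ALshort nthSample(context_data *ctxData, ALCdevice *Device,
                         int frame, int channel){
    ALshort *buffer = (ALshort*)ctxData->renderBuffer;
    return buffer[frame * Device->NumChan + channel];
}
#+end_src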

** JNI Methods

At this point, we have the ability to create multiple listeners by
using the master/slave context trick, and the rendered audio data is
waiting patiently in internal buffers, one for each listener. We need
a way to transport this information to Java, and also a way to drive
this device from Java. The following JNI interface code is inspired
by the way LWJGL interfaces with =OpenAL=.

*** step
#+name: jni-step
#+begin_src C
//////////////////// JNI Methods

#include "com_aurellem_send_AudioSend.h"

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nstep
 * Signature: (JI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nstep
(JNIEnv *env, jclass clazz, jlong device, jint samples){
    UNUSED(env);UNUSED(clazz);
    renderData((ALCdevice*)((intptr_t)device), samples);
}
#+end_src
This device, unlike most of the other devices in =OpenAL=, does not
render sound unless asked. This enables the system to slow down or
speed up depending on the needs of the AIs who are using it to
listen. If the device tried to render samples in real-time, a
complicated AI whose mind takes 100 seconds of computer time to
simulate 1 second of AI-time would miss almost all of the sound in
its environment.


*** getSamples
#+name: jni-get-samples
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetSamples
 * Signature: (JLjava/nio/ByteBuffer;III)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ngetSamples
(JNIEnv *env, jclass clazz, jlong device, jobject buffer, jint position,
 jint samples, jint n){
    UNUSED(clazz);

    ALvoid *buffer_address =
        ((ALbyte *)(((char*)(*env)->GetDirectBufferAddress(env, buffer)) + position));
    ALCdevice *recorder = (ALCdevice*) ((intptr_t)device);
    send_data *data = (send_data*)recorder->ExtraData;
    // reject out-of-range context indices (valid range is 0..numContexts-1)
    if ((ALuint)n >= data->numContexts){return;}
    memcpy(buffer_address, data->contexts[n]->renderBuffer,
           BytesFromDevFmt(recorder->FmtType) * recorder->NumChan * samples);
}
#+end_src

This is the transport layer between C and Java that will eventually
allow us to access rendered sound data from clojure.

*** Listener Management

=addListener=, =setNthListenerf=, and =setNthListener3f= are
necessary to change the properties of any listener other than the
master one, since only the listener of the currently active context is
affected by the normal =OpenAL= listener calls.
#+name: listener-manage
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    naddListener
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_naddListener
(JNIEnv *env, jclass clazz, jlong device){
    UNUSED(env); UNUSED(clazz);
    //printf("creating new context via naddListener\n");
    ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
    ALCcontext *new = alcCreateContext(Device, NULL);
    addContext(Device, new);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListener3f
 * Signature: (IFFFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListener3f
(JNIEnv *env, jclass clazz, jint param,
 jfloat v1, jfloat v2, jfloat v3, jlong device, jint contextNum){
    UNUSED(env);UNUSED(clazz);

    ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
    send_data *data = (send_data*)Device->ExtraData;

    ALCcontext *current = alcGetCurrentContext();
    // reject out-of-range context indices
    if ((ALuint)contextNum >= data->numContexts){return;}
    alcMakeContextCurrent(data->contexts[contextNum]->ctx);
    alListener3f(param, v1, v2, v3);
    alcMakeContextCurrent(current);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListenerf
 * Signature: (IFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListenerf
(JNIEnv *env, jclass clazz, jint param, jfloat v1, jlong device,
 jint contextNum){

    UNUSED(env);UNUSED(clazz);

    ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
    send_data *data = (send_data*)Device->ExtraData;

    ALCcontext *current = alcGetCurrentContext();
    // reject out-of-range context indices
    if ((ALuint)contextNum >= data->numContexts){return;}
    alcMakeContextCurrent(data->contexts[contextNum]->ctx);
    alListenerf(param, v1);
    alcMakeContextCurrent(current);
}
#+end_src

*** Initialization
=initDevice= is called from the Java side after LWJGL has created its
context, and before any calls to =addListener=. It establishes the
LWJGL context as the master context.

=getAudioFormat= is a convenience function that uses JNI to build up a
=javax.sound.sampled.AudioFormat= object from data in the Device. This
way, there is no ambiguity about what the bits created by =step= and
returned by =getSamples= mean.
#+name: jni-init
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ninitDevice
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ninitDevice
(JNIEnv *env, jclass clazz, jlong device){
    UNUSED(env);UNUSED(clazz);
    ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
    init(Device);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetAudioFormat
 * Signature: (J)Ljavax/sound/sampled/AudioFormat;
 */
JNIEXPORT jobject JNICALL Java_com_aurellem_send_AudioSend_ngetAudioFormat
(JNIEnv *env, jclass clazz, jlong device){
    UNUSED(clazz);
    jclass AudioFormatClass =
        (*env)->FindClass(env, "javax/sound/sampled/AudioFormat");
    jmethodID AudioFormatConstructor =
        (*env)->GetMethodID(env, AudioFormatClass, "<init>", "(FIIZZ)V");

    ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
    int isSigned;
    switch (Device->FmtType)
    {
    case DevFmtUByte:
    case DevFmtUShort: isSigned = 0; break;
    default : isSigned = 1;
    }
    float frequency = Device->Frequency;
    int bitsPerFrame = (8 * BytesFromDevFmt(Device->FmtType));
    int channels = Device->NumChan;
    jobject format = (*env)->
        NewObject(
                  env,AudioFormatClass,AudioFormatConstructor,
                  frequency,
                  bitsPerFrame,
                  channels,
                  isSigned,
                  0);
    return format;
}
#+end_src

** Boring Device management stuff
This code is more-or-less copied verbatim from the other =OpenAL=
backends. It's the basis for =OpenAL='s primitive object system.
#+name: device-init
#+begin_src C
//////////////////// Device Initialization / Management

static const ALCchar sendDevice[] = "Multiple Audio Send";

static ALCboolean send_open_playback(ALCdevice *device,
                                     const ALCchar *deviceName)
{
    send_data *data;
    // stop any buffering for stdout, so that I can
    // see the printf statements in my terminal immediately
    setbuf(stdout, NULL);

    if(!deviceName)
        deviceName = sendDevice;
    else if(strcmp(deviceName, sendDevice) != 0)
        return ALC_FALSE;
    data = (send_data*)calloc(1, sizeof(*data));
    device->szDeviceName = strdup(deviceName);
    device->ExtraData = data;
    return ALC_TRUE;
}

static void send_close_playback(ALCdevice *device)
{
    send_data *data = (send_data*)device->ExtraData;
    alcMakeContextCurrent(NULL);
    ALuint i;
    // Destroy all slave contexts. LWJGL will take care of
    // its own context.
    for (i = 1; i < data->numContexts; i++){
        context_data *ctxData = data->contexts[i];
        alcDestroyContext(ctxData->ctx);
        free(ctxData->renderBuffer);
        free(ctxData);
    }
    free(data);
    device->ExtraData = NULL;
}

static ALCboolean send_reset_playback(ALCdevice *device)
{
    SetDefaultWFXChannelOrder(device);
    return ALC_TRUE;
}

static void send_stop_playback(ALCdevice *Device){
    UNUSED(Device);
}

static const BackendFuncs send_funcs = {
    send_open_playback,
    send_close_playback,
    send_reset_playback,
    send_stop_playback,
    NULL,
    NULL, /* These would be filled with functions to */
    NULL, /* handle capturing audio if we were into  */
    NULL, /* that sort of thing...                   */
    NULL,
    NULL
};

ALCboolean alc_send_init(BackendFuncs *func_list){
    *func_list = send_funcs;
    return ALC_TRUE;
}

void alc_send_deinit(void){}

void alc_send_probe(enum DevProbe type)
{
    switch(type)
    {
    case DEVICE_PROBE:
        AppendDeviceList(sendDevice);
        break;
    case ALL_DEVICE_PROBE:
        AppendAllDeviceList(sendDevice);
        break;
    case CAPTURE_DEVICE_PROBE:
        break;
    }
}
#+end_src

* The Java interface, =AudioSend=

The Java interface to the Send Device follows naturally from the JNI
definitions. It is included here for completeness. The only thing of
note here is the =deviceID=. This is available from LWJGL, but the
only way to get it is via reflection. Unfortunately, there is no
other way to control the Send device than to obtain a pointer to it.

#+include: "../java/src/com/aurellem/send/AudioSend.java" src java :exports code

* Finally, Ears in clojure!

Now that the infrastructure is complete (modulo a few patches to
jMonkeyEngine3 to support accessing this modified version of =OpenAL=
that are not worth discussing), the clojure ear abstraction is rather
simple. Just as there were =SceneProcessors= for vision, there are
now =SoundProcessors= for hearing.

#+include: "../../jmeCapture/src/com/aurellem/capture/audio/SoundProcessor.java" src java


Ears work the same way as vision. =(hearing! creature)= returns a
sequence of hearing functions, one for each ear. On its first
invocation, each function registers a =SoundProcessor= with the world
that performs Fourier transforms on the incoming sound data, making
it available to that hearing function.


#+name: ears
#+begin_src clojure
(ns cortex.hearing
  "Simulate the sense of hearing in jMonkeyEngine3. Enables multiple
   listeners at different positions in the same world. Automatically
   reads ear-nodes from specially prepared blender files and
   instantiates them in the world as actual ears."
  {:author "Robert McIntyre"}
  (:use (cortex world util sense))
  (:use clojure.contrib.def)
  (:import java.nio.ByteBuffer)
  (:import org.tritonus.share.sampled.FloatSampleTools)
  (:import (com.aurellem.capture.audio
            SoundProcessor AudioSendRenderer))
  (:import javax.sound.sampled.AudioFormat)
  (:import (com.jme3.scene Spatial Node))
  (:import com.jme3.audio.Listener)
  (:import com.jme3.app.Application)
  (:import com.jme3.scene.control.AbstractControl))

(defn sound-processor
  "Deals with converting ByteBuffers into Vectors of floats so that
   the continuation functions can be defined in terms of immutable
   stuff."
  [continuation]
  (proxy [SoundProcessor] []
    (cleanup [])
    (process
      [#^ByteBuffer audioSamples numSamples #^AudioFormat audioFormat]
      (let [bytes (byte-array numSamples)
            num-floats (/ numSamples (.getFrameSize audioFormat))
            floats (float-array num-floats)]
        (.get audioSamples bytes 0 numSamples)
        (FloatSampleTools/byte2floatInterleaved
         bytes 0 floats 0 num-floats audioFormat)
        (continuation
         (vec floats))))))

(defvar
  ^{:arglists '([creature])}
  ears
  (sense-nodes "ears")
  "Return the children of the creature's \"ears\" node.")

(defn update-listener-velocity!
  "Update the listener's velocity every update loop."
  [#^Spatial obj #^Listener lis]
  (let [old-position (atom (.getLocation lis))]
    (.addControl
     obj
     (proxy [AbstractControl] []
       (controlUpdate [tpf]
         (let [new-position (.getLocation lis)]
           (.setVelocity
            lis
            (.mult (.subtract new-position @old-position)
                   (float (/ tpf))))
           (reset! old-position new-position)))
       (controlRender [_ _])))))

(defn create-listener!
  "Create a Listener centered on the current position of 'ear
   which follows the closest physical node in 'creature and
   sends sound data to 'continuation."
  [#^Application world #^Node creature #^Spatial ear continuation]
  (let [target (closest-node creature ear)
        lis (Listener.)
        audio-renderer (.getAudioRenderer world)
        sp (sound-processor continuation)]
    (.setLocation lis (.getWorldTranslation ear))
    (.setRotation lis (.getWorldRotation ear))
    (bind-sense target lis)
    (update-listener-velocity! target lis)
    (.addListener audio-renderer lis)
    (.registerSoundProcessor audio-renderer lis sp)))

(defn hearing-fn
  "Returns a function which returns auditory sensory data when called
   inside a running simulation."
  [#^Node creature #^Spatial ear]
  (let [hearing-data (atom [])
        register-listener!
        (runonce
         (fn [#^Application world]
           (create-listener!
            world creature ear
            (fn [data]
              (reset! hearing-data (vec data))))))]
    (fn [#^Application world]
      (register-listener! world)
      (let [data @hearing-data
            topology
            (vec (map #(vector % 0) (range 0 (count data))))
            scaled-data
            (vec
             (map
              #(rem (int (* 255 (/ (+ 1 %) 2))) 256)
              data))]
        [topology scaled-data]))))

(defn hearing!
  "Endow the creature in a particular world with the sense of
   hearing.
   Will return a sequence of functions, one for each ear,
   which when called will return the auditory data from that ear."
  [#^Node creature]
  (for [ear (ears creature)]
    (hearing-fn creature ear)))


#+end_src

* Example

#+name: test-hearing
#+begin_src clojure :results silent
(ns cortex.test.hearing
  (:use (cortex world util hearing))
  (:import (com.jme3.audio AudioNode Listener))
  (:import com.jme3.scene.Node
           com.jme3.system.AppSettings))

(defn setup-fn [world]
  (let [listener (Listener.)]
    (add-ear world listener #(println-repl (nth % 0)))))

(defn play-sound [node world value]
  (if (not value)
    (do
      (.playSource (.getAudioRenderer world) node))))

(defn test-basic-hearing []
  (let [node1 (AudioNode. (asset-manager) "Sounds/pure.wav" false false)]
    (world
     (Node.)
     {"key-space" (partial play-sound node1)}
     setup-fn
     no-op)))

(defn test-advanced-hearing
  "Testing hearing:
   You should see a blue sphere flying around several
   cubes.  As the sphere approaches each cube, it turns
   green."
  []
  (doto (com.aurellem.capture.examples.Advanced.)
    (.setSettings
     (doto (AppSettings. true)
       (.setAudioRenderer "Send")))
    (.setShowSettings false)
    (.setPauseOnLostFocus false)))

#+end_src

This extremely basic program prints out the first sample it encounters
at every time stamp. You can see the rendered sound being printed at
the REPL.

 - As a bonus, this method of capturing audio for AI can also be used
   to capture perfect audio from a jMonkeyEngine application, for use
   in demos and the like.


* COMMENT Code Generation

#+begin_src clojure :tangle ../src/cortex/hearing.clj
<<ears>>
#+end_src

#+begin_src clojure :tangle ../src/cortex/test/hearing.clj
<<test-hearing>>
#+end_src

#+begin_src C :tangle ../../audio-send/Alc/backends/send.c
<<send-header>>
<<send-state>>
<<sync-macros>>
<<sync-sources>>
<<sync-contexts>>
<<context-creation>>
<<context-switching>>
<<main-loop>>
<<jni-step>>
<<jni-get-samples>>
<<listener-manage>>
<<jni-init>>
<<device-init>>
#+end_src
2.1 --- /dev/null Thu Jan 01 00:00:00 1970 +0000 2.2 +++ b/org/hearing.org Sat Feb 04 03:38:05 2012 -0700 2.3 @@ -0,0 +1,949 @@ 2.4 +#+title: Simulated Sense of Hearing 2.5 +#+author: Robert McIntyre 2.6 +#+email: rlm@mit.edu 2.7 +#+description: Simulating multiple listeners and the sense of hearing in jMonkeyEngine3 2.8 +#+keywords: simulated hearing, openal, clojure, jMonkeyEngine3, LWJGL, AI 2.9 +#+SETUPFILE: ../../aurellem/org/setup.org 2.10 +#+INCLUDE: ../../aurellem/org/level-0.org 2.11 +#+BABEL: :exports both :noweb yes :cache no :mkdirp yes 2.12 + 2.13 +* Hearing 2.14 + 2.15 +I want to be able to place ears in a similar manner to how I place 2.16 +the eyes. I want to be able to place ears in a unique spatial 2.17 +position, and receive as output at every tick the F.F.T. of whatever 2.18 +signals are happening at that point. 2.19 + 2.20 +Hearing is one of the more difficult senses to simulate, because there 2.21 +is less support for obtaining the actual sound data that is processed 2.22 +by jMonkeyEngine3. 2.23 + 2.24 +jMonkeyEngine's sound system works as follows: 2.25 + 2.26 + - jMonkeyEngine uses the =AppSettings= for the particular application 2.27 + to determine what sort of =AudioRenderer= should be used. 2.28 + - although some support is provided for multiple AudioRendering 2.29 + backends, jMonkeyEngine at the time of this writing will either 2.30 + pick no AudioRenderer at all, or the =LwjglAudioRenderer= 2.31 + - jMonkeyEngine tries to figure out what sort of system you're 2.32 + running and extracts the appropriate native libraries. 2.33 + - the =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (LightWeight Java Game 2.34 + Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]] 2.35 + - =OpenAL= calculates the 3D sound localization and feeds a stream of 2.36 + sound to any of various sound output devices with which it knows 2.37 + how to communicate. 2.38 + 2.39 +A consequence of this is that there's no way to access the actual 2.40 +sound data produced by =OpenAL=. Even worse, =OpenAL= only supports 2.41 +one /listener/, which normally isn't a problem for games, but becomes 2.42 +a problem when trying to make multiple AI creatures that can each hear 2.43 +the world from a different perspective. 2.44 + 2.45 +To make many AI creatures in jMonkeyEngine that can each hear the 2.46 +world from their own perspective, it is necessary to go all the way 2.47 +back to =OpenAL= and implement support for simulated hearing there. 2.48 + 2.49 +* Extending =OpenAL= 2.50 +** =OpenAL= Devices 2.51 + 2.52 +=OpenAL= goes to great lengths to support many different systems, all 2.53 +with different sound capabilities and interfaces. It accomplishes this 2.54 +difficult task by providing code for many different sound backends in 2.55 +pseudo-objects called /Devices/. There's a device for the Linux Open 2.56 +Sound System and the Advanced Linux Sound Architecture, there's one 2.57 +for Direct Sound on Windows, there's even one for Solaris. =OpenAL= 2.58 +solves the problem of platform independence by providing all these 2.59 +Devices. 2.60 + 2.61 +Wrapper libraries such as LWJGL are free to examine the system on 2.62 +which they are running and then select an appropriate device for that 2.63 +system. 2.64 + 2.65 +There are also a few "special" devices that don't interface with any 2.66 +particular system. 
These include the Null Device, which doesn't do 2.67 +anything, and the Wave Device, which writes whatever sound it receives 2.68 +to a file, if everything has been set up correctly when configuring 2.69 +=OpenAL=. 2.70 + 2.71 +Actual mixing of the sound data happens in the Devices, and they are 2.72 +the only point in the sound rendering process where this data is 2.73 +available. 2.74 + 2.75 +Therefore, in order to support multiple listeners, and get the sound 2.76 +data in a form that the AIs can use, it is necessary to create a new 2.77 +Device, which supports this features. 2.78 + 2.79 +** The Send Device 2.80 +Adding a device to OpenAL is rather tricky -- there are five separate 2.81 +files in the =OpenAL= source tree that must be modified to do so. I've 2.82 +documented this process [[./add-new-device.org][here]] for anyone who is interested. 2.83 + 2.84 + 2.85 +Onward to that actual Device! 2.86 + 2.87 +again, my objectives are: 2.88 + 2.89 + - Support Multiple Listeners from jMonkeyEngine3 2.90 + - Get access to the rendered sound data for further processing from 2.91 + clojure. 2.92 + 2.93 +** =send.c= 2.94 + 2.95 +** Header 2.96 +#+name: send-header 2.97 +#+begin_src C 2.98 +#include "config.h" 2.99 +#include <stdlib.h> 2.100 +#include "alMain.h" 2.101 +#include "AL/al.h" 2.102 +#include "AL/alc.h" 2.103 +#include "alSource.h" 2.104 +#include <jni.h> 2.105 + 2.106 +//////////////////// Summary 2.107 + 2.108 +struct send_data; 2.109 +struct context_data; 2.110 + 2.111 +static void addContext(ALCdevice *, ALCcontext *); 2.112 +static void syncContexts(ALCcontext *master, ALCcontext *slave); 2.113 +static void syncSources(ALsource *master, ALsource *slave, 2.114 + ALCcontext *masterCtx, ALCcontext *slaveCtx); 2.115 + 2.116 +static void syncSourcei(ALuint master, ALuint slave, 2.117 + ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param); 2.118 +static void syncSourcef(ALuint master, ALuint slave, 2.119 + ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param); 2.120 +static void syncSource3f(ALuint master, ALuint slave, 2.121 + ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param); 2.122 + 2.123 +static void swapInContext(ALCdevice *, struct context_data *); 2.124 +static void saveContext(ALCdevice *, struct context_data *); 2.125 +static void limitContext(ALCdevice *, ALCcontext *); 2.126 +static void unLimitContext(ALCdevice *); 2.127 + 2.128 +static void init(ALCdevice *); 2.129 +static void renderData(ALCdevice *, int samples); 2.130 + 2.131 +#define UNUSED(x) (void)(x) 2.132 +#+end_src 2.133 + 2.134 +The main idea behind the Send device is to take advantage of the fact 2.135 +that LWJGL only manages one /context/ when using OpenAL. A /context/ 2.136 +is like a container that holds samples and keeps track of where the 2.137 +listener is. In order to support multiple listeners, the Send device 2.138 +identifies the LWJGL context as the master context, and creates any 2.139 +number of slave contexts to represent additional listeners. Every 2.140 +time the device renders sound, it synchronizes every source from the 2.141 +master LWJGL context to the slave contexts. Then, it renders each 2.142 +context separately, using a different listener for each one. The 2.143 +rendered sound is made available via JNI to jMonkeyEngine. 2.144 + 2.145 +To recap, the process is: 2.146 + - Set the LWJGL context as "master" in the =init()= method. 
2.147 + - Create any number of additional contexts via =addContext()= 2.148 + - At every call to =renderData()= sync the master context with the 2.149 + slave contexts with =syncContexts()= 2.150 + - =syncContexts()= calls =syncSources()= to sync all the sources 2.151 + which are in the master context. 2.152 + - =limitContext()= and =unLimitContext()= make it possible to render 2.153 + only one context at a time. 2.154 + 2.155 +** Necessary State 2.156 +#+name: send-state 2.157 +#+begin_src C 2.158 +//////////////////// State 2.159 + 2.160 +typedef struct context_data { 2.161 + ALfloat ClickRemoval[MAXCHANNELS]; 2.162 + ALfloat PendingClicks[MAXCHANNELS]; 2.163 + ALvoid *renderBuffer; 2.164 + ALCcontext *ctx; 2.165 +} context_data; 2.166 + 2.167 +typedef struct send_data { 2.168 + ALuint size; 2.169 + context_data **contexts; 2.170 + ALuint numContexts; 2.171 + ALuint maxContexts; 2.172 +} send_data; 2.173 +#+end_src 2.174 + 2.175 +Switching between contexts is not the normal operation of a Device, 2.176 +and one of the problems with doing so is that a Device normally keeps 2.177 +around a few pieces of state such as the =ClickRemoval= array above 2.178 +which will become corrupted if the contexts are not done in 2.179 +parallel. The solution is to create a copy of this normally global 2.180 +device state for each context, and copy it back and forth into and out 2.181 +of the actual device state whenever a context is rendered. 2.182 + 2.183 +** Synchronization Macros 2.184 +#+name: sync-macros 2.185 +#+begin_src C 2.186 +//////////////////// Context Creation / Synchronization 2.187 + 2.188 +#define _MAKE_SYNC(NAME, INIT_EXPR, GET_EXPR, SET_EXPR) \ 2.189 + void NAME (ALuint sourceID1, ALuint sourceID2, \ 2.190 + ALCcontext *ctx1, ALCcontext *ctx2, \ 2.191 + ALenum param){ \ 2.192 + INIT_EXPR; \ 2.193 + ALCcontext *current = alcGetCurrentContext(); \ 2.194 + alcMakeContextCurrent(ctx1); \ 2.195 + GET_EXPR; \ 2.196 + alcMakeContextCurrent(ctx2); \ 2.197 + SET_EXPR; \ 2.198 + alcMakeContextCurrent(current); \ 2.199 + } 2.200 + 2.201 +#define MAKE_SYNC(NAME, TYPE, GET, SET) \ 2.202 + _MAKE_SYNC(NAME, \ 2.203 + TYPE value, \ 2.204 + GET(sourceID1, param, &value), \ 2.205 + SET(sourceID2, param, value)) 2.206 + 2.207 +#define MAKE_SYNC3(NAME, TYPE, GET, SET) \ 2.208 + _MAKE_SYNC(NAME, \ 2.209 + TYPE value1; TYPE value2; TYPE value3;, \ 2.210 + GET(sourceID1, param, &value1, &value2, &value3), \ 2.211 + SET(sourceID2, param, value1, value2, value3)) 2.212 + 2.213 +MAKE_SYNC( syncSourcei, ALint, alGetSourcei, alSourcei); 2.214 +MAKE_SYNC( syncSourcef, ALfloat, alGetSourcef, alSourcef); 2.215 +MAKE_SYNC3(syncSource3i, ALint, alGetSource3i, alSource3i); 2.216 +MAKE_SYNC3(syncSource3f, ALfloat, alGetSource3f, alSource3f); 2.217 + 2.218 +#+end_src 2.219 + 2.220 +Setting the state of an =OpenAL= source is done with the =alSourcei=, 2.221 +=alSourcef=, =alSource3i=, and =alSource3f= functions. In order to 2.222 +completely synchronize two sources, it is necessary to use all of 2.223 +them. These macros help to condense the otherwise repetitive 2.224 +synchronization code involving these similar low-level =OpenAL= functions. 
2.225 + 2.226 +** Source Synchronization 2.227 +#+name: sync-sources 2.228 +#+begin_src C 2.229 +void syncSources(ALsource *masterSource, ALsource *slaveSource, 2.230 + ALCcontext *masterCtx, ALCcontext *slaveCtx){ 2.231 + ALuint master = masterSource->source; 2.232 + ALuint slave = slaveSource->source; 2.233 + ALCcontext *current = alcGetCurrentContext(); 2.234 + 2.235 + syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH); 2.236 + syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN); 2.237 + syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE); 2.238 + syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR); 2.239 + syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE); 2.240 + syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN); 2.241 + syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN); 2.242 + syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN); 2.243 + syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE); 2.244 + syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE); 2.245 + syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET); 2.246 + syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET); 2.247 + syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET); 2.248 + 2.249 + syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION); 2.250 + syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY); 2.251 + syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION); 2.252 + 2.253 + syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE); 2.254 + syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING); 2.255 + 2.256 + alcMakeContextCurrent(masterCtx); 2.257 + ALint source_type; 2.258 + alGetSourcei(master, AL_SOURCE_TYPE, &source_type); 2.259 + 2.260 + // Only static sources are currently synchronized! 2.261 + if (AL_STATIC == source_type){ 2.262 + ALint master_buffer; 2.263 + ALint slave_buffer; 2.264 + alGetSourcei(master, AL_BUFFER, &master_buffer); 2.265 + alcMakeContextCurrent(slaveCtx); 2.266 + alGetSourcei(slave, AL_BUFFER, &slave_buffer); 2.267 + if (master_buffer != slave_buffer){ 2.268 + alSourcei(slave, AL_BUFFER, master_buffer); 2.269 + } 2.270 + } 2.271 + 2.272 + // Synchronize the state of the two sources. 2.273 + alcMakeContextCurrent(masterCtx); 2.274 + ALint masterState; 2.275 + ALint slaveState; 2.276 + 2.277 + alGetSourcei(master, AL_SOURCE_STATE, &masterState); 2.278 + alcMakeContextCurrent(slaveCtx); 2.279 + alGetSourcei(slave, AL_SOURCE_STATE, &slaveState); 2.280 + 2.281 + if (masterState != slaveState){ 2.282 + switch (masterState){ 2.283 + case AL_INITIAL : alSourceRewind(slave); break; 2.284 + case AL_PLAYING : alSourcePlay(slave); break; 2.285 + case AL_PAUSED : alSourcePause(slave); break; 2.286 + case AL_STOPPED : alSourceStop(slave); break; 2.287 + } 2.288 + } 2.289 + // Restore whatever context was previously active. 2.290 + alcMakeContextCurrent(current); 2.291 +} 2.292 +#+end_src 2.293 +This function is long because it has to exhaustively go through all the 2.294 +possible state that a source can have and make sure that it is the 2.295 +same between the master and slave sources. I'd like to take this 2.296 +moment to salute the [[http://connect.creativelabs.com/openal/Documentation/Forms/AllItems.aspx][=OpenAL= Reference Manual]], which provides a very 2.297 +good description of =OpenAL='s internals. 
2.298 + 2.299 +** Context Synchronization 2.300 +#+name: sync-contexts 2.301 +#+begin_src C 2.302 +void syncContexts(ALCcontext *master, ALCcontext *slave){ 2.303 + /* If there aren't sufficient sources in slave to mirror 2.304 + the sources in master, create them. */ 2.305 + ALCcontext *current = alcGetCurrentContext(); 2.306 + 2.307 + UIntMap *masterSourceMap = &(master->SourceMap); 2.308 + UIntMap *slaveSourceMap = &(slave->SourceMap); 2.309 + ALuint numMasterSources = masterSourceMap->size; 2.310 + ALuint numSlaveSources = slaveSourceMap->size; 2.311 + 2.312 + alcMakeContextCurrent(slave); 2.313 + if (numSlaveSources < numMasterSources){ 2.314 + ALuint numMissingSources = numMasterSources - numSlaveSources; 2.315 + ALuint newSources[numMissingSources]; 2.316 + alGenSources(numMissingSources, newSources); 2.317 + } 2.318 + 2.319 + /* Now, slave is guaranteed to have at least as many sources 2.320 + as master. Sync each source from master to the corresponding 2.321 + source in slave. */ 2.322 + int i; 2.323 + for(i = 0; i < masterSourceMap->size; i++){ 2.324 + syncSources((ALsource*)masterSourceMap->array[i].value, 2.325 + (ALsource*)slaveSourceMap->array[i].value, 2.326 + master, slave); 2.327 + } 2.328 + alcMakeContextCurrent(current); 2.329 +} 2.330 +#+end_src 2.331 + 2.332 +Most of the hard work in Context Synchronization is done in 2.333 +=syncSources()=. The only thing that =syncContexts()= has to worry 2.334 +about is automatically creating new sources whenever a slave context 2.335 +does not have the same number of sources as the master context. 2.336 + 2.337 +** Context Creation 2.338 +#+name: context-creation 2.339 +#+begin_src C 2.340 +static void addContext(ALCdevice *Device, ALCcontext *context){ 2.341 + send_data *data = (send_data*)Device->ExtraData; 2.342 + // expand array if necessary 2.343 + if (data->numContexts >= data->maxContexts){ 2.344 + ALuint newMaxContexts = data->maxContexts*2 + 1; 2.345 + data->contexts = realloc(data->contexts, newMaxContexts*sizeof(context_data)); 2.346 + data->maxContexts = newMaxContexts; 2.347 + } 2.348 + // create context_data and add it to the main array 2.349 + context_data *ctxData; 2.350 + ctxData = (context_data*)calloc(1, sizeof(*ctxData)); 2.351 + ctxData->renderBuffer = 2.352 + malloc(BytesFromDevFmt(Device->FmtType) * 2.353 + Device->NumChan * Device->UpdateSize); 2.354 + ctxData->ctx = context; 2.355 + 2.356 + data->contexts[data->numContexts] = ctxData; 2.357 + data->numContexts++; 2.358 +} 2.359 +#+end_src 2.360 + 2.361 +Here, the slave context is created, and it's data is stored in the 2.362 +device-wide =ExtraData= structure. The =renderBuffer= that is created 2.363 +here is where the rendered sound samples for this slave context will 2.364 +eventually go. 2.365 + 2.366 +** Context Switching 2.367 +#+name: context-switching 2.368 +#+begin_src C 2.369 +//////////////////// Context Switching 2.370 + 2.371 +/* A device brings along with it two pieces of state 2.372 + * which have to be swapped in and out with each context. 
2.373 + */ 2.374 +static void swapInContext(ALCdevice *Device, context_data *ctxData){ 2.375 + memcpy(Device->ClickRemoval, ctxData->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS); 2.376 + memcpy(Device->PendingClicks, ctxData->PendingClicks, sizeof(ALfloat)*MAXCHANNELS); 2.377 +} 2.378 + 2.379 +static void saveContext(ALCdevice *Device, context_data *ctxData){ 2.380 + memcpy(ctxData->ClickRemoval, Device->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS); 2.381 + memcpy(ctxData->PendingClicks, Device->PendingClicks, sizeof(ALfloat)*MAXCHANNELS); 2.382 +} 2.383 + 2.384 +static ALCcontext **currentContext; 2.385 +static ALuint currentNumContext; 2.386 + 2.387 +/* By default, all contexts are rendered at once for each call to aluMixData. 2.388 + * This function uses the internals of the ALCdevice struct to temporally 2.389 + * cause aluMixData to only render the chosen context. 2.390 + */ 2.391 +static void limitContext(ALCdevice *Device, ALCcontext *ctx){ 2.392 + currentContext = Device->Contexts; 2.393 + currentNumContext = Device->NumContexts; 2.394 + Device->Contexts = &ctx; 2.395 + Device->NumContexts = 1; 2.396 +} 2.397 + 2.398 +static void unLimitContext(ALCdevice *Device){ 2.399 + Device->Contexts = currentContext; 2.400 + Device->NumContexts = currentNumContext; 2.401 +} 2.402 +#+end_src 2.403 + 2.404 +=OpenAL= normally renders all Contexts in parallel, outputting the 2.405 +whole result to the buffer. It does this by iterating over the 2.406 +Device->Contexts array and rendering each context to the buffer in 2.407 +turn. By temporally setting Device->NumContexts to 1 and adjusting 2.408 +the Device's context list to put the desired context-to-be-rendered 2.409 +into position 0, we can get trick =OpenAL= into rendering each slave 2.410 +context separate from all the others. 2.411 + 2.412 +** Main Device Loop 2.413 +#+name: main-loop 2.414 +#+begin_src C 2.415 +//////////////////// Main Device Loop 2.416 + 2.417 +/* Establish the LWJGL context as the master context, which will 2.418 + * be synchronized to all the slave contexts 2.419 + */ 2.420 +static void init(ALCdevice *Device){ 2.421 + ALCcontext *masterContext = alcGetCurrentContext(); 2.422 + addContext(Device, masterContext); 2.423 +} 2.424 + 2.425 + 2.426 +static void renderData(ALCdevice *Device, int samples){ 2.427 + if(!Device->Connected){return;} 2.428 + send_data *data = (send_data*)Device->ExtraData; 2.429 + ALCcontext *current = alcGetCurrentContext(); 2.430 + 2.431 + ALuint i; 2.432 + for (i = 1; i < data->numContexts; i++){ 2.433 + syncContexts(data->contexts[0]->ctx , data->contexts[i]->ctx); 2.434 + } 2.435 + 2.436 + if ((ALuint) samples > Device->UpdateSize){ 2.437 + printf("exceeding internal buffer size; dropping samples\n"); 2.438 + printf("requested %d; available %d\n", samples, Device->UpdateSize); 2.439 + samples = (int) Device->UpdateSize; 2.440 + } 2.441 + 2.442 + for (i = 0; i < data->numContexts; i++){ 2.443 + context_data *ctxData = data->contexts[i]; 2.444 + ALCcontext *ctx = ctxData->ctx; 2.445 + alcMakeContextCurrent(ctx); 2.446 + limitContext(Device, ctx); 2.447 + swapInContext(Device, ctxData); 2.448 + aluMixData(Device, ctxData->renderBuffer, samples); 2.449 + saveContext(Device, ctxData); 2.450 + unLimitContext(Device); 2.451 + } 2.452 + alcMakeContextCurrent(current); 2.453 +} 2.454 +#+end_src 2.455 + 2.456 +The main loop synchronizes the master LWJGL context with all the slave 2.457 +contexts, then walks each context, rendering just that context to it's 2.458 +audio-sample storage buffer. 

** JNI Methods

At this point, we have the ability to create multiple listeners by
using the master/slave context trick, and the rendered audio data is
waiting patiently in internal buffers, one for each listener. We need
a way to transport this information to Java, and also a way to drive
this device from Java. The following JNI interface code is inspired
by the way LWJGL interfaces with =OpenAL=.

*** step
#+name: jni-step
#+begin_src C
//////////////////// JNI Methods

#include "com_aurellem_send_AudioSend.h"

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nstep
 * Signature: (JI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nstep
(JNIEnv *env, jclass clazz, jlong device, jint samples){
    UNUSED(env);UNUSED(clazz);
    renderData((ALCdevice*)((intptr_t)device), samples);
}
#+end_src

This device, unlike most of the other devices in =OpenAL=, does not
render sound unless asked. This enables the system to slow down or
speed up depending on the needs of the AIs who are using it to
listen. If the device tried to render samples in real-time, a
complicated AI whose mind takes 100 seconds of computer time to
simulate 1 second of AI-time would miss almost all of the sound in
its environment.

*** getSamples
#+name: jni-get-samples
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetSamples
 * Signature: (JLjava/nio/ByteBuffer;III)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ngetSamples
(JNIEnv *env, jclass clazz, jlong device, jobject buffer, jint position,
 jint samples, jint n){
    UNUSED(clazz);

    ALvoid *buffer_address =
        ((ALbyte *)(((char*)(*env)->GetDirectBufferAddress(env, buffer)) + position));
    ALCdevice *recorder = (ALCdevice*) ((intptr_t)device);
    send_data *data = (send_data*)recorder->ExtraData;
    // reject out-of-range listener indices (valid: 0 .. numContexts-1)
    if ((ALuint)n >= data->numContexts){return;}
    memcpy(buffer_address, data->contexts[n]->renderBuffer,
           BytesFromDevFmt(recorder->FmtType) * recorder->NumChan * samples);
}
#+end_src

This is the transport layer between C and Java that will eventually
allow us to access rendered sound data from clojure.
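
To make the round trip concrete, here is a hypothetical sketch of how
the Java side might drive these natives. The names =deviceID=,
=samplesPerTick=, and =SendDriver= are stand-ins of mine, and I am
assuming the natives (or equivalent public wrappers) are visible to
the caller; the actual =AudioSend= class is included in full later in
this document.

#+begin_src java
import java.nio.ByteBuffer;
import com.aurellem.send.AudioSend;

// Hypothetical driver: render one tick of audio for every listener,
// then copy out each listener's freshly rendered samples in turn.
// `scratch` must be a direct ByteBuffer large enough to hold
// samplesPerTick frames in the device's format.
public class SendDriver {
    public static void stepAndCollect(long deviceID, int samplesPerTick,
                                      int numListeners, ByteBuffer scratch) {
        // advance the simulated device by samplesPerTick frames
        AudioSend.nstep(deviceID, samplesPerTick);
        for (int n = 0; n < numListeners; n++) {
            scratch.clear();
            // copy listener n's samples into the buffer at offset 0
            AudioSend.ngetSamples(deviceID, scratch, 0, samplesPerTick, n);
            // ... hand the samples off to listener n's AI here ...
        }
    }
}
#+end_src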

*** Listener Management

=addListener=, =setNthListenerf=, and =setNthListener3f= are
necessary to change the properties of any listener other than the
master one, since only the listener of the currently active context
is affected by the normal =OpenAL= listener calls.

#+name: listener-manage
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    naddListener
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_naddListener
(JNIEnv *env, jclass clazz, jlong device){
    UNUSED(env); UNUSED(clazz);
    //printf("creating new context via naddListener\n");
    ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
    ALCcontext *new = alcCreateContext(Device, NULL);
    addContext(Device, new);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListener3f
 * Signature: (IFFFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListener3f
(JNIEnv *env, jclass clazz, jint param,
 jfloat v1, jfloat v2, jfloat v3, jlong device, jint contextNum){
    UNUSED(env);UNUSED(clazz);

    ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
    send_data *data = (send_data*)Device->ExtraData;

    ALCcontext *current = alcGetCurrentContext();
    if ((ALuint)contextNum >= data->numContexts){return;}
    alcMakeContextCurrent(data->contexts[contextNum]->ctx);
    alListener3f(param, v1, v2, v3);
    alcMakeContextCurrent(current);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListenerf
 * Signature: (IFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListenerf
(JNIEnv *env, jclass clazz, jint param, jfloat v1, jlong device,
 jint contextNum){
    UNUSED(env);UNUSED(clazz);

    ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
    send_data *data = (send_data*)Device->ExtraData;

    ALCcontext *current = alcGetCurrentContext();
    if ((ALuint)contextNum >= data->numContexts){return;}
    alcMakeContextCurrent(data->contexts[contextNum]->ctx);
    alListenerf(param, v1);
    alcMakeContextCurrent(current);
}
#+end_src
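
As a usage sketch (not part of the device itself), moving the first
slave listener from Java might look like the following. I am assuming
the natives are reachable from the caller and using LWJGL's =AL10=
constants for the parameter names; =deviceID= is the pointer obtained
by reflection, as described below.

#+begin_src java
import org.lwjgl.openal.AL10;
import com.aurellem.send.AudioSend;

// Hypothetical sketch: position listener 1 at (0, 2, 0) and halve its
// gain. Listener 0 is the master LWJGL listener, which is still
// controlled through the usual alListener* calls.
public class ListenerExample {
    public static void placeSecondListener(long deviceID) {
        AudioSend.nsetNthListener3f(AL10.AL_POSITION, 0f, 2f, 0f, deviceID, 1);
        AudioSend.nsetNthListenerf(AL10.AL_GAIN, 0.5f, deviceID, 1);
    }
}
#+end_src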

*** Initialization

=initDevice= is called from the Java side after LWJGL has created its
context, and before any calls to =addListener=. It establishes the
LWJGL context as the master context.

=getAudioFormat= is a convenience function that uses JNI to build up a
=javax.sound.sampled.AudioFormat= object from data in the Device. This
way, there is no ambiguity about what the bits created by =step= and
returned by =getSamples= mean.

#+name: jni-init
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ninitDevice
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ninitDevice
(JNIEnv *env, jclass clazz, jlong device){
    UNUSED(env);UNUSED(clazz);
    ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
    init(Device);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetAudioFormat
 * Signature: (J)Ljavax/sound/sampled/AudioFormat;
 */
JNIEXPORT jobject JNICALL Java_com_aurellem_send_AudioSend_ngetAudioFormat
(JNIEnv *env, jclass clazz, jlong device){
    UNUSED(clazz);
    jclass AudioFormatClass =
        (*env)->FindClass(env, "javax/sound/sampled/AudioFormat");
    jmethodID AudioFormatConstructor =
        (*env)->GetMethodID(env, AudioFormatClass, "<init>", "(FIIZZ)V");

    ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
    int isSigned;
    switch (Device->FmtType)
    {
    case DevFmtUByte:
    case DevFmtUShort: isSigned = 0; break;
    default: isSigned = 1;
    }
    float frequency = Device->Frequency;
    int bitsPerSample = (8 * BytesFromDevFmt(Device->FmtType));
    int channels = Device->NumChan;
    jobject format = (*env)->
        NewObject(
            env, AudioFormatClass, AudioFormatConstructor,
            frequency,
            bitsPerSample,
            channels,
            isSigned,
            0);
    return format;
}
#+end_src

** Boring Device management stuff

This code is more-or-less copied verbatim from the other =OpenAL=
backends. It's the basis for =OpenAL='s primitive object system.

#+name: device-init
#+begin_src C
//////////////////// Device Initialization / Management

static const ALCchar sendDevice[] = "Multiple Audio Send";

static ALCboolean send_open_playback(ALCdevice *device,
                                     const ALCchar *deviceName)
{
    send_data *data;
    // stop any buffering for stdout, so that I can
    // see the printf statements in my terminal immediately
    setbuf(stdout, NULL);

    if(!deviceName)
        deviceName = sendDevice;
    else if(strcmp(deviceName, sendDevice) != 0)
        return ALC_FALSE;
    data = (send_data*)calloc(1, sizeof(*data));
    device->szDeviceName = strdup(deviceName);
    device->ExtraData = data;
    return ALC_TRUE;
}

static void send_close_playback(ALCdevice *device)
{
    send_data *data = (send_data*)device->ExtraData;
    alcMakeContextCurrent(NULL);
    ALuint i;
    // Destroy all slave contexts. LWJGL will take care of
    // its own context.
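    // (data->contexts[0] is the master context belonging to LWJGL,
    //  which is why the loop below starts at index 1.)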
    for (i = 1; i < data->numContexts; i++){
        context_data *ctxData = data->contexts[i];
        alcDestroyContext(ctxData->ctx);
        free(ctxData->renderBuffer);
        free(ctxData);
    }
    free(data);
    device->ExtraData = NULL;
}

static ALCboolean send_reset_playback(ALCdevice *device)
{
    SetDefaultWFXChannelOrder(device);
    return ALC_TRUE;
}

static void send_stop_playback(ALCdevice *Device){
    UNUSED(Device);
}

static const BackendFuncs send_funcs = {
    send_open_playback,
    send_close_playback,
    send_reset_playback,
    send_stop_playback,
    NULL,
    NULL, /* These would be filled with functions to */
    NULL, /* handle capturing audio if we were into  */
    NULL, /* that sort of thing...                   */
    NULL,
    NULL
};

ALCboolean alc_send_init(BackendFuncs *func_list){
    *func_list = send_funcs;
    return ALC_TRUE;
}

void alc_send_deinit(void){}

void alc_send_probe(enum DevProbe type)
{
    switch(type)
    {
    case DEVICE_PROBE:
        AppendDeviceList(sendDevice);
        break;
    case ALL_DEVICE_PROBE:
        AppendAllDeviceList(sendDevice);
        break;
    case CAPTURE_DEVICE_PROBE:
        break;
    }
}
#+end_src

* The Java interface, =AudioSend=

The Java interface to the Send Device follows naturally from the JNI
definitions. It is included here for completeness. The only thing of
note is the =deviceID=. This pointer is available from LWJGL, but the
only way to get it is through reflection. Unfortunately, there is no
way to control the Send device other than by obtaining a pointer to
it.
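
For illustration, the reflective lookup might look something like the
following sketch. I am assuming here that LWJGL's =ALCdevice= wrapper
stores the native pointer in a private =long= field named =device=;
if your LWJGL version names the field differently, adjust accordingly.

#+begin_src java
import java.lang.reflect.Field;
import org.lwjgl.openal.AL;
import org.lwjgl.openal.ALCdevice;

// Hypothetical sketch: pry the native device pointer out of LWJGL.
// Assumes ALCdevice keeps its handle in a private field named "device".
public class DeviceID {
    public static long deviceID() throws Exception {
        ALCdevice alcDevice = AL.getDevice();
        Field pointer = ALCdevice.class.getDeclaredField("device");
        pointer.setAccessible(true);
        return (Long) pointer.get(alcDevice);
    }
}
#+end_src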

#+include: "../java/src/com/aurellem/send/AudioSend.java" src java :exports code

* Finally, Ears in clojure!

Now that the infrastructure is complete (modulo a few patches to
jMonkeyEngine3 to support accessing this modified version of =OpenAL=,
which are not worth discussing), the clojure ear abstraction is rather
simple. Just as there were =SceneProcessors= for vision, there are
now =SoundProcessors= for hearing.

#+include: "../../jmeCapture/src/com/aurellem/capture/audio/SoundProcessor.java" src java

Ears work the same way as vision. =(hearing! creature)= returns a
sequence of sensor functions, one for each ear. The first time a
sensor function is called with a world, it registers a
=SoundProcessor= which captures the incoming sound data and makes it
available to that sensor function from then on.

#+name: ears
#+begin_src clojure
(ns cortex.hearing
  "Simulate the sense of hearing in jMonkeyEngine3. Enables multiple
   listeners at different positions in the same world. Automatically
   reads ear-nodes from specially prepared blender files and
   instantiates them in the world as actual ears."
  {:author "Robert McIntyre"}
  (:use (cortex world util sense))
  (:use clojure.contrib.def)
  (:import java.nio.ByteBuffer)
  (:import org.tritonus.share.sampled.FloatSampleTools)
  (:import (com.aurellem.capture.audio
            SoundProcessor AudioSendRenderer))
  (:import javax.sound.sampled.AudioFormat)
  (:import (com.jme3.scene Spatial Node))
  (:import com.jme3.audio.Listener)
  (:import com.jme3.app.Application)
  (:import com.jme3.scene.control.AbstractControl))

(defn sound-processor
  "Deals with converting ByteBuffers into Vectors of floats so that
   the continuation functions can be defined in terms of immutable
   stuff."
  [continuation]
  (proxy [SoundProcessor] []
    (cleanup [])
    (process
      [#^ByteBuffer audioSamples numSamples #^AudioFormat audioFormat]
      (let [bytes (byte-array numSamples)
            num-floats (/ numSamples (.getFrameSize audioFormat))
            floats (float-array num-floats)]
        (.get audioSamples bytes 0 numSamples)
        (FloatSampleTools/byte2floatInterleaved
         bytes 0 floats 0 num-floats audioFormat)
        (continuation
         (vec floats))))))

(defvar
  ^{:arglists '([creature])}
  ears
  (sense-nodes "ears")
  "Return the children of the creature's \"ears\" node.")

(defn update-listener-velocity!
  "Update the listener's velocity every update loop."
  [#^Spatial obj #^Listener lis]
  (let [old-position (atom (.getLocation lis))]
    (.addControl
     obj
     (proxy [AbstractControl] []
       (controlUpdate [tpf]
         (let [new-position (.getLocation lis)]
           (.setVelocity
            lis
            ;; velocity = displacement / time
            (.mult (.subtract new-position @old-position)
                   (float (/ tpf))))
           (reset! old-position new-position)))
       (controlRender [_ _])))))

(defn create-listener!
  "Create a Listener centered on the current position of 'ear
   which follows the closest physical node in 'creature and
   sends sound data to 'continuation."
  [#^Application world #^Node creature #^Spatial ear continuation]
  (let [target (closest-node creature ear)
        lis (Listener.)
        audio-renderer (.getAudioRenderer world)
        sp (sound-processor continuation)]
    (.setLocation lis (.getWorldTranslation ear))
    (.setRotation lis (.getWorldRotation ear))
    (bind-sense target lis)
    (update-listener-velocity! target lis)
    (.addListener audio-renderer lis)
    (.registerSoundProcessor audio-renderer lis sp)))

(defn hearing-fn
  "Return a function which returns auditory sensory data when called
   inside a running simulation."
  [#^Node creature #^Spatial ear]
  (let [hearing-data (atom [])
        register-listener!
        (runonce
         (fn [#^Application world]
           (create-listener!
            world creature ear
            (fn [data]
              (reset! hearing-data (vec data))))))]
    (fn [#^Application world]
      (register-listener! world)
      (let [data @hearing-data
            topology
            (vec (map #(vector % 0) (range 0 (count data))))
            scaled-data
            (vec
             (map
              ;; map each sample from [-1.0, 1.0] to an int in [0, 255]
              #(rem (int (* 255 (/ (+ 1 %) 2))) 256)
              data))]
        [topology scaled-data]))))

(defn hearing!
  "Endow the creature in a particular world with the sense of
   hearing. Returns a sequence of functions, one for each ear, which
   when called will return the auditory data from that ear."
  [#^Node creature]
  (for [ear (ears creature)]
    (hearing-fn creature ear)))
#+end_src
Will return a sequence of functions, one for each ear, 2.867 + which when called will return the auditory data from that ear." 2.868 + [#^Node creature] 2.869 + (for [ear (ears creature)] 2.870 + (hearing-fn creature ear))) 2.871 + 2.872 + 2.873 +#+end_src 2.874 + 2.875 +* Example 2.876 + 2.877 +#+name: test-hearing 2.878 +#+begin_src clojure :results silent 2.879 +(ns cortex.test.hearing 2.880 + (:use (cortex world util hearing)) 2.881 + (:import (com.jme3.audio AudioNode Listener)) 2.882 + (:import com.jme3.scene.Node 2.883 + com.jme3.system.AppSettings)) 2.884 + 2.885 +(defn setup-fn [world] 2.886 + (let [listener (Listener.)] 2.887 + (add-ear world listener #(println-repl (nth % 0))))) 2.888 + 2.889 +(defn play-sound [node world value] 2.890 + (if (not value) 2.891 + (do 2.892 + (.playSource (.getAudioRenderer world) node)))) 2.893 + 2.894 +(defn test-basic-hearing [] 2.895 + (let [node1 (AudioNode. (asset-manager) "Sounds/pure.wav" false false)] 2.896 + (world 2.897 + (Node.) 2.898 + {"key-space" (partial play-sound node1)} 2.899 + setup-fn 2.900 + no-op))) 2.901 + 2.902 +(defn test-advanced-hearing 2.903 + "Testing hearing: 2.904 + You should see a blue sphere flying around several 2.905 + cubes. As the sphere approaches each cube, it turns 2.906 + green." 2.907 + [] 2.908 + (doto (com.aurellem.capture.examples.Advanced.) 2.909 + (.setSettings 2.910 + (doto (AppSettings. true) 2.911 + (.setAudioRenderer "Send"))) 2.912 + (.setShowSettings false) 2.913 + (.setPauseOnLostFocus false))) 2.914 + 2.915 +#+end_src 2.916 + 2.917 +This extremely basic program prints out the first sample it encounters 2.918 +at every time stamp. You can see the rendered sound being printed at 2.919 +the REPL. 2.920 + 2.921 + - As a bonus, this method of capturing audio for AI can also be used 2.922 + to capture perfect audio from a jMonkeyEngine application, for use 2.923 + in demos and the like. 2.924 + 2.925 + 2.926 +* COMMENT Code Generation 2.927 + 2.928 +#+begin_src clojure :tangle ../src/cortex/hearing.clj 2.929 +<<ears>> 2.930 +#+end_src 2.931 + 2.932 +#+begin_src clojure :tangle ../src/cortex/test/hearing.clj 2.933 +<<test-hearing>> 2.934 +#+end_src 2.935 + 2.936 +#+begin_src C :tangle ../../audio-send/Alc/backends/send.c 2.937 +<<send-header>> 2.938 +<<send-state>> 2.939 +<<sync-macros>> 2.940 +<<sync-sources>> 2.941 +<<sync-contexts>> 2.942 +<<context-creation>> 2.943 +<<context-switching>> 2.944 +<<main-loop>> 2.945 +<<jni-step>> 2.946 +<<jni-get-samples>> 2.947 +<<listener-manage>> 2.948 +<<jni-init>> 2.949 +<<device-init>> 2.950 +#+end_src 2.951 + 2.952 +