#+title: Simulated Sense of Hearing
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Simulating multiple listeners and the sense of hearing in jMonkeyEngine3
#+keywords: simulated hearing, openal, clojure, jMonkeyEngine3, LWJGL, AI
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+BABEL: :exports both :noweb yes :cache no :mkdirp yes

* Hearing

At the end of this post I will have simulated ears that work the same
way as the simulated eyes in the last post. I will be able to place
any number of ear-nodes in a blender file, and they will bind to the
closest physical object and follow it as it moves around. Each ear
will provide access to the sound data it picks up between each frame.

Hearing is one of the more difficult senses to simulate, because there
is less support for obtaining the actual sound data that is processed
by jMonkeyEngine3. There is no "split-screen" support for rendering
sound from different points of view, and there is no way to directly
access the rendered sound data.

** Brief Description of jMonkeyEngine's Sound System

jMonkeyEngine's sound system works as follows:

- jMonkeyEngine uses the =AppSettings= for the particular application
  to determine what sort of =AudioRenderer= should be used.
- Although some support is provided for multiple =AudioRenderer=
  backends, jMonkeyEngine at the time of this writing will either
  pick no =AudioRenderer= at all, or the =LwjglAudioRenderer=.
- jMonkeyEngine tries to figure out what sort of system you're
  running and extracts the appropriate native libraries.
- The =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (Lightweight Java Game
  Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]].
- =OpenAL= renders the 3D sound and feeds the rendered sound directly
  to any of various sound output devices with which it knows how to
  communicate.

A consequence of this is that there's no way to access the actual
sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
one /listener/ (it renders sound data from only one perspective),
which normally isn't a problem for games, but becomes a problem when
trying to make multiple AI creatures that can each hear the world from
a different perspective.

To make many AI creatures in jMonkeyEngine that can each hear the
world from their own perspective, or to make a single creature with
many ears, it is necessary to go all the way back to =OpenAL= and
implement support for simulated hearing there.

* Extending =OpenAL=
** =OpenAL= Devices

=OpenAL= goes to great lengths to support many different systems, all
with different sound capabilities and interfaces. It accomplishes this
difficult task by providing code for many different sound backends in
pseudo-objects called /Devices/. There's a device for the Linux Open
Sound System and the Advanced Linux Sound Architecture, there's one
for Direct Sound on Windows, and there's even one for Solaris. =OpenAL=
solves the problem of platform independence by providing all these
Devices.

Wrapper libraries such as LWJGL are free to examine the system on
which they are running and then select an appropriate device for that
system.
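To make the selection mechanism concrete, here is a minimal sketch
(not part of the Send device code) of how any client opens a device
through the standard ALC API. Passing =NULL= asks for the default
device; passing an explicit name, such as the "Multiple Audio Send"
device defined later in this post, selects a particular backend:

#+begin_src C
#include <stdio.h>
#include <AL/al.h>
#include <AL/alc.h>

int main(void){
  /* NULL requests the default device; an explicit device name
     selects a specific backend instead. */
  ALCdevice *device = alcOpenDevice(NULL);
  if (!device){ fprintf(stderr, "no audio device\n"); return 1; }

  /* Every device reports the name it was opened under. */
  printf("opened: %s\n", alcGetString(device, ALC_DEVICE_SPECIFIER));

  ALCcontext *context = alcCreateContext(device, NULL);
  alcMakeContextCurrent(context);
  /* ... generate sources and play sounds here ... */
  alcMakeContextCurrent(NULL);
  alcDestroyContext(context);
  alcCloseDevice(device);
  return 0;
}
#+end_src

LWJGL performs essentially these steps on jMonkeyEngine's behalf.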
There are also a few "special" devices that don't interface with any
particular system. These include the Null Device, which doesn't do
anything, and the Wave Device, which writes whatever sound it receives
to a file, if everything has been set up correctly when configuring
=OpenAL=.

Actual mixing of the sound data happens in the Devices, and they are
the only point in the sound rendering process where this data is
available.

Therefore, in order to support multiple listeners, and to get the sound
data in a form that the AIs can use, it is necessary to create a new
Device which supports these features.

** The Send Device
Adding a device to =OpenAL= is rather tricky -- there are five separate
files in the =OpenAL= source tree that must be modified to do so. I've
documented this process [[../../audio-send/html/add-new-device.html][here]] for anyone who is interested.

Again, my objectives are:

- Support multiple listeners from jMonkeyEngine3.
- Get access to the rendered sound data for further processing from
  clojure.

I named it the "Multiple Audio Send" Device, or =Send= Device for
short, since it sends audio data back to the calling application like
an Aux-Send cable on a mixing board.

Onward to the actual Device!

** =send.c=

** Header
#+name: send-header
#+begin_src C
#include "config.h"
#include <stdlib.h>
#include "alMain.h"
#include "AL/al.h"
#include "AL/alc.h"
#include "alSource.h"
#include <jni.h>

//////////////////// Summary

struct send_data;
struct context_data;

static void addContext(ALCdevice *, ALCcontext *);
static void syncContexts(ALCcontext *master, ALCcontext *slave);
static void syncSources(ALsource *master, ALsource *slave,
                        ALCcontext *masterCtx, ALCcontext *slaveCtx);

static void syncSourcei(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSourcef(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSource3f(ALuint master, ALuint slave,
                         ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);

static void swapInContext(ALCdevice *, struct context_data *);
static void saveContext(ALCdevice *, struct context_data *);
static void limitContext(ALCdevice *, ALCcontext *);
static void unLimitContext(ALCdevice *);

static void init(ALCdevice *);
static void renderData(ALCdevice *, int samples);

#define UNUSED(x) (void)(x)
#+end_src

The main idea behind the Send device is to take advantage of the fact
that LWJGL only manages one /context/ when using OpenAL. A /context/
is like a container that holds samples and keeps track of where the
listener is. In order to support multiple listeners, the Send device
identifies the LWJGL context as the master context, and creates any
number of slave contexts to represent additional listeners. Every
time the device renders sound, it synchronizes every source from the
master LWJGL context to the slave contexts. Then, it renders each
context separately, using a different listener for each one. The
rendered sound is made available via JNI to jMonkeyEngine.
To recap, the process is:
- Set the LWJGL context as "master" in the =init()= method.
- Create any number of additional contexts via =addContext()=.
- At every call to =renderData()=, sync the master context with the
  slave contexts via =syncContexts()=.
- =syncContexts()= calls =syncSources()= to sync all the sources
  which are in the master context.
- =limitContext()= and =unLimitContext()= make it possible to render
  only one context at a time.

** Necessary State
#+name: send-state
#+begin_src C
//////////////////// State

typedef struct context_data {
  ALfloat ClickRemoval[MAXCHANNELS];
  ALfloat PendingClicks[MAXCHANNELS];
  ALvoid *renderBuffer;
  ALCcontext *ctx;
} context_data;

typedef struct send_data {
  ALuint size;
  context_data **contexts;
  ALuint numContexts;
  ALuint maxContexts;
} send_data;
#+end_src

Switching between contexts is not the normal operation of a Device,
and one of the problems with doing so is that a Device normally keeps
around a few pieces of state, such as the =ClickRemoval= array above,
which will become corrupted if the contexts are not rendered in
parallel. The solution is to create a copy of this normally global
device state for each context, and to copy it back and forth into and
out of the actual device state whenever a context is rendered.

** Synchronization Macros
#+name: sync-macros
#+begin_src C
//////////////////// Context Creation / Synchronization

#define _MAKE_SYNC(NAME, INIT_EXPR, GET_EXPR, SET_EXPR)  \
  void NAME (ALuint sourceID1, ALuint sourceID2,         \
             ALCcontext *ctx1, ALCcontext *ctx2,         \
             ALenum param){                              \
    INIT_EXPR;                                           \
    ALCcontext *current = alcGetCurrentContext();        \
    alcMakeContextCurrent(ctx1);                         \
    GET_EXPR;                                            \
    alcMakeContextCurrent(ctx2);                         \
    SET_EXPR;                                            \
    alcMakeContextCurrent(current);                      \
  }

#define MAKE_SYNC(NAME, TYPE, GET, SET)           \
  _MAKE_SYNC(NAME,                                \
             TYPE value,                          \
             GET(sourceID1, param, &value),       \
             SET(sourceID2, param, value))

#define MAKE_SYNC3(NAME, TYPE, GET, SET)                        \
  _MAKE_SYNC(NAME,                                              \
             TYPE value1; TYPE value2; TYPE value3;,            \
             GET(sourceID1, param, &value1, &value2, &value3),  \
             SET(sourceID2, param, value1, value2, value3))

MAKE_SYNC( syncSourcei,  ALint,   alGetSourcei,  alSourcei);
MAKE_SYNC( syncSourcef,  ALfloat, alGetSourcef,  alSourcef);
MAKE_SYNC3(syncSource3i, ALint,   alGetSource3i, alSource3i);
MAKE_SYNC3(syncSource3f, ALfloat, alGetSource3f, alSource3f);
#+end_src

Setting the state of an =OpenAL= source is done with the =alSourcei=,
=alSourcef=, =alSource3i=, and =alSource3f= functions. In order to
completely synchronize two sources, it is necessary to use all of
them. These macros help to condense the otherwise repetitive
synchronization code involving these similar low-level =OpenAL=
functions.
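To see what the macros actually produce, expanding
=MAKE_SYNC(syncSourcef, ALfloat, alGetSourcef, alSourcef)= by hand
yields roughly the following function, which reads a property from a
source in one context and writes it to the corresponding source in
another:

#+begin_src C
void syncSourcef (ALuint sourceID1, ALuint sourceID2,
                  ALCcontext *ctx1, ALCcontext *ctx2,
                  ALenum param){
  ALfloat value;
  ALCcontext *current = alcGetCurrentContext();
  alcMakeContextCurrent(ctx1);
  alGetSourcef(sourceID1, param, &value);   // read from the master source
  alcMakeContextCurrent(ctx2);
  alSourcef(sourceID2, param, value);       // write to the slave source
  alcMakeContextCurrent(current);           // restore the previous context
}
#+end_src

The other three definitions differ only in the property type and the
getter/setter pair that is used.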
** Source Synchronization
#+name: sync-sources
#+begin_src C
void syncSources(ALsource *masterSource, ALsource *slaveSource,
                 ALCcontext *masterCtx, ALCcontext *slaveCtx){
  ALuint master = masterSource->source;
  ALuint slave = slaveSource->source;
  ALCcontext *current = alcGetCurrentContext();

  syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);

  syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
  syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
  syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);

  syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
  syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);

  alcMakeContextCurrent(masterCtx);
  ALint source_type;
  alGetSourcei(master, AL_SOURCE_TYPE, &source_type);

  // Only static sources are currently synchronized!
  if (AL_STATIC == source_type){
    ALint master_buffer;
    ALint slave_buffer;
    alGetSourcei(master, AL_BUFFER, &master_buffer);
    alcMakeContextCurrent(slaveCtx);
    alGetSourcei(slave, AL_BUFFER, &slave_buffer);
    if (master_buffer != slave_buffer){
      alSourcei(slave, AL_BUFFER, master_buffer);
    }
  }

  // Synchronize the state of the two sources.
  alcMakeContextCurrent(masterCtx);
  ALint masterState;
  ALint slaveState;

  alGetSourcei(master, AL_SOURCE_STATE, &masterState);
  alcMakeContextCurrent(slaveCtx);
  alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);

  if (masterState != slaveState){
    switch (masterState){
    case AL_INITIAL : alSourceRewind(slave); break;
    case AL_PLAYING : alSourcePlay(slave);   break;
    case AL_PAUSED  : alSourcePause(slave);  break;
    case AL_STOPPED : alSourceStop(slave);   break;
    }
  }
  // Restore whatever context was previously active.
  alcMakeContextCurrent(current);
}
#+end_src

This function is long because it has to exhaustively go through all the
possible state that a source can have and make sure that it is the
same between the master and slave sources. Note that only static
sources are synchronized: streaming sources manage a queue of buffers
via =alSourceQueueBuffers=, which cannot be mirrored by simply copying
the =AL_BUFFER= property. I'd like to take this moment to salute the
[[http://connect.creativelabs.com/openal/Documentation/Forms/AllItems.aspx][=OpenAL= Reference Manual]], which provides a very good description of
=OpenAL='s internals.
** Context Synchronization
#+name: sync-contexts
#+begin_src C
void syncContexts(ALCcontext *master, ALCcontext *slave){
  /* If there aren't sufficient sources in slave to mirror
     the sources in master, create them. */
  ALCcontext *current = alcGetCurrentContext();

  UIntMap *masterSourceMap = &(master->SourceMap);
  UIntMap *slaveSourceMap = &(slave->SourceMap);
  ALuint numMasterSources = masterSourceMap->size;
  ALuint numSlaveSources = slaveSourceMap->size;

  alcMakeContextCurrent(slave);
  if (numSlaveSources < numMasterSources){
    ALuint numMissingSources = numMasterSources - numSlaveSources;
    ALuint newSources[numMissingSources];
    alGenSources(numMissingSources, newSources);
  }

  /* Now, slave is guaranteed to have at least as many sources
     as master. Sync each source from master to the corresponding
     source in slave. */
  int i;
  for(i = 0; i < masterSourceMap->size; i++){
    syncSources((ALsource*)masterSourceMap->array[i].value,
                (ALsource*)slaveSourceMap->array[i].value,
                master, slave);
  }
  alcMakeContextCurrent(current);
}
#+end_src

Most of the hard work in Context Synchronization is done in
=syncSources()=. The only thing that =syncContexts()= has to worry
about is automatically creating new sources whenever a slave context
does not have the same number of sources as the master context.

** Context Creation
#+name: context-creation
#+begin_src C
static void addContext(ALCdevice *Device, ALCcontext *context){
  send_data *data = (send_data*)Device->ExtraData;
  // expand the array of contexts if necessary
  if (data->numContexts >= data->maxContexts){
    ALuint newMaxContexts = data->maxContexts*2 + 1;
    data->contexts =
      realloc(data->contexts, newMaxContexts*sizeof(context_data*));
    data->maxContexts = newMaxContexts;
  }
  // create a context_data and add it to the main array
  context_data *ctxData;
  ctxData = (context_data*)calloc(1, sizeof(*ctxData));
  ctxData->renderBuffer =
    malloc(BytesFromDevFmt(Device->FmtType) *
           Device->NumChan * Device->UpdateSize);
  ctxData->ctx = context;

  data->contexts[data->numContexts] = ctxData;
  data->numContexts++;
}
#+end_src

Here, the new context is registered, and its data is stored in the
device-wide =ExtraData= structure. The =renderBuffer= that is created
here is where the rendered sound samples for this slave context will
eventually go.
** Context Switching
#+name: context-switching
#+begin_src C
//////////////////// Context Switching

/* A device brings along with it two pieces of state
 * which have to be swapped in and out with each context.
 */
static void swapInContext(ALCdevice *Device, context_data *ctxData){
  memcpy(Device->ClickRemoval, ctxData->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
  memcpy(Device->PendingClicks, ctxData->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
}

static void saveContext(ALCdevice *Device, context_data *ctxData){
  memcpy(ctxData->ClickRemoval, Device->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
  memcpy(ctxData->PendingClicks, Device->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
}

static ALCcontext **currentContext;
static ALuint currentNumContext;

/* By default, all contexts are rendered at once for each call to aluMixData.
 * This function uses the internals of the ALCdevice struct to temporarily
 * cause aluMixData to only render the chosen context.
 */
static void limitContext(ALCdevice *Device, ALCcontext *ctx){
  currentContext = Device->Contexts;
  currentNumContext = Device->NumContexts;
  Device->Contexts = &ctx;
  Device->NumContexts = 1;
}

static void unLimitContext(ALCdevice *Device){
  Device->Contexts = currentContext;
  Device->NumContexts = currentNumContext;
}
#+end_src

=OpenAL= normally renders all contexts in parallel, outputting the
whole result to the buffer. It does this by iterating over the
=Device->Contexts= array and rendering each context to the buffer in
turn. By temporarily setting =Device->NumContexts= to 1 and adjusting
the Device's context list to put the desired context-to-be-rendered
into position 0, we can trick =OpenAL= into rendering each context
separately from all the others.

** Main Device Loop
#+name: main-loop
#+begin_src C
//////////////////// Main Device Loop

/* Establish the LWJGL context as the master context, which will
 * be synchronized to all the slave contexts.
 */
static void init(ALCdevice *Device){
  ALCcontext *masterContext = alcGetCurrentContext();
  addContext(Device, masterContext);
}

static void renderData(ALCdevice *Device, int samples){
  if(!Device->Connected){return;}
  send_data *data = (send_data*)Device->ExtraData;
  ALCcontext *current = alcGetCurrentContext();

  ALuint i;
  for (i = 1; i < data->numContexts; i++){
    syncContexts(data->contexts[0]->ctx, data->contexts[i]->ctx);
  }

  if ((ALuint) samples > Device->UpdateSize){
    printf("exceeding internal buffer size; dropping samples\n");
    printf("requested %d; available %d\n", samples, (int)Device->UpdateSize);
    samples = (int) Device->UpdateSize;
  }

  for (i = 0; i < data->numContexts; i++){
    context_data *ctxData = data->contexts[i];
    ALCcontext *ctx = ctxData->ctx;
    alcMakeContextCurrent(ctx);
    limitContext(Device, ctx);
    swapInContext(Device, ctxData);
    aluMixData(Device, ctxData->renderBuffer, samples);
    saveContext(Device, ctxData);
    unLimitContext(Device);
  }
  alcMakeContextCurrent(current);
}
#+end_src

The main loop synchronizes the master LWJGL context with all the slave
contexts, then iterates through each context, rendering just that
context to its audio-sample storage buffer.

** JNI Methods

At this point, we have the ability to create multiple listeners by
using the master/slave context trick, and the rendered audio data is
waiting patiently in internal buffers, one for each listener. We need
a way to transport this information to Java, and also a way to drive
this device from Java. The following JNI interface code is inspired
by the LWJGL JNI interface to =OpenAL=.

*** Stepping the Device
#+name: jni-step
#+begin_src C
//////////////////// JNI Methods

#include "com_aurellem_send_AudioSend.h"

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nstep
 * Signature: (JI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nstep
(JNIEnv *env, jclass clazz, jlong device, jint samples){
  UNUSED(env); UNUSED(clazz);
  renderData((ALCdevice*)((intptr_t)device), samples);
}
#+end_src

This device, unlike most of the other devices in =OpenAL=, does not
render sound unless asked. This enables the system to slow down or
speed up depending on the needs of the AIs who are using it to
listen. If the device tried to render samples in real-time, a
complicated AI whose mind takes 100 seconds of computer time to
simulate 1 second of AI-time would miss almost all of the sound in
its environment.
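To make the timing concrete: a simulation stepped at 60 frames per
second against a 44.1 kHz device needs 44100 / 60 = 735 samples per
step. A hypothetical driver loop (the function and its names are
illustrative, not part of =send.c=) might look like this:

#+begin_src C
/* Hypothetical driver: render audio in lock-step with the simulation
 * clock, one tick at a time, no matter how long each tick takes in
 * real (wall-clock) time. */
static void driveSimulation(ALCdevice *device, int ticks){
  int samplesPerTick = 44100 / 60;   /* = 735 samples per 1/60 s tick */
  int i;
  for (i = 0; i < ticks; i++){
    /* ... advance the AI/physics simulation by one tick here ... */
    renderData(device, samplesPerTick);  /* exactly one tick of audio */
  }
}
#+end_src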
*** Device->Java Data Transport
#+name: jni-get-samples
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetSamples
 * Signature: (JLjava/nio/ByteBuffer;III)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ngetSamples
(JNIEnv *env, jclass clazz, jlong device, jobject buffer, jint position,
 jint samples, jint n){
  UNUSED(clazz);

  ALvoid *buffer_address =
    ((ALbyte *)(((char*)(*env)->GetDirectBufferAddress(env, buffer)) + position));
  ALCdevice *recorder = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)recorder->ExtraData;
  if ((ALuint)n >= data->numContexts){return;}
  memcpy(buffer_address, data->contexts[n]->renderBuffer,
         BytesFromDevFmt(recorder->FmtType) * recorder->NumChan * samples);
}
#+end_src

This is the transport layer between C and Java that will eventually
allow us to access rendered sound data from clojure. Note that the
=ByteBuffer= passed in must be a direct buffer, since
=GetDirectBufferAddress= does not work on ordinary heap buffers.
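The number of bytes copied follows directly from the device format. A
small sketch of the arithmetic, assuming the common case of 16-bit
samples and stereo output:

#+begin_src C
/* Sketch: bytes copied by one call to ngetSamples, assuming 16-bit
 * samples (2 bytes each) and 2 channels.  The Java-side ByteBuffer
 * must be at least this large. */
int bytesPerSample = 2;     /* 16-bit samples                  */
int channels       = 2;     /* stereo                          */
int samples        = 735;   /* one 60 fps tick at 44.1 kHz     */
int bytesCopied    = bytesPerSample * channels * samples;  /* 2940 */
#+end_src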
*** Listener Management

=addListener=, =setNthListenerf=, and =setNthListener3f= are
necessary to change the properties of any listener other than the
master one, since only the listener of the currently active context is
affected by the normal =OpenAL= listener calls.
#+name: listener-manage
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    naddListener
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_naddListener
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(env); UNUSED(clazz);
  //printf("creating new context via naddListener\n");
  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  ALCcontext *new = alcCreateContext(Device, NULL);
  addContext(Device, new);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListener3f
 * Signature: (IFFFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListener3f
(JNIEnv *env, jclass clazz, jint param,
 jfloat v1, jfloat v2, jfloat v3, jlong device, jint contextNum){
  UNUSED(env); UNUSED(clazz);

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)Device->ExtraData;

  ALCcontext *current = alcGetCurrentContext();
  if ((ALuint)contextNum >= data->numContexts){return;}
  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
  alListener3f(param, v1, v2, v3);
  alcMakeContextCurrent(current);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListenerf
 * Signature: (IFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListenerf
(JNIEnv *env, jclass clazz, jint param, jfloat v1, jlong device,
 jint contextNum){
  UNUSED(env); UNUSED(clazz);

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)Device->ExtraData;

  ALCcontext *current = alcGetCurrentContext();
  if ((ALuint)contextNum >= data->numContexts){return;}
  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
  alListenerf(param, v1);
  alcMakeContextCurrent(current);
}
#+end_src

*** Initialization
=initDevice= is called from the Java side after LWJGL has created its
context, and before any calls to =addListener=. It establishes the
LWJGL context as the master context.

=getAudioFormat= is a convenience function that uses JNI to build up a
=javax.sound.sampled.AudioFormat= object from data in the Device. This
way, there is no ambiguity about what the bits created by =step= and
returned by =getSamples= mean.
#+name: jni-init
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ninitDevice
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ninitDevice
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(env); UNUSED(clazz);
  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  init(Device);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetAudioFormat
 * Signature: (J)Ljavax/sound/sampled/AudioFormat;
 */
JNIEXPORT jobject JNICALL Java_com_aurellem_send_AudioSend_ngetAudioFormat
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(clazz);
  jclass AudioFormatClass =
    (*env)->FindClass(env, "javax/sound/sampled/AudioFormat");
  jmethodID AudioFormatConstructor =
    (*env)->GetMethodID(env, AudioFormatClass, "<init>", "(FIIZZ)V");

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  int isSigned;
  switch (Device->FmtType)
    {
    case DevFmtUByte:
    case DevFmtUShort: isSigned = 0; break;
    default : isSigned = 1;
    }
  float frequency = Device->Frequency;
  int bitsPerFrame = (8 * BytesFromDevFmt(Device->FmtType));
  int channels = Device->NumChan;
  jobject format = (*env)->
    NewObject(
              env,AudioFormatClass,AudioFormatConstructor,
              frequency,
              bitsPerFrame,
              channels,
              isSigned,
              0);
  return format;
}
#+end_src

** Boring Device Management Stuff / Memory Cleanup
This code is more-or-less copied verbatim from the other =OpenAL=
Devices.
It's the basis for =OpenAL='s primitive object system.
#+name: device-init
#+begin_src C
//////////////////// Device Initialization / Management

static const ALCchar sendDevice[] = "Multiple Audio Send";

static ALCboolean send_open_playback(ALCdevice *device,
                                     const ALCchar *deviceName)
{
  send_data *data;
  // stop any buffering for stdout, so that I can
  // see the printf statements in my terminal immediately
  setbuf(stdout, NULL);

  if(!deviceName)
    deviceName = sendDevice;
  else if(strcmp(deviceName, sendDevice) != 0)
    return ALC_FALSE;
  data = (send_data*)calloc(1, sizeof(*data));
  device->szDeviceName = strdup(deviceName);
  device->ExtraData = data;
  return ALC_TRUE;
}

static void send_close_playback(ALCdevice *device)
{
  send_data *data = (send_data*)device->ExtraData;
  alcMakeContextCurrent(NULL);
  ALuint i;
  // Destroy all slave contexts. LWJGL will take care of
  // its own context.
  for (i = 1; i < data->numContexts; i++){
    context_data *ctxData = data->contexts[i];
    alcDestroyContext(ctxData->ctx);
    free(ctxData->renderBuffer);
    free(ctxData);
  }
  // Free the book-keeping for the master context and the context
  // array itself; both were allocated by addContext().
  if (data->numContexts > 0){
    free(data->contexts[0]->renderBuffer);
    free(data->contexts[0]);
  }
  free(data->contexts);
  free(data);
  device->ExtraData = NULL;
}

static ALCboolean send_reset_playback(ALCdevice *device)
{
  SetDefaultWFXChannelOrder(device);
  return ALC_TRUE;
}

static void send_stop_playback(ALCdevice *Device){
  UNUSED(Device);
}

static const BackendFuncs send_funcs = {
  send_open_playback,
  send_close_playback,
  send_reset_playback,
  send_stop_playback,
  NULL,
  NULL, /* These would be filled with functions to */
  NULL, /* handle capturing audio, if we were into */
  NULL, /* that sort of thing...                   */
  NULL,
  NULL
};

ALCboolean alc_send_init(BackendFuncs *func_list){
  *func_list = send_funcs;
  return ALC_TRUE;
}

void alc_send_deinit(void){}

void alc_send_probe(enum DevProbe type)
{
  switch(type)
    {
    case DEVICE_PROBE:
      AppendDeviceList(sendDevice);
      break;
    case ALL_DEVICE_PROBE:
      AppendAllDeviceList(sendDevice);
      break;
    case CAPTURE_DEVICE_PROBE:
      break;
    }
}
#+end_src

* The Java interface, =AudioSend=

The Java interface to the Send Device follows naturally from the JNI
definitions. The only thing here of note is the =deviceID=. This is
available from LWJGL, but the only way to get it is with reflection.
Unfortunately, there is no way to control the Send device other than
by obtaining a pointer to it.

#+include: "../../audio-send/java/src/com/aurellem/send/AudioSend.java" src java

* The Java Audio Renderer, =AudioSendRenderer=

#+include: "../../jmeCapture/src/com/aurellem/capture/audio/AudioSendRenderer.java" src java

The =AudioSendRenderer= is a modified version of the
=LwjglAudioRenderer= which implements the =MultiListener= interface to
provide access to and creation of more than one =Listener= object.

** MultiListener.java

#+include: "../../jmeCapture/src/com/aurellem/capture/audio/MultiListener.java" src java

** SoundProcessors are like SceneProcessors

A =SoundProcessor= is analogous to a =SceneProcessor=. Every frame, the
=SoundProcessor= registered with a given =Listener= receives the
rendered sound data and can do whatever processing it wants with it.

#+include: "../../jmeCapture/src/com/aurellem/capture/audio/SoundProcessor.java" src java

* Finally, Ears in clojure!

Now that the =C= and =Java= infrastructure is complete, the clojure
hearing abstraction is simple and closely parallels the [[./vision.org][vision]]
abstraction.

** Hearing Pipeline

All sound rendering is done on the CPU, so =hearing-pipeline= is
much less complicated than =vision-pipeline=. The bytes available in
the ByteBuffer obtained from the =send= Device have different meanings
depending on the particular hardware of your system. That is why
the =AudioFormat= object is necessary to provide the meaning that the
raw bytes lack. =byteBuffer->pulse-vector= uses the excellent
conversion facilities from [[http://www.tritonus.org/][tritonus]] ([[http://tritonus.sourceforge.net/apidoc/org/tritonus/share/sampled/FloatSampleTools.html#byte2floatInterleaved%2528byte%5B%5D,%2520int,%2520float%5B%5D,%2520int,%2520int,%2520javax.sound.sampled.AudioFormat%2529][javadoc]]) to generate a clojure vector of
floats which represent the linear PCM encoded waveform of the
sound. With linear PCM (pulse code modulation), -1.0 represents maximum
rarefaction of the air while 1.0 represents maximum compression of the
air at a given instant.
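To make the encoding concrete, here is a small sketch, in C, of how
interleaved signed 16-bit little-endian PCM bytes become floats in
[-1.0, 1.0]. On the Java side, tritonus's =byte2floatInterleaved=
does this work for us:

#+begin_src C
#include <stdint.h>
#include <stddef.h>

/* Sketch: decode interleaved signed 16-bit little-endian PCM bytes
 * into floats in [-1.0, 1.0].  This is the same job that
 * FloatSampleTools/byte2floatInterleaved performs on the Java side. */
static void pcm16le_to_float(const uint8_t *bytes, float *out,
                             size_t nSamples){
  size_t i;
  for (i = 0; i < nSamples; i++){
    int16_t sample = (int16_t)(bytes[2*i] | (bytes[2*i + 1] << 8));
    out[i] = sample / 32768.0f;   /* -32768 -> -1.0, 32767 -> ~1.0 */
  }
}
#+end_src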
#+name: hearing-pipeline
#+begin_src clojure
(in-ns 'cortex.hearing)

(defn hearing-pipeline
  "Creates a SoundProcessor which wraps a sound processing
  continuation function. The continuation is a function that takes
  [#^ByteBuffer b #^Integer numSamples #^AudioFormat af], each of which
  has already been appropriately sized."
  [continuation]
  (proxy [SoundProcessor] []
    (cleanup [])
    (process
      [#^ByteBuffer audioSamples numSamples #^AudioFormat audioFormat]
      (continuation audioSamples numSamples audioFormat))))

(defn byteBuffer->pulse-vector
  "Extract the sound samples from the byteBuffer as a PCM encoded
  waveform with values ranging from -1.0 to 1.0 into a vector of
  floats."
  [#^ByteBuffer audioSamples numSamples #^AudioFormat audioFormat]
  (let [num-floats (/ numSamples (.getFrameSize audioFormat))
        bytes (byte-array numSamples)
        floats (float-array num-floats)]
    (.get audioSamples bytes 0 numSamples)
    (FloatSampleTools/byte2floatInterleaved
     bytes 0 floats 0 num-floats audioFormat)
    (vec floats)))
#+end_src

** Physical Ears

Together, these three functions define how ears found in a specially
prepared blender file will be translated to =Listener= objects in a
simulation. =ears= extracts all the children of the top level node
named "ears". =add-ear!= and =update-listener-velocity!= use
=bind-sense= to bind a =Listener= object located at the initial
position of an "ear" node to the closest physical object in the
creature. That =Listener= will stay in the same orientation to the
object with which it is bound, just as the camera does in the
[[http://aurellem.localhost/cortex/html/sense.html#sec-4-1][sense binding demonstration]]. Since =OpenAL= simulates the Doppler
effect for moving listeners, =update-listener-velocity!= ensures that
this velocity information is always up-to-date.
#+name: hearing-ears
#+begin_src clojure
(def
  ^{:doc "Return the children of the creature's \"ears\" node."
    :arglists '([creature])}
  ears
  (sense-nodes "ears"))

(defn update-listener-velocity!
  "Update the listener's velocity every update loop."
  [#^Spatial obj #^Listener lis]
  (let [old-position (atom (.getLocation lis))]
    (.addControl
     obj
     (proxy [AbstractControl] []
       (controlUpdate [tpf]
         (let [new-position (.getLocation lis)]
           (.setVelocity
            lis
            (.mult (.subtract new-position @old-position)
                   (float (/ tpf))))
           (reset! old-position new-position)))
       (controlRender [_ _])))))

(defn add-ear!
  "Create a Listener centered on the current position of 'ear
  which follows the closest physical node in 'creature and
  sends sound data to 'continuation."
  [#^Application world #^Node creature #^Spatial ear continuation]
  (let [target (closest-node creature ear)
        lis (Listener.)
        audio-renderer (.getAudioRenderer world)
        sp (hearing-pipeline continuation)]
    (.setLocation lis (.getWorldTranslation ear))
    (.setRotation lis (.getWorldRotation ear))
    (bind-sense target lis)
    (update-listener-velocity! target lis)
    (.addListener audio-renderer lis)
    (.registerSoundProcessor audio-renderer lis sp)))
#+end_src

** Ear Creation

#+name: hearing-kernel
#+begin_src clojure
(defn hearing-kernel
  "Returns a function which returns auditory sensory data when called
  inside a running simulation."
  [#^Node creature #^Spatial ear]
  (let [hearing-data (atom [])
        register-listener!
        (runonce
         (fn [#^Application world]
           (add-ear!
            world creature ear
            (comp #(reset! hearing-data %)
                  byteBuffer->pulse-vector))))]
    (fn [#^Application world]
      (register-listener! world)
      (let [data @hearing-data
            topology
            (vec (map #(vector % 0) (range 0 (count data))))]
        [topology data]))))

(defn hearing!
  "Endow the creature in a particular world with the sense of
  hearing. Will return a sequence of functions, one for each ear,
  which when called will return the auditory data from that ear."
  [#^Node creature]
  (for [ear (ears creature)]
    (hearing-kernel creature ear)))
#+end_src

Each function returned by =hearing-kernel= will register a new
=Listener= with the simulation the first time it is called. Each time
it is called, the hearing-function will return a vector of linear PCM
encoded sound data that was heard since the last frame. The size of
this vector is of course determined by the overall framerate of the
game. With a constant framerate of 60 frames per second and a sampling
frequency of 44,100 samples per second, the vector will have exactly
735 elements.
** Visualizing Hearing

This is a simple visualization function which displays the waveform
reported by the simulated sense of hearing. It converts the values
reported in the vector returned by the hearing function from the range
[-1.0, 1.0] to the range [0, 255], converts to integer, and displays
the number as a greyscale pixel.

#+name: hearing-display
#+begin_src clojure
(in-ns 'cortex.hearing)

(defn view-hearing
  "Creates a function which accepts a list of auditory data and
  displays each element of the list to the screen as an image."
  []
  (view-sense
   (fn [[coords sensor-data]]
     (let [pixel-data
           (vec
            (map
             #(rem (int (* 255 (/ (+ 1 %) 2))) 256)
             sensor-data))
           height 50
           image (BufferedImage. (max 1 (count coords)) height
                                 BufferedImage/TYPE_INT_RGB)]
       (dorun
        (for [x (range (count coords))]
          (dorun
           (for [y (range height)]
             (let [raw-sensor (pixel-data x)]
               (.setRGB image x y (gray raw-sensor)))))))
       image))))
#+end_src

* Testing Hearing
** Advanced Java Example

I wrote a test case in Java that demonstrates the use of the Java
components of this hearing system. It is part of a larger java library
to capture perfect audio from jMonkeyEngine. Some of the clojure
constructs above are partially reiterated in the java source file. But
first, the video! As far as I know this is the first instance of
multiple simulated listeners in a virtual environment using OpenAL.

#+begin_html
<div class="figure">
<center>
<video controls="controls" width="500">
  <source src="../video/java-hearing-test.ogg" type="video/ogg"
          preload="none" poster="../images/aurellem-1280x480.png" />
</video>
<br> <a href="http://www.youtube.com/watch?v=oCEfK0yhDrY"> YouTube </a>
</center>
<p>The blue sphere is emitting a constant sound. Each gray box is
listening for sound, and will change color from gray to green if it
detects sound which is louder than a certain threshold. As the blue
sphere travels along the path, it excites each of the cubes in turn.</p>
</div>
#+end_html

#+include: "../../jmeCapture/src/com/aurellem/capture/examples/Advanced.java" src java

Here is a small clojure program to drive the java program and make it
available as part of my test suite.

#+name: test-hearing-1
#+begin_src clojure
(in-ns 'cortex.test.hearing)

(defn test-java-hearing
  "Testing hearing:
  You should see a blue sphere flying around several
  cubes. As the sphere approaches each cube, it turns
  green."
  []
  (doto (com.aurellem.capture.examples.Advanced.)
    (.setSettings
     (doto (AppSettings. true)
       (.setAudioRenderer "Send")))
    (.setShowSettings false)
    (.setPauseOnLostFocus false)))
#+end_src

** Adding Hearing to the Worm

To the worm, I add a new node called "ears" with one child which
represents the worm's single ear.

#+attr_html: width=755
#+caption: The worm with a newly added node describing an ear.
[[../images/worm-with-ear.png]]

The node highlighted in yellow is the top-level "ears" node. Its
child, highlighted in orange, represents the single ear the creature
has. The ear is located right above the curved part of the worm's
lower hemispherical region, opposite the eye.

The other empty nodes represent the worm's single joint and eye and are
described in [[./body.org][body]] and [[./vision.org][vision]].
#+name: test-hearing-2
#+begin_src clojure
(in-ns 'cortex.test.hearing)

(defn test-worm-hearing
  "Testing hearing:
  You will see the worm fall onto a table. There is a long
  horizontal bar which shows the waveform of whatever the worm is
  hearing. When you play a sound, the bar should display a waveform.

  Keys:
  <enter> : play sound
  <l>     : play hymn"
  ([] (test-worm-hearing false))
  ([record?]
     (let [the-worm (doto (worm) (body!))
           hearing (hearing! the-worm)
           hearing-display (view-hearing)

           tone (AudioNode. (asset-manager)
                            "Sounds/pure.wav" false)

           hymn (AudioNode. (asset-manager)
                            "Sounds/ear-and-eye.wav" false)]
       (world
        (nodify [the-worm (floor)])
        (merge standard-debug-controls
               {"key-return"
                (fn [_ value]
                  (if value (.play tone)))
                "key-l"
                (fn [_ value]
                  (if value (.play hymn)))})
        (fn [world]
          (light-up-everything world)
          (if record?
            (do
              (com.aurellem.capture.Capture/captureVideo
               world
               (File. "/home/r/proj/cortex/render/worm-audio/frames"))
              (com.aurellem.capture.Capture/captureAudio
               world
               (File. "/home/r/proj/cortex/render/worm-audio/audio.wav")))))
        (fn [world tpf]
          (hearing-display
           (map #(% world) hearing)
           (if record?
             (File. "/home/r/proj/cortex/render/worm-audio/hearing-data"))))))))
#+end_src

In this test, I load the worm with its newly formed ear and let it
hear sounds. The sound the worm is hearing is localized to the origin
of the world, and you can see that as the worm moves farther away from
the origin when it is hit by balls, it hears the sound less intensely.

The sound you hear in the video is from the worm's perspective. Notice
how the pure tone becomes fainter and the visual display of the
auditory data becomes less pronounced as the worm falls farther away
from the source of the sound.

#+begin_html
<div class="figure">
<center>
<video controls="controls" width="600">
  <source src="../video/worm-hearing.ogg" type="video/ogg"
          preload="none" poster="../images/aurellem-1280x480.png" />
</video>
<br> <a href="http://youtu.be/KLUtV1TNksI"> YouTube </a>
</center>
<p>The worm can now hear the sound pulses produced from the
hymn. Notice the strikingly different pattern that human speech
makes compared to the instruments. Once the worm is pushed off the
floor, the sound it hears is attenuated, and the display of the
sound it hears becomes fainter. This shows the 3D localization of
sound in this world.</p>
</div>
#+end_html
*** Creating the Ear Video
#+name: magick-3
#+begin_src clojure
(ns cortex.video.magick3
  (:import java.io.File)
  (:use clojure.java.shell))

(defn images [path]
  (sort (rest (file-seq (File. path)))))

(def base "/home/r/proj/cortex/render/worm-audio/")

(defn pics [file]
  (images (str base file)))

(defn combine-images []
  (let [main-view (pics "frames")
        hearing (pics "hearing-data")
        background (repeat 9001 (File. (str base "background.png")))
        targets (map
                 #(File. (str base "out/" (format "%07d.png" %)))
                 (range 0 (count main-view)))]
    (dorun
     (pmap
      (comp
       (fn [[background main-view hearing target]]
         (println target)
         (sh "convert"
             background
             main-view "-geometry" "+66+21" "-composite"
             hearing "-geometry" "+21+526" "-composite"
             target))
       (fn [& args] (map #(.getCanonicalPath %) args)))
      background main-view hearing targets))))
#+end_src

#+begin_src sh :results silent
cd /home/r/proj/cortex/render/worm-audio
ffmpeg -r 60 -i out/%07d.png -i audio.wav \
       -b:a 128k -b:v 9001k \
       -c:a libvorbis -c:v libtheora -g 60 worm-hearing.ogg
#+end_src

* Headers

#+name: hearing-header
#+begin_src clojure
(ns cortex.hearing
  "Simulate the sense of hearing in jMonkeyEngine3. Enables multiple
  listeners at different positions in the same world. Automatically
  reads ear-nodes from specially prepared blender files and
  instantiates them in the world as simulated ears."
  {:author "Robert McIntyre"}
  (:use (cortex world util sense))
  (:import java.nio.ByteBuffer)
  (:import java.awt.image.BufferedImage)
  (:import org.tritonus.share.sampled.FloatSampleTools)
  (:import (com.aurellem.capture.audio
            SoundProcessor AudioSendRenderer))
  (:import javax.sound.sampled.AudioFormat)
  (:import (com.jme3.scene Spatial Node))
  (:import com.jme3.audio.Listener)
  (:import com.jme3.app.Application)
  (:import com.jme3.scene.control.AbstractControl))
#+end_src

#+name: test-header
#+begin_src clojure
(ns cortex.test.hearing
  (:use (cortex world util hearing body))
  (:use cortex.test.body)
  (:import (com.jme3.audio AudioNode Listener))
  (:import java.io.File)
  (:import com.jme3.scene.Node
           com.jme3.system.AppSettings
           com.jme3.math.Vector3f))
#+end_src

* Source Listing
- [[../src/cortex/hearing.clj][cortex.hearing]]
- [[../src/cortex/test/hearing.clj][cortex.test.hearing]]
#+html: <ul> <li> <a href="../org/hearing.org">This org file</a> </li> </ul>
- [[http://hg.bortreb.com][source-repository]]

* Next
The worm can see and hear, but it can't feel the world or
itself. Next post, I'll give the worm a [[./touch.org][sense of touch]].

* COMMENT Code Generation

#+begin_src clojure :tangle ../src/cortex/hearing.clj
<<hearing-header>>
<<hearing-pipeline>>
<<hearing-ears>>
<<hearing-kernel>>
<<hearing-display>>
#+end_src

#+begin_src clojure :tangle ../src/cortex/test/hearing.clj
<<test-header>>
<<test-hearing-1>>
<<test-hearing-2>>
#+end_src

#+begin_src clojure :tangle ../src/cortex/video/magick3.clj
<<magick-3>>
#+end_src

#+begin_src C :tangle ../../audio-send/Alc/backends/send.c
<<send-header>>
<<send-state>>
<<sync-macros>>
<<sync-sources>>
<<sync-contexts>>
<<context-creation>>
<<context-switching>>
<<main-loop>>
<<jni-step>>
<<jni-get-samples>>
<<listener-manage>>
<<jni-init>>
<<device-init>>
#+end_src