1 #+title: Simulated Sense of Hearing
2 #+author: Robert McIntyre
3 #+email: rlm@mit.edu
4 #+description: Simulating multiple listeners and the sense of hearing in jMonkeyEngine3
5 #+keywords: simulated hearing, openal, clojure, jMonkeyEngine3, LWJGL, AI
6 #+SETUPFILE: ../../aurellem/org/setup.org
7 #+INCLUDE: ../../aurellem/org/level-0.org
10 * Hearing
12 At the end of this post I will have simulated ears that work the same
13 way as the simulated eyes in the last post. I will be able to place
14 any number of ear-nodes in a blender file, and they will bind to the
15 closest physical object and follow it as it moves around. Each ear
16 will provide access to the sound data it picks up between every frame.
18 Hearing is one of the more difficult senses to simulate, because there
19 is less support for obtaining the actual sound data that is processed
20 by jMonkeyEngine3. There is no "split-screen" support for rendering
21 sound from different points of view, and there is no way to directly
22 access the rendered sound data.
24 ** Brief Description of jMonkeyEngine's Sound System
26 jMonkeyEngine's sound system works as follows:
28 - jMonkeyEngine uses the =AppSettings= for the particular application
29 to determine what sort of =AudioRenderer= should be used.
30 - Although some support is provided for multiple AudioRendering
31 backends, jMonkeyEngine at the time of this writing will either
32 pick no =AudioRenderer= at all, or the =LwjglAudioRenderer=.
33 - jMonkeyEngine tries to figure out what sort of system you're
34 running and extracts the appropriate native libraries.
35 - The =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (LightWeight Java Game
36 Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]].
37 - =OpenAL= renders the 3D sound and feeds the rendered sound directly
38 to any of various sound output devices with which it knows how to
39 communicate.
41 A consequence of this is that there's no way to access the actual
42 sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
43 one /listener/ (it renders sound data from only one perspective),
44 which normally isn't a problem for games, but becomes a problem when
45 trying to make multiple AI creatures that can each hear the world from
46 a different perspective.
48 To make many AI creatures in jMonkeyEngine that can each hear the
49 world from their own perspective, or to make a single creature with
50 many ears, it is necessary to go all the way back to =OpenAL= and
51 implement support for simulated hearing there.
53 * Extending =OpenAL=
54 ** =OpenAL= Devices
56 =OpenAL= goes to great lengths to support many different systems, all
57 with different sound capabilities and interfaces. It accomplishes this
58 difficult task by providing code for many different sound backends in
59 pseudo-objects called /Devices/. There's a device for the Linux Open
60 Sound System and the Advanced Linux Sound Architecture, there's one
61 for Direct Sound on Windows, there's even one for Solaris. =OpenAL=
62 solves the problem of platform independence by providing all these
63 Devices.
65 Wrapper libraries such as LWJGL are free to examine the system on
66 which they are running and then select an appropriate device for that
67 system.
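For example, a host program can ask =OpenAL= which Devices were compiled in and pick one by name. Here is a small sketch (not part of the Send device itself) that lists the available playback Devices using the standard enumeration query:
#+begin_src C
#include <AL/alc.h>
#include <stdio.h>
#include <string.h>

/* Print every playback device this OpenAL build knows about.
   The specifier is a series of NUL-terminated names, ended by an
   empty string. */
void list_devices(void){
    const ALCchar *name = alcGetString(NULL, ALC_DEVICE_SPECIFIER);
    while (name && *name){
        printf("device: %s\n", name);
        name += strlen(name) + 1;
    }
}
#+end_src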
69 There are also a few "special" devices that don't interface with any
70 particular system. These include the Null Device, which doesn't do
71 anything, and the Wave Device, which writes whatever sound it receives
72 to a file, if everything has been set up correctly when configuring
73 =OpenAL=.
75 Actual mixing of the sound data happens in the Devices, and they are
76 the only point in the sound rendering process where this data is
77 available.
79 Therefore, in order to support multiple listeners, and get the sound
80 data in a form that the AIs can use, it is necessary to create a new
81 Device which supports these features.
83 ** The Send Device
84 Adding a device to OpenAL is rather tricky -- there are five separate
85 files in the =OpenAL= source tree that must be modified to do so. I've
86 documented this process [[../../audio-send/html/add-new-device.html][here]] for anyone who is interested.
88 Again, my objectives are:
90 - Support Multiple Listeners from jMonkeyEngine3
91 - Get access to the rendered sound data for further processing from
92 clojure.
94 I named it the "Multiple Audio Send" Device, or =Send= Device for
95 short, since it sends audio data back to the calling application like
96 an Aux-Send cable on a mixing board.
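For reference, a host application (or a wrapper library like LWJGL) selects this device the same way it selects any other backend: by passing the device's name string when opening it. A minimal sketch, assuming the Send device has been compiled into =OpenAL=:
#+begin_src C
#include <AL/alc.h>
#include <stdio.h>

int main(void){
    /* Request the Send device by name rather than the default backend. */
    ALCdevice *device = alcOpenDevice("Multiple Audio Send");
    if (!device){
        printf("the Send device is not available\n");
        return 1;
    }
    /* This first context will become the "master" context which the
       device later synchronizes to every slave listener context. */
    ALCcontext *context = alcCreateContext(device, NULL);
    alcMakeContextCurrent(context);

    /* ... create sources, add listeners, step the device ... */

    alcMakeContextCurrent(NULL);
    alcDestroyContext(context);
    alcCloseDevice(device);
    return 0;
}
#+end_src
The name string must match the =sendDevice= string defined in =send.c= below.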
98 Onward to the actual Device!
100 ** =send.c=
102 ** Header
103 #+name: send-header
104 #+begin_src C
105 #include "config.h"
106 #include <stdlib.h>
107 #include "alMain.h"
108 #include "AL/al.h"
109 #include "AL/alc.h"
110 #include "alSource.h"
111 #include <jni.h>
113 //////////////////// Summary
115 struct send_data;
116 struct context_data;
118 static void addContext(ALCdevice *, ALCcontext *);
119 static void syncContexts(ALCcontext *master, ALCcontext *slave);
120 static void syncSources(ALsource *master, ALsource *slave,
121 ALCcontext *masterCtx, ALCcontext *slaveCtx);
123 static void syncSourcei(ALuint master, ALuint slave,
124 ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
125 static void syncSourcef(ALuint master, ALuint slave,
126 ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
127 static void syncSource3f(ALuint master, ALuint slave,
128 ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
130 static void swapInContext(ALCdevice *, struct context_data *);
131 static void saveContext(ALCdevice *, struct context_data *);
132 static void limitContext(ALCdevice *, ALCcontext *);
133 static void unLimitContext(ALCdevice *);
135 static void init(ALCdevice *);
136 static void renderData(ALCdevice *, int samples);
138 #define UNUSED(x) (void)(x)
139 #+end_src
141 The main idea behind the Send device is to take advantage of the fact
142 that LWJGL only manages one /context/ when using OpenAL. A /context/
143 is like a container that holds sound sources and keeps track of where the
144 listener is. In order to support multiple listeners, the Send device
145 identifies the LWJGL context as the master context, and creates any
146 number of slave contexts to represent additional listeners. Every
147 time the device renders sound, it synchronizes every source from the
148 master LWJGL context to the slave contexts. Then, it renders each
149 context separately, using a different listener for each one. The
150 rendered sound is made available via JNI to jMonkeyEngine.
152 To recap, the process is:
153 - Set the LWJGL context as "master" in the =init()= method.
154 - Create any number of additional contexts via =addContext()=
155 - At every call to =renderData()= sync the master context with the
156 slave contexts with =syncContexts()=
157 - =syncContexts()= calls =syncSources()= to sync all the sources
158 which are in the master context.
159 - =limitContext()= and =unLimitContext()= make it possible to render
160 only one context at a time.
162 ** Necessary State
163 #+name: send-state
164 #+begin_src C
165 //////////////////// State
167 typedef struct context_data {
168 ALfloat ClickRemoval[MAXCHANNELS];
169 ALfloat PendingClicks[MAXCHANNELS];
170 ALvoid *renderBuffer;
171 ALCcontext *ctx;
172 } context_data;
174 typedef struct send_data {
175 ALuint size;
176 context_data **contexts;
177 ALuint numContexts;
178 ALuint maxContexts;
179 } send_data;
180 #+end_src
182 Switching between contexts is not the normal operation of a Device,
183 and one of the problems with doing so is that a Device normally keeps
184 around a few pieces of state, such as the =ClickRemoval= array above,
185 which will become corrupted if the contexts are not rendered in
186 parallel. The solution is to create a copy of this normally global
187 device state for each context, and copy it back and forth into and out
188 of the actual device state whenever a context is rendered.
190 ** Synchronization Macros
191 #+name: sync-macros
192 #+begin_src C
193 //////////////////// Context Creation / Synchronization
195 #define _MAKE_SYNC(NAME, INIT_EXPR, GET_EXPR, SET_EXPR) \
196 void NAME (ALuint sourceID1, ALuint sourceID2, \
197 ALCcontext *ctx1, ALCcontext *ctx2, \
198 ALenum param){ \
199 INIT_EXPR; \
200 ALCcontext *current = alcGetCurrentContext(); \
201 alcMakeContextCurrent(ctx1); \
202 GET_EXPR; \
203 alcMakeContextCurrent(ctx2); \
204 SET_EXPR; \
205 alcMakeContextCurrent(current); \
206 }
208 #define MAKE_SYNC(NAME, TYPE, GET, SET) \
209 _MAKE_SYNC(NAME, \
210 TYPE value, \
211 GET(sourceID1, param, &value), \
212 SET(sourceID2, param, value))
214 #define MAKE_SYNC3(NAME, TYPE, GET, SET) \
215 _MAKE_SYNC(NAME, \
216 TYPE value1; TYPE value2; TYPE value3;, \
217 GET(sourceID1, param, &value1, &value2, &value3), \
218 SET(sourceID2, param, value1, value2, value3))
220 MAKE_SYNC( syncSourcei, ALint, alGetSourcei, alSourcei);
221 MAKE_SYNC( syncSourcef, ALfloat, alGetSourcef, alSourcef);
222 MAKE_SYNC3(syncSource3i, ALint, alGetSource3i, alSource3i);
223 MAKE_SYNC3(syncSource3f, ALfloat, alGetSource3f, alSource3f);
225 #+end_src
227 Setting the state of an =OpenAL= source is done with the =alSourcei=,
228 =alSourcef=, =alSource3i=, and =alSource3f= functions. In order to
229 completely synchronize two sources, it is necessary to use all of
230 them. These macros help to condense the otherwise repetitive
231 synchronization code involving these similar low-level =OpenAL= functions.
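To make this concrete, =MAKE_SYNC(syncSourcef, ALfloat, alGetSourcef, alSourcef)= expands to roughly the following function (whitespace added for readability):
#+begin_src C
void syncSourcef(ALuint sourceID1, ALuint sourceID2,
                 ALCcontext *ctx1, ALCcontext *ctx2,
                 ALenum param){
    ALfloat value;
    /* Read the parameter from the source in the first context ... */
    ALCcontext *current = alcGetCurrentContext();
    alcMakeContextCurrent(ctx1);
    alGetSourcef(sourceID1, param, &value);
    /* ... and write it to the corresponding source in the second. */
    alcMakeContextCurrent(ctx2);
    alSourcef(sourceID2, param, value);
    alcMakeContextCurrent(current);
}
#+end_src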
233 ** Source Synchronization
234 #+name: sync-sources
235 #+begin_src C
236 void syncSources(ALsource *masterSource, ALsource *slaveSource,
237 ALCcontext *masterCtx, ALCcontext *slaveCtx){
238 ALuint master = masterSource->source;
239 ALuint slave = slaveSource->source;
240 ALCcontext *current = alcGetCurrentContext();
242 syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
243 syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
244 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
245 syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
246 syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
247 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
248 syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
249 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
250 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
251 syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
252 syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
253 syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
254 syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);
256 syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
257 syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
258 syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);
260 syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
261 syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);
263 alcMakeContextCurrent(masterCtx);
264 ALint source_type;
265 alGetSourcei(master, AL_SOURCE_TYPE, &source_type);
267 // Only static sources are currently synchronized!
268 if (AL_STATIC == source_type){
269 ALint master_buffer;
270 ALint slave_buffer;
271 alGetSourcei(master, AL_BUFFER, &master_buffer);
272 alcMakeContextCurrent(slaveCtx);
273 alGetSourcei(slave, AL_BUFFER, &slave_buffer);
274 if (master_buffer != slave_buffer){
275 alSourcei(slave, AL_BUFFER, master_buffer);
276 }
277 }
279 // Synchronize the state of the two sources.
280 alcMakeContextCurrent(masterCtx);
281 ALint masterState;
282 ALint slaveState;
284 alGetSourcei(master, AL_SOURCE_STATE, &masterState);
285 alcMakeContextCurrent(slaveCtx);
286 alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);
288 if (masterState != slaveState){
289 switch (masterState){
290 case AL_INITIAL : alSourceRewind(slave); break;
291 case AL_PLAYING : alSourcePlay(slave); break;
292 case AL_PAUSED : alSourcePause(slave); break;
293 case AL_STOPPED : alSourceStop(slave); break;
294 }
295 }
296 // Restore whatever context was previously active.
297 alcMakeContextCurrent(current);
298 }
299 #+end_src
300 This function is long because it has to exhaustively go through all the
301 possible state that a source can have and make sure that it is the
302 same between the master and slave sources. I'd like to take this
303 moment to salute the [[http://connect.creativelabs.com/openal/Documentation/Forms/AllItems.aspx][=OpenAL= Reference Manual]], which provides a very
304 good description of =OpenAL='s internals.
306 ** Context Synchronization
307 #+name: sync-contexts
308 #+begin_src C
309 void syncContexts(ALCcontext *master, ALCcontext *slave){
310 /* If there aren't sufficient sources in slave to mirror
311 the sources in master, create them. */
312 ALCcontext *current = alcGetCurrentContext();
314 UIntMap *masterSourceMap = &(master->SourceMap);
315 UIntMap *slaveSourceMap = &(slave->SourceMap);
316 ALuint numMasterSources = masterSourceMap->size;
317 ALuint numSlaveSources = slaveSourceMap->size;
319 alcMakeContextCurrent(slave);
320 if (numSlaveSources < numMasterSources){
321 ALuint numMissingSources = numMasterSources - numSlaveSources;
322 ALuint newSources[numMissingSources];
323 alGenSources(numMissingSources, newSources);
324 }
326 /* Now, slave is guaranteed to have at least as many sources
327 as master. Sync each source from master to the corresponding
328 source in slave. */
329 int i;
330 for(i = 0; i < masterSourceMap->size; i++){
331 syncSources((ALsource*)masterSourceMap->array[i].value,
332 (ALsource*)slaveSourceMap->array[i].value,
333 master, slave);
334 }
335 alcMakeContextCurrent(current);
336 }
337 #+end_src
339 Most of the hard work in Context Synchronization is done in
340 =syncSources()=. The only thing that =syncContexts()= has to worry
341 about is automatically creating new sources whenever a slave context
342 does not have the same number of sources as the master context.
344 ** Context Creation
345 #+name: context-creation
346 #+begin_src C
347 static void addContext(ALCdevice *Device, ALCcontext *context){
348 send_data *data = (send_data*)Device->ExtraData;
349 // expand array if necessary
350 if (data->numContexts >= data->maxContexts){
351 ALuint newMaxContexts = data->maxContexts*2 + 1;
352 data->contexts = realloc(data->contexts, newMaxContexts*sizeof(context_data*));
353 data->maxContexts = newMaxContexts;
354 }
355 // create context_data and add it to the main array
356 context_data *ctxData;
357 ctxData = (context_data*)calloc(1, sizeof(*ctxData));
358 ctxData->renderBuffer =
359 malloc(BytesFromDevFmt(Device->FmtType) *
360 Device->NumChan * Device->UpdateSize);
361 ctxData->ctx = context;
363 data->contexts[data->numContexts] = ctxData;
364 data->numContexts++;
365 }
366 #+end_src
368 Here, the slave context is registered, and its data is stored in the
369 device-wide =ExtraData= structure. The =renderBuffer= that is created
370 here is where the rendered sound samples for this slave context will
371 eventually go.
373 ** Context Switching
374 #+name: context-switching
375 #+begin_src C
376 //////////////////// Context Switching
378 /* A device brings along with it two pieces of state
379 * which have to be swapped in and out with each context.
380 */
381 static void swapInContext(ALCdevice *Device, context_data *ctxData){
382 memcpy(Device->ClickRemoval, ctxData->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
383 memcpy(Device->PendingClicks, ctxData->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
384 }
386 static void saveContext(ALCdevice *Device, context_data *ctxData){
387 memcpy(ctxData->ClickRemoval, Device->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
388 memcpy(ctxData->PendingClicks, Device->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
389 }
391 static ALCcontext **currentContext;
392 static ALuint currentNumContext;
394 /* By default, all contexts are rendered at once for each call to aluMixData.
395 * This function uses the internals of the ALCdevice struct to temporarily
396 * cause aluMixData to only render the chosen context.
397 */
398 static void limitContext(ALCdevice *Device, ALCcontext *ctx){
399 currentContext = Device->Contexts;
400 currentNumContext = Device->NumContexts;
401 Device->Contexts = &ctx;
402 Device->NumContexts = 1;
403 }
405 static void unLimitContext(ALCdevice *Device){
406 Device->Contexts = currentContext;
407 Device->NumContexts = currentNumContext;
408 }
409 #+end_src
411 =OpenAL= normally renders all contexts in parallel, outputting the
412 whole result to the buffer. It does this by iterating over the
413 Device->Contexts array and rendering each context to the buffer in
414 turn. By temporarily setting Device->NumContexts to 1 and adjusting
415 the Device's context list to put the desired context-to-be-rendered
416 into position 0, we can trick =OpenAL= into rendering each context
417 separately from all the others.
419 ** Main Device Loop
420 #+name: main-loop
421 #+begin_src C
422 //////////////////// Main Device Loop
424 /* Establish the LWJGL context as the master context, which will
425 * be synchronized to all the slave contexts
426 */
427 static void init(ALCdevice *Device){
428 ALCcontext *masterContext = alcGetCurrentContext();
429 addContext(Device, masterContext);
430 }
432 static void renderData(ALCdevice *Device, int samples){
433 if(!Device->Connected){return;}
434 send_data *data = (send_data*)Device->ExtraData;
435 ALCcontext *current = alcGetCurrentContext();
437 ALuint i;
438 for (i = 1; i < data->numContexts; i++){
439 syncContexts(data->contexts[0]->ctx , data->contexts[i]->ctx);
440 }
442 if ((ALuint) samples > Device->UpdateSize){
443 printf("exceeding internal buffer size; dropping samples\n");
444 printf("requested %d; available %d\n", samples, Device->UpdateSize);
445 samples = (int) Device->UpdateSize;
446 }
448 for (i = 0; i < data->numContexts; i++){
449 context_data *ctxData = data->contexts[i];
450 ALCcontext *ctx = ctxData->ctx;
451 alcMakeContextCurrent(ctx);
452 limitContext(Device, ctx);
453 swapInContext(Device, ctxData);
454 aluMixData(Device, ctxData->renderBuffer, samples);
455 saveContext(Device, ctxData);
456 unLimitContext(Device);
457 }
458 alcMakeContextCurrent(current);
459 }
460 #+end_src
462 The main loop synchronizes the master LWJGL context with all the slave
463 contexts, then iterates through each context, rendering just that
464 context to its audio-sample storage buffer.
466 ** JNI Methods
468 At this point, we have the ability to create multiple listeners by
469 using the master/slave context trick, and the rendered audio data is
470 waiting patiently in internal buffers, one for each listener. We need
471 a way to transport this information to Java, and also a way to drive
472 this device from Java. The following JNI interface code is inspired
473 by the LWJGL JNI interface to =OpenAL=.
475 *** Stepping the Device
476 #+name: jni-step
477 #+begin_src C
478 //////////////////// JNI Methods
480 #include "com_aurellem_send_AudioSend.h"
482 /*
483 * Class: com_aurellem_send_AudioSend
484 * Method: nstep
485 * Signature: (JI)V
486 */
487 JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nstep
488 (JNIEnv *env, jclass clazz, jlong device, jint samples){
489 UNUSED(env);UNUSED(clazz);UNUSED(device);
490 renderData((ALCdevice*)((intptr_t)device), samples);
491 }
492 #+end_src
493 This device, unlike most of the other devices in =OpenAL=, does not
494 render sound unless asked. This enables the system to slow down or
495 speed up depending on the needs of the AIs who are using it to
496 listen. If the device tried to render samples in real-time, a
497 complicated AI whose mind takes 100 seconds of computer time to
498 simulate 1 second of AI-time would miss almost all of the sound in
499 its environment.
502 *** Device->Java Data Transport
503 #+name: jni-get-samples
504 #+begin_src C
505 /*
506 * Class: com_aurellem_send_AudioSend
507 * Method: ngetSamples
508 * Signature: (JLjava/nio/ByteBuffer;III)V
509 */
510 JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ngetSamples
511 (JNIEnv *env, jclass clazz, jlong device, jobject buffer, jint position,
512 jint samples, jint n){
513 UNUSED(clazz);
515 ALvoid *buffer_address =
516 ((ALbyte *)(((char*)(*env)->GetDirectBufferAddress(env, buffer)) + position));
517 ALCdevice *recorder = (ALCdevice*) ((intptr_t)device);
518 send_data *data = (send_data*)recorder->ExtraData;
519 if ((ALuint)n >= data->numContexts){return;}
520 memcpy(buffer_address, data->contexts[n]->renderBuffer,
521 BytesFromDevFmt(recorder->FmtType) * recorder->NumChan * samples);
522 }
523 #+end_src
525 This is the transport layer between C and Java that will eventually
526 allow us to access rendered sound data from clojure.
528 *** Listener Management
530 =addListener=, =setNthListenerf=, and =setNthListener3f= are
531 necessary to change the properties of any listener other than the
532 master one, since only the listener of the currently active context is
533 affected by the normal =OpenAL= listener calls.
534 #+name: listener-manage
535 #+begin_src C
536 /*
537 * Class: com_aurellem_send_AudioSend
538 * Method: naddListener
539 * Signature: (J)V
540 */
541 JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_naddListener
542 (JNIEnv *env, jclass clazz, jlong device){
543 UNUSED(env); UNUSED(clazz);
544 //printf("creating new context via naddListener\n");
545 ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
546 ALCcontext *new = alcCreateContext(Device, NULL);
547 addContext(Device, new);
548 }
550 /*
551 * Class: com_aurellem_send_AudioSend
552 * Method: nsetNthListener3f
553 * Signature: (IFFFJI)V
554 */
555 JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListener3f
556 (JNIEnv *env, jclass clazz, jint param,
557 jfloat v1, jfloat v2, jfloat v3, jlong device, jint contextNum){
558 UNUSED(env);UNUSED(clazz);
560 ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
561 send_data *data = (send_data*)Device->ExtraData;
563 ALCcontext *current = alcGetCurrentContext();
564 if ((ALuint)contextNum >= data->numContexts){return;}
565 alcMakeContextCurrent(data->contexts[contextNum]->ctx);
566 alListener3f(param, v1, v2, v3);
567 alcMakeContextCurrent(current);
568 }
570 /*
571 * Class: com_aurellem_send_AudioSend
572 * Method: nsetNthListenerf
573 * Signature: (IFJI)V
574 */
575 JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListenerf
576 (JNIEnv *env, jclass clazz, jint param, jfloat v1, jlong device,
577 jint contextNum){
579 UNUSED(env);UNUSED(clazz);
581 ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
582 send_data *data = (send_data*)Device->ExtraData;
584 ALCcontext *current = alcGetCurrentContext();
585 if ((ALuint)contextNum >= data->numContexts){return;}
586 alcMakeContextCurrent(data->contexts[contextNum]->ctx);
587 alListenerf(param, v1);
588 alcMakeContextCurrent(current);
589 }
590 #+end_src
592 *** Initialization
593 =initDevice= is called from the Java side after LWJGL has created its
594 context, and before any calls to =addListener=. It establishes the
595 LWJGL context as the master context.
597 =getAudioFormat= is a convenience function that uses JNI to build up a
598 =javax.sound.sampled.AudioFormat= object from data in the Device. This
599 way, there is no ambiguity about what the bits created by =step= and
600 returned by =getSamples= mean.
601 #+name: jni-init
602 #+begin_src C
603 /*
604 * Class: com_aurellem_send_AudioSend
605 * Method: ninitDevice
606 * Signature: (J)V
607 */
608 JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ninitDevice
609 (JNIEnv *env, jclass clazz, jlong device){
610 UNUSED(env);UNUSED(clazz);
611 ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
612 init(Device);
613 }
615 /*
616 * Class: com_aurellem_send_AudioSend
617 * Method: ngetAudioFormat
618 * Signature: (J)Ljavax/sound/sampled/AudioFormat;
619 */
620 JNIEXPORT jobject JNICALL Java_com_aurellem_send_AudioSend_ngetAudioFormat
621 (JNIEnv *env, jclass clazz, jlong device){
622 UNUSED(clazz);
623 jclass AudioFormatClass =
624 (*env)->FindClass(env, "javax/sound/sampled/AudioFormat");
625 jmethodID AudioFormatConstructor =
626 (*env)->GetMethodID(env, AudioFormatClass, "<init>", "(FIIZZ)V");
628 ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
629 int isSigned;
630 switch (Device->FmtType)
631 {
632 case DevFmtUByte:
633 case DevFmtUShort: isSigned = 0; break;
634 default : isSigned = 1;
635 }
636 float frequency = Device->Frequency;
637 int bitsPerFrame = (8 * BytesFromDevFmt(Device->FmtType));
638 int channels = Device->NumChan;
639 jobject format = (*env)->
640 NewObject(
641 env,AudioFormatClass,AudioFormatConstructor,
642 frequency,
643 bitsPerFrame,
644 channels,
645 isSigned,
646 0);
647 return format;
648 }
649 #+end_src
651 ** Boring Device Management Stuff / Memory Cleanup
652 This code is more-or-less copied verbatim from the other =OpenAL=
653 Devices. It's the basis for =OpenAL='s primitive object system.
654 #+name: device-init
655 #+begin_src C
656 //////////////////// Device Initialization / Management
658 static const ALCchar sendDevice[] = "Multiple Audio Send";
660 static ALCboolean send_open_playback(ALCdevice *device,
661 const ALCchar *deviceName)
662 {
663 send_data *data;
664 // stop any buffering for stdout, so that I can
665 // see the printf statements in my terminal immediately
666 setbuf(stdout, NULL);
668 if(!deviceName)
669 deviceName = sendDevice;
670 else if(strcmp(deviceName, sendDevice) != 0)
671 return ALC_FALSE;
672 data = (send_data*)calloc(1, sizeof(*data));
673 device->szDeviceName = strdup(deviceName);
674 device->ExtraData = data;
675 return ALC_TRUE;
676 }
678 static void send_close_playback(ALCdevice *device)
679 {
680 send_data *data = (send_data*)device->ExtraData;
681 alcMakeContextCurrent(NULL);
682 ALuint i;
683 // Destroy all slave contexts. LWJGL will take care of
684 // its own context.
685 for (i = 1; i < data->numContexts; i++){
686 context_data *ctxData = data->contexts[i];
687 alcDestroyContext(ctxData->ctx);
688 free(ctxData->renderBuffer);
689 free(ctxData);
690 }
691 free(data);
692 device->ExtraData = NULL;
693 }
695 static ALCboolean send_reset_playback(ALCdevice *device)
696 {
697 SetDefaultWFXChannelOrder(device);
698 return ALC_TRUE;
699 }
701 static void send_stop_playback(ALCdevice *Device){
702 UNUSED(Device);
703 }
705 static const BackendFuncs send_funcs = {
706 send_open_playback,
707 send_close_playback,
708 send_reset_playback,
709 send_stop_playback,
710 NULL,
711 NULL, /* These would be filled with functions to */
712 NULL, /* handle capturing audio if we were into that */
713 NULL, /* sort of thing... */
714 NULL,
715 NULL
716 };
718 ALCboolean alc_send_init(BackendFuncs *func_list){
719 *func_list = send_funcs;
720 return ALC_TRUE;
721 }
723 void alc_send_deinit(void){}
725 void alc_send_probe(enum DevProbe type)
726 {
727 switch(type)
728 {
729 case DEVICE_PROBE:
730 AppendDeviceList(sendDevice);
731 break;
732 case ALL_DEVICE_PROBE:
733 AppendAllDeviceList(sendDevice);
734 break;
735 case CAPTURE_DEVICE_PROBE:
736 break;
737 }
738 }
739 #+end_src
741 * The Java interface, =AudioSend=
743 The Java interface to the Send Device follows naturally from the JNI
744 definitions. The only thing of note here is the =deviceID=. This is
745 available from LWJGL, but the only way to get it is through reflection.
746 Unfortunately, there is no other way to control the Send device than
747 to obtain a pointer to it.
749 #+include: "../../audio-send/java/src/com/aurellem/send/AudioSend.java" src java
751 * The Java Audio Renderer, =AudioSendRenderer=
753 #+include: "../../jmeCapture/src/com/aurellem/capture/audio/AudioSendRenderer.java" src java
755 The =AudioSendRenderer= is a modified version of the
756 =LwjglAudioRenderer= which implements the =MultiListener= interface to
757 provide access to and creation of more than one =Listener= object.
759 ** MultiListener.java
761 #+include: "../../jmeCapture/src/com/aurellem/capture/audio/MultiListener.java" src java
763 ** SoundProcessors are like SceneProcessors
765 A =SoundProcessor= is analogous to a =SceneProcessor=. Every frame, the
766 =SoundProcessor= registered with a given =Listener= receives the
767 rendered sound data and can do whatever processing it wants with it.
769 #+include: "../../jmeCapture/src/com/aurellem/capture/audio/SoundProcessor.java" src java
771 * Finally, Ears in clojure!
773 Now that the =C= and =Java= infrastructure is complete, the clojure
774 hearing abstraction is simple and closely parallels the [[./vision.org][vision]]
775 abstraction.
777 ** Hearing Pipeline
779 All sound rendering is done on the CPU, so =hearing-pipeline= is
780 much less complicated than =vision-pipeline=. The bytes available in
781 the ByteBuffer obtained from the =send= Device have different meanings
782 depending on the particular hardware of your system. That is why
783 the =AudioFormat= object is necessary to provide the meaning that the
784 raw bytes lack. =byteBuffer->pulse-vector= uses the excellent
785 conversion facilities from [[http://www.tritonus.org/ ][tritonus]] ([[http://tritonus.sourceforge.net/apidoc/org/tritonus/share/sampled/FloatSampleTools.html#byte2floatInterleaved%2528byte%5B%5D,%2520int,%2520float%5B%5D,%2520int,%2520int,%2520javax.sound.sampled.AudioFormat%2529][javadoc]]) to generate a clojure vector of
786 floats which represent the linear PCM encoded waveform of the
787 sound. With linear PCM (pulse code modulation) -1.0 represents maximum
788 rarefaction of the air while 1.0 represents maximum compression of the
789 air at a given instant.
791 #+name: hearing-pipeline
792 #+begin_src clojure
793 (in-ns 'cortex.hearing)
795 (defn hearing-pipeline
796 "Creates a SoundProcessor which wraps a sound processing
797 continuation function. The continuation is a function that takes
798 [#^ByteBuffer b #^Integer numSamples #^AudioFormat af], each of which
799 has already been appropriately sized."
800 [continuation]
801 (proxy [SoundProcessor] []
802 (cleanup [])
803 (process
804 [#^ByteBuffer audioSamples numSamples #^AudioFormat audioFormat]
805 (continuation audioSamples numSamples audioFormat))))
807 (defn byteBuffer->pulse-vector
808 "Extract the sound samples from the byteBuffer as a PCM encoded
809 waveform with values ranging from -1.0 to 1.0 into a vector of
810 floats."
811 [#^ByteBuffer audioSamples numSamples #^AudioFormat audioFormat]
812 (let [num-floats (/ numSamples (.getFrameSize audioFormat))
813 bytes (byte-array numSamples)
814 floats (float-array num-floats)]
815 (.get audioSamples bytes 0 numSamples)
816 (FloatSampleTools/byte2floatInterleaved
817 bytes 0 floats 0 num-floats audioFormat)
818 (vec floats)))
819 #+end_src
821 ** Physical Ears
823 Together, these three functions define how ears found in a specially
824 prepared blender file will be translated to =Listener= objects in a
825 simulation. =ears= extracts all the children of the top-level node
826 named "ears". =add-ear!= and =update-listener-velocity!= use
827 =bind-sense= to bind a =Listener= object located at the initial
828 position of an "ear" node to the closest physical object in the
829 creature. That =Listener= will stay in the same orientation to the
830 object with which it is bound, just as the camera does in the [[http://aurellem.localhost/cortex/html/sense.html#sec-4-1][sense binding
831 demonstration]]. Since =OpenAL= simulates the Doppler effect for moving
832 listeners, =update-listener-velocity!= ensures that this velocity
833 information is always up-to-date.
835 #+name: hearing-ears
836 #+begin_src clojure
837 (def
838 ^{:doc "Return the children of the creature's \"ears\" node."
839 :arglists '([creature])}
840 ears
841 (sense-nodes "ears"))
844 (defn update-listener-velocity!
845 "Update the listener's velocity every update loop."
846 [#^Spatial obj #^Listener lis]
847 (let [old-position (atom (.getLocation lis))]
848 (.addControl
849 obj
850 (proxy [AbstractControl] []
851 (controlUpdate [tpf]
852 (let [new-position (.getLocation lis)]
853 (.setVelocity
854 lis
855 (.mult (.subtract new-position @old-position)
856 (float (/ tpf))))
857 (reset! old-position new-position)))
858 (controlRender [_ _])))))
860 (defn add-ear!
861 "Create a Listener centered on the current position of 'ear
862 which follows the closest physical node in 'creature and
863 sends sound data to 'continuation."
864 [#^Application world #^Node creature #^Spatial ear continuation]
865 (let [target (closest-node creature ear)
866 lis (Listener.)
867 audio-renderer (.getAudioRenderer world)
868 sp (hearing-pipeline continuation)]
869 (.setLocation lis (.getWorldTranslation ear))
870 (.setRotation lis (.getWorldRotation ear))
871 (bind-sense target lis)
872 (update-listener-velocity! target lis)
873 (.addListener audio-renderer lis)
874 (.registerSoundProcessor audio-renderer lis sp)))
875 #+end_src
877 ** Ear Creation
879 #+name: hearing-kernel
880 #+begin_src clojure
881 (defn hearing-kernel
882 "Returns a function which returns auditory sensory data when called
883 inside a running simulation."
884 [#^Node creature #^Spatial ear]
885 (let [hearing-data (atom [])
886 register-listener!
887 (runonce
888 (fn [#^Application world]
889 (add-ear!
890 world creature ear
891 (comp #(reset! hearing-data %)
892 byteBuffer->pulse-vector))))]
893 (fn [#^Application world]
894 (register-listener! world)
895 (let [data @hearing-data
896 topology
897 (vec (map #(vector % 0) (range 0 (count data))))]
898 [topology data]))))
900 (defn hearing!
901 "Endow the creature in a particular world with the sense of
902 hearing. Will return a sequence of functions, one for each ear,
903 which when called will return the auditory data from that ear."
904 [#^Node creature]
905 (for [ear (ears creature)]
906 (hearing-kernel creature ear)))
907 #+end_src
909 Each function returned by =hearing-kernel= will register a new
910 =Listener= with the simulation the first time it is called. Each time
911 it is called, the hearing-function will return a vector of linear PCM
912 encoded sound data that was heard since the last frame. The size of
913 this vector is of course determined by the overall framerate of the
914 game. With a constant framerate of 60 frames per second and a sampling
915 frequency of 44,100 samples per second, the vector will have exactly
916 44,100 / 60 = 735 elements.
918 ** Visualizing Hearing
920 This is a simple visualization function which displays the waveform
921 reported by the simulated sense of hearing. It converts the values
922 reported in the vector returned by the hearing function from the range
923 [-1.0, 1.0] to the range [0, 255], converts to integer, and displays
924 the number as a greyscale pixel.
926 #+name: hearing-display
927 #+begin_src clojure
928 (in-ns 'cortex.hearing)
930 (defn view-hearing
931 "Creates a function which accepts a list of auditory data and
932 display each element of the list to the screen as an image."
933 []
934 (view-sense
935 (fn [[coords sensor-data]]
936 (let [pixel-data
937 (vec
938 (map
939 #(rem (int (* 255 (/ (+ 1 %) 2))) 256)
940 sensor-data))
941 height 50
942 image (BufferedImage. (max 1 (count coords)) height
943 BufferedImage/TYPE_INT_RGB)]
944 (dorun
945 (for [x (range (count coords))]
946 (dorun
947 (for [y (range height)]
948 (let [raw-sensor (pixel-data x)]
949 (.setRGB image x y (gray raw-sensor)))))))
950 image))))
951 #+end_src
953 * Testing Hearing
954 ** Advanced Java Example
956 I wrote a test case in Java that demonstrates the use of the Java
957 components of this hearing system. It is part of a larger Java library
958 to capture perfect audio from jMonkeyEngine. Some of the clojure
959 constructs above are partially reiterated in the Java source file. But
960 first, the video! As far as I know, this is the first instance of
961 multiple simulated listeners in a virtual environment using OpenAL.
963 #+begin_html
964 <div class="figure">
965 <center>
966 <video controls="controls" width="500">
967 <source src="../video/java-hearing-test.ogg" type="video/ogg"
968 preload="none" poster="../images/aurellem-1280x480.png" />
969 </video>
970 <br> <a href="http://www.youtube.com/watch?v=oCEfK0yhDrY"> YouTube </a>
971 </center>
972 <p>The blue sphere is emitting a constant sound. Each gray box is
973 listening for sound, and will change color from gray to green if it
974 detects sound which is louder than a certain threshold. As the blue
975 sphere travels along the path, it excites each of the cubes in turn.</p>
976 </div>
977 #+end_html
979 #+include: "../../jmeCapture/src/com/aurellem/capture/examples/Advanced.java" src java
981 Here is a small clojure program to drive the java program and make it
982 available as part of my test suite.
984 #+name: test-hearing-1
985 #+begin_src clojure
986 (in-ns 'cortex.test.hearing)
988 (defn test-java-hearing
989 "Testing hearing:
990 You should see a blue sphere flying around several
991 cubes. As the sphere approaches each cube, it turns
992 green."
993 []
994 (doto (com.aurellem.capture.examples.Advanced.)
995 (.setSettings
996 (doto (AppSettings. true)
997 (.setAudioRenderer "Send")))
998 (.setShowSettings false)
999 (.setPauseOnLostFocus false)))
1000 #+end_src
1002 ** Adding Hearing to the Worm
1004 To the worm, I add a new node called "ears" with one child which
1005 represents the worm's single ear.
1007 #+attr_html: width=755
1008 #+caption: The Worm with a newly added node describing an ear.
1009 [[../images/worm-with-ear.png]]
1011 The node highlighted in yellow is the top-level "ears" node. Its
1012 child, highlighted in orange, represents the single ear the creature
1013 has. The ear is located right above the curved part of the
1014 worm's lower hemispherical region opposite the eye.
1016 The other empty nodes represent the worm's single joint and eye and are
1017 described in [[./body.org][body]] and [[./vision.org][vision]].
1019 #+name: test-hearing-2
1020 #+begin_src clojure
1021 (in-ns 'cortex.test.hearing)
1023 (defn test-worm-hearing
1024 "Testing hearing:
1025 You will see the worm fall onto a table. There is a long
1026 horizontal bar which shows the waveform of whatever the worm is
1027 hearing. When you play a sound, the bar should display a waveform.
1029 Keys:
1030 <enter> : play sound
1031 l : play hymn"
1032 ([] (test-worm-hearing false))
1033 ([record?]
1034 (let [the-worm (doto (worm) (body!))
1035 hearing (hearing! the-worm)
1036 hearing-display (view-hearing)
1038 tone (AudioNode. (asset-manager)
1039 "Sounds/pure.wav" false)
1041 hymn (AudioNode. (asset-manager)
1042 "Sounds/ear-and-eye.wav" false)]
1043 (world
1044 (nodify [the-worm (floor)])
1045 (merge standard-debug-controls
1046 {"key-return"
1047 (fn [_ value]
1048 (if value (.play tone)))
1049 "key-l"
1050 (fn [_ value]
1051 (if value (.play hymn)))})
1052 (fn [world]
1053 (light-up-everything world)
1054 (let [timer (IsoTimer. 60)]
1055 (.setTimer world timer)
1056 (display-dilated-time world timer))
1057 (if record?
1058 (do
1059 (com.aurellem.capture.Capture/captureVideo
1060 world
1061 (File. "/home/r/proj/cortex/render/worm-audio/frames"))
1062 (com.aurellem.capture.Capture/captureAudio
1063 world
1064 (File. "/home/r/proj/cortex/render/worm-audio/audio.wav")))))
1066 (fn [world tpf]
1067 (hearing-display
1068 (map #(% world) hearing)
1069 (if record?
1070 (File. "/home/r/proj/cortex/render/worm-audio/hearing-data"))))))))
1071 #+end_src
1073 #+results: test-hearing-2
1074 : #'cortex.test.hearing/test-worm-hearing
1076 In this test, I load the worm with its newly formed ear and let it
1077 hear sounds. The sound the worm is hearing is localized to the origin
1078 of the world, and you can see that, as the worm is knocked farther from
1079 the origin by the balls, it hears the sound less intensely.
1081 The sound you hear in the video is from the worm's perspective. Notice
1082 how the pure tone becomes fainter and the visual display of the
1083 auditory data becomes less pronounced as the worm falls farther away
1084 from the source of the sound.
1086 #+begin_html
1087 <div class="figure">
1088 <center>
1089 <video controls="controls" width="600">
1090 <source src="../video/worm-hearing.ogg" type="video/ogg"
1091 preload="none" poster="../images/aurellem-1280x480.png" />
1092 </video>
1093 <br> <a href="http://youtu.be/KLUtV1TNksI"> YouTube </a>
1094 </center>
1095 <p>The worm can now hear the sound pulses produced from the
1096 hymn. Notice the strikingly different pattern that human speech
1097 makes compared to the instruments. Once the worm is pushed off the
1098 floor, the sound it hears is attenuated, and the display of the
1099 sound it hears becomes fainter. This shows the 3D localization of
1100 sound in this world.</p>
1101 </div>
1102 #+end_html
1104 *** Creating the Ear Video
1105 #+name: magick-3
1106 #+begin_src clojure
1107 (ns cortex.video.magick3
1108 (:import java.io.File)
1109 (:use clojure.java.shell))
1111 (defn images [path]
1112 (sort (rest (file-seq (File. path)))))
1114 (def base "/home/r/proj/cortex/render/worm-audio/")
1116 (defn pics [file]
1117 (images (str base file)))
1119 (defn combine-images []
1120 (let [main-view (pics "frames")
1121 hearing (pics "hearing-data")
1122 background (repeat 9001 (File. (str base "background.png")))
1123 targets (map
1124 #(File. (str base "out/" (format "%07d.png" %)))
1125 (range 0 (count main-view)))]
1126 (dorun
1127 (pmap
1128 (comp
1129 (fn [[background main-view hearing target]]
1130 (println target)
1131 (sh "convert"
1132 background
1133 main-view "-geometry" "+66+21" "-composite"
1134 hearing "-geometry" "+21+526" "-composite"
1135 target))
1136 (fn [& args] (map #(.getCanonicalPath %) args)))
1137 background main-view hearing targets))))
1138 #+end_src
1140 #+begin_src sh :results silent
1141 cd /home/r/proj/cortex/render/worm-audio
1142 ffmpeg -r 60 -i out/%07d.png -i audio.wav \
1143 -b:a 128k -b:v 9001k \
1144 -c:a libvorbis -c:v libtheora -g 60 worm-hearing.ogg
1145 #+end_src
1147 * Headers
1149 #+name: hearing-header
1150 #+begin_src clojure
1151 (ns cortex.hearing
1152 "Simulate the sense of hearing in jMonkeyEngine3. Enables multiple
1153 listeners at different positions in the same world. Automatically
1154 reads ear-nodes from specially prepared blender files and
1155 instantiates them in the world as simulated ears."
1156 {:author "Robert McIntyre"}
1157 (:use (cortex world util sense))
1158 (:import java.nio.ByteBuffer)
1159 (:import java.awt.image.BufferedImage)
1160 (:import org.tritonus.share.sampled.FloatSampleTools)
1161 (:import (com.aurellem.capture.audio
1162 SoundProcessor AudioSendRenderer))
1163 (:import javax.sound.sampled.AudioFormat)
1164 (:import (com.jme3.scene Spatial Node))
1165 (:import com.jme3.audio.Listener)
1166 (:import com.jme3.app.Application)
1167 (:import com.jme3.scene.control.AbstractControl))
1168 #+end_src
1170 #+name: test-header
1171 #+begin_src clojure
1172 (ns cortex.test.hearing
1173 (:use (cortex world util hearing body))
1174 (:use cortex.test.body)
1175 (:import (com.jme3.audio AudioNode Listener))
1176 (:import java.io.File)
1177 (:import com.jme3.scene.Node
1178 com.jme3.system.AppSettings
1179 com.jme3.math.Vector3f)
1180 (:import (com.aurellem.capture Capture IsoTimer RatchetTimer)))
1181 #+end_src
1183 #+results: test-header
1184 : com.aurellem.capture.RatchetTimer
1186 * Source Listing
1187 - [[../src/cortex/hearing.clj][cortex.hearing]]
1188 - [[../src/cortex/test/hearing.clj][cortex.test.hearing]]
1189 #+html: <ul> <li> <a href="../org/hearing.org">This org file</a> </li> </ul>
1190 - [[http://hg.bortreb.com ][source-repository]]
1192 * Next
1193 The worm can see and hear, but it can't feel the world or
1194 itself. Next post, I'll give the worm a [[./touch.org][sense of touch]].
1198 * COMMENT Code Generation
1200 #+begin_src clojure :tangle ../src/cortex/hearing.clj
1201 <<hearing-header>>
1202 <<hearing-pipeline>>
1203 <<hearing-ears>>
1204 <<hearing-kernel>>
1205 <<hearing-display>>
1206 #+end_src
1208 #+begin_src clojure :tangle ../src/cortex/test/hearing.clj
1209 <<test-header>>
1210 <<test-hearing-1>>
1211 <<test-hearing-2>>
1212 #+end_src
1214 #+begin_src clojure :tangle ../src/cortex/video/magick3.clj
1215 <<magick-3>>
1216 #+end_src
1218 #+begin_src C :tangle ../../audio-send/Alc/backends/send.c
1219 <<send-header>>
1220 <<send-state>>
1221 <<sync-macros>>
1222 <<sync-sources>>
1223 <<sync-contexts>>
1224 <<context-creation>>
1225 <<context-switching>>
1226 <<main-loop>>
1227 <<jni-step>>
1228 <<jni-get-samples>>
1229 <<listener-manage>>
1230 <<jni-init>>
1231 <<device-init>>
1232 #+end_src