#+title: Simulated Sense of Hearing
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Simulating multiple listeners and the sense of hearing in jMonkeyEngine3
#+keywords: simulated hearing, openal, clojure, jMonkeyEngine3, LWJGL, AI
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+BABEL: :exports both :noweb yes :cache no :mkdirp yes

* Hearing

I want to be able to place ears in a similar manner to how I place
the eyes: each ear should occupy a unique spatial position, and
should produce as output, at every tick, the F.F.T. of whatever
signals are arriving at that point.

Hearing is one of the more difficult senses to simulate, because there
is less support for obtaining the actual sound data that is processed
by jMonkeyEngine3.

jMonkeyEngine's sound system works as follows:

- jMonkeyEngine uses the =AppSettings= for the particular application
  to determine what sort of =AudioRenderer= should be used (a
  configuration sketch follows this list).
- Although some support is provided for multiple audio rendering
  backends, jMonkeyEngine at the time of this writing will either
  pick no =AudioRenderer= at all, or the =LwjglAudioRenderer=.
- jMonkeyEngine tries to figure out what sort of system you're
  running and extracts the appropriate native libraries.
- The =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (LightWeight Java Game
  Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]].
- =OpenAL= calculates the 3D sound localization and feeds a stream of
  sound to any of various sound output devices with which it knows
  how to communicate.
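
As a sketch of the first step above, an application can request the
LWJGL-backed renderer explicitly through =AppSettings=. This uses the
stock jMonkeyEngine3 API, though the exact constants may vary between
versions:

#+begin_src java
// Minimal sketch: asking jMonkeyEngine3 for the LwjglAudioRenderer.
import com.jme3.app.SimpleApplication;
import com.jme3.system.AppSettings;

public class AudioSettingsSketch extends SimpleApplication {
  public static void main(String[] args) {
    AudioSettingsSketch app = new AudioSettingsSketch();
    AppSettings settings = new AppSettings(true);
    // jMonkeyEngine reads this value to decide which AudioRenderer
    // to construct -- here, the LWJGL/OpenAL backend.
    settings.setAudioRenderer(AppSettings.LWJGL_OPENAL);
    app.setSettings(settings);
    app.start();
  }
  public void simpleInitApp() {}
}
#+end_src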

A consequence of this is that there's no way to access the actual
sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
one /listener/, which normally isn't a problem for games, but becomes
a problem when trying to make multiple AI creatures that can each hear
the world from a different perspective.

To make many AI creatures in jMonkeyEngine that can each hear the
world from their own perspective, it is necessary to go all the way
back to =OpenAL= and implement support for simulated hearing there.

* Extending =OpenAL=
** =OpenAL= Devices

=OpenAL= goes to great lengths to support many different systems, all
with different sound capabilities and interfaces. It accomplishes this
difficult task by providing code for many different sound backends in
pseudo-objects called /Devices/. There's a device for the Linux Open
Sound System and the Advanced Linux Sound Architecture, there's one
for Direct Sound on Windows, there's even one for Solaris. =OpenAL=
solves the problem of platform independence by providing all these
Devices.

Wrapper libraries such as LWJGL are free to examine the system on
which they are running and then select an appropriate device for that
system.

There are also a few "special" devices that don't interface with any
particular system. These include the Null Device, which doesn't do
anything, and the Wave Device, which writes whatever sound it receives
to a file, if everything has been set up correctly when configuring
=OpenAL=.
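
As a concrete sketch (not part of this project's code), the Wave
Device can be selected by name when LWJGL creates the =OpenAL=
context. The name "Wave File Writer" is the one OpenAL Soft uses for
its Wave Device, and the output file comes from the =OpenAL=
configuration, so this only works if that configuration is in place:

#+begin_src java
// Hedged sketch: opening OpenAL's file-writing Wave Device via LWJGL 2.
// The device name and the configured output file are assumptions about
// the local OpenAL Soft setup.
import org.lwjgl.openal.AL;

public class WaveDeviceSketch {
  public static void main(String[] args) throws Exception {
    // The first argument names the backend Device to open.
    AL.create("Wave File Writer", 44100, 60, false);
    // ... any sound played now is mixed straight into the wave file ...
    AL.destroy();
  }
}
#+end_src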

Actual mixing of the sound data happens in the Devices, and they are
the only point in the sound rendering process where this data is
available.

Therefore, in order to support multiple listeners, and to get the
sound data in a form that the AIs can use, it is necessary to create a
new Device that supports these features.

** The Send Device
Adding a device to OpenAL is rather tricky -- there are five separate
files in the =OpenAL= source tree that must be modified to do so. I've
documented this process [[./add-new-device.org][here]] for anyone who is interested.

Onward to the actual Device!

Again, my objectives are:

- Support multiple listeners from jMonkeyEngine3.
- Get access to the rendered sound data for further processing from
  clojure.

** =send.c=

** Header
#+srcname: send-header
#+begin_src C
#include "config.h"
#include <stdlib.h>
#include "alMain.h"
#include "AL/al.h"
#include "AL/alc.h"
#include "alSource.h"
#include <jni.h>

//////////////////// Summary

struct send_data;
struct context_data;

static void addContext(ALCdevice *, ALCcontext *);
static void syncContexts(ALCcontext *master, ALCcontext *slave);
static void syncSources(ALsource *master, ALsource *slave,
                        ALCcontext *masterCtx, ALCcontext *slaveCtx);

static void syncSourcei(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSourcef(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSource3f(ALuint master, ALuint slave,
                         ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);

static void swapInContext(ALCdevice *, struct context_data *);
static void saveContext(ALCdevice *, struct context_data *);
static void limitContext(ALCdevice *, ALCcontext *);
static void unLimitContext(ALCdevice *);

static void init(ALCdevice *);
static void renderData(ALCdevice *, int samples);

#define UNUSED(x) (void)(x)
#+end_src

The main idea behind the Send device is to take advantage of the fact
that LWJGL only manages one /context/ when using OpenAL. A /context/
is like a container that holds samples and keeps track of where the
listener is. In order to support multiple listeners, the Send device
identifies the LWJGL context as the master context, and creates any
number of slave contexts to represent additional listeners. Every
time the device renders sound, it synchronizes every source from the
master LWJGL context to the slave contexts. Then, it renders each
context separately, using a different listener for each one. The
rendered sound is made available via JNI to jMonkeyEngine.

To recap, the process is:
- Set the LWJGL context as "master" in the =init()= method.
- Create any number of additional contexts via =addContext()=.
- At every call to =renderData()=, sync the master context with the
  slave contexts via =syncContexts()=.
- =syncContexts()= calls =syncSources()= to sync all the sources
  which are in the master context.
- =limitContext()= and =unLimitContext()= make it possible to render
  only one context at a time.

** Necessary State
#+begin_src C
//////////////////// State

typedef struct context_data {
  ALfloat ClickRemoval[MAXCHANNELS];
  ALfloat PendingClicks[MAXCHANNELS];
  ALvoid *renderBuffer;
  ALCcontext *ctx;
} context_data;

typedef struct send_data {
  ALuint size;
  context_data **contexts;
  ALuint numContexts;
  ALuint maxContexts;
} send_data;
#+end_src

Switching between contexts is not the normal operation of a Device,
and one of the problems with doing so is that a Device normally keeps
around a few pieces of state, such as the =ClickRemoval= array above,
which will become corrupted if the contexts are not rendered in
parallel. The solution is to create a copy of this normally global
device state for each context, and copy it back and forth into and out
of the actual device state whenever a context is rendered.

** Synchronization Macros

#+begin_src C
//////////////////// Context Creation / Synchronization

#define _MAKE_SYNC(NAME, INIT_EXPR, GET_EXPR, SET_EXPR)  \
  void NAME (ALuint sourceID1, ALuint sourceID2,         \
             ALCcontext *ctx1, ALCcontext *ctx2,         \
             ALenum param){                              \
    INIT_EXPR;                                           \
    ALCcontext *current = alcGetCurrentContext();        \
    alcMakeContextCurrent(ctx1);                         \
    GET_EXPR;                                            \
    alcMakeContextCurrent(ctx2);                         \
    SET_EXPR;                                            \
    alcMakeContextCurrent(current);                      \
  }

#define MAKE_SYNC(NAME, TYPE, GET, SET)                  \
  _MAKE_SYNC(NAME,                                       \
             TYPE value,                                 \
             GET(sourceID1, param, &value),              \
             SET(sourceID2, param, value))

#define MAKE_SYNC3(NAME, TYPE, GET, SET)                       \
  _MAKE_SYNC(NAME,                                             \
             TYPE value1; TYPE value2; TYPE value3;,           \
             GET(sourceID1, param, &value1, &value2, &value3), \
             SET(sourceID2, param, value1, value2, value3))

MAKE_SYNC( syncSourcei,  ALint,   alGetSourcei,  alSourcei);
MAKE_SYNC( syncSourcef,  ALfloat, alGetSourcef,  alSourcef);
MAKE_SYNC3(syncSource3i, ALint,   alGetSource3i, alSource3i);
MAKE_SYNC3(syncSource3f, ALfloat, alGetSource3f, alSource3f);
#+end_src

Setting the state of an =OpenAL= source is done with the =alSourcei=,
=alSourcef=, =alSource3i=, and =alSource3f= functions. In order to
completely synchronize two sources, it is necessary to use all of
them. These macros help to condense the otherwise repetitive
synchronization code involving these similar low-level =OpenAL= functions.

** Source Synchronization
#+begin_src C
void syncSources(ALsource *masterSource, ALsource *slaveSource,
                 ALCcontext *masterCtx, ALCcontext *slaveCtx){
  ALuint master = masterSource->source;
  ALuint slave = slaveSource->source;
  ALCcontext *current = alcGetCurrentContext();

  syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);

  syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
  syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
  syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);

  syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
  syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);

  alcMakeContextCurrent(masterCtx);
  ALint source_type;
  alGetSourcei(master, AL_SOURCE_TYPE, &source_type);

  // Only static sources are currently synchronized!
  if (AL_STATIC == source_type){
    ALint master_buffer;
    ALint slave_buffer;
    alGetSourcei(master, AL_BUFFER, &master_buffer);
    alcMakeContextCurrent(slaveCtx);
    alGetSourcei(slave, AL_BUFFER, &slave_buffer);
    if (master_buffer != slave_buffer){
      alSourcei(slave, AL_BUFFER, master_buffer);
    }
  }

  // Synchronize the state of the two sources.
  alcMakeContextCurrent(masterCtx);
  ALint masterState;
  ALint slaveState;

  alGetSourcei(master, AL_SOURCE_STATE, &masterState);
  alcMakeContextCurrent(slaveCtx);
  alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);

  if (masterState != slaveState){
    switch (masterState){
    case AL_INITIAL : alSourceRewind(slave); break;
    case AL_PLAYING : alSourcePlay(slave);   break;
    case AL_PAUSED  : alSourcePause(slave);  break;
    case AL_STOPPED : alSourceStop(slave);   break;
    }
  }
  // Restore whatever context was previously active.
  alcMakeContextCurrent(current);
}
#+end_src
This function is long because it has to exhaustively go through all the
possible state that a source can have and make sure that it is the
same between the master and slave sources. I'd like to take this
moment to salute the [[http://connect.creativelabs.com/openal/Documentation/Forms/AllItems.aspx][=OpenAL= Reference Manual]], which provides a very
good description of =OpenAL='s internals.

** Context Synchronization
#+begin_src C
void syncContexts(ALCcontext *master, ALCcontext *slave){
  /* If there aren't sufficient sources in slave to mirror
     the sources in master, create them. */
  ALCcontext *current = alcGetCurrentContext();

  UIntMap *masterSourceMap = &(master->SourceMap);
  UIntMap *slaveSourceMap = &(slave->SourceMap);
  ALuint numMasterSources = masterSourceMap->size;
  ALuint numSlaveSources = slaveSourceMap->size;

  alcMakeContextCurrent(slave);
  if (numSlaveSources < numMasterSources){
    ALuint numMissingSources = numMasterSources - numSlaveSources;
    ALuint newSources[numMissingSources];
    alGenSources(numMissingSources, newSources);
  }

  /* Now, slave is guaranteed to have at least as many sources
     as master. Sync each source from master to the corresponding
     source in slave. */
  int i;
  for(i = 0; i < masterSourceMap->size; i++){
    syncSources((ALsource*)masterSourceMap->array[i].value,
                (ALsource*)slaveSourceMap->array[i].value,
                master, slave);
  }
  alcMakeContextCurrent(current);
}
#+end_src

Most of the hard work in Context Synchronization is done in
=syncSources()=. The only thing that =syncContexts()= has to worry
about is automatically creating new sources whenever a slave context
does not have the same number of sources as the master context.

** Context Creation
#+begin_src C
static void addContext(ALCdevice *Device, ALCcontext *context){
  send_data *data = (send_data*)Device->ExtraData;
  // expand the array of contexts if necessary
  if (data->numContexts >= data->maxContexts){
    ALuint newMaxContexts = data->maxContexts*2 + 1;
    // note: the array holds pointers, so allocate pointer-sized slots
    data->contexts =
      realloc(data->contexts, newMaxContexts*sizeof(context_data*));
    data->maxContexts = newMaxContexts;
  }
  // create a context_data and add it to the main array
  context_data *ctxData;
  ctxData = (context_data*)calloc(1, sizeof(*ctxData));
  ctxData->renderBuffer =
    malloc(BytesFromDevFmt(Device->FmtType) *
           Device->NumChan * Device->UpdateSize);
  ctxData->ctx = context;

  data->contexts[data->numContexts] = ctxData;
  data->numContexts++;
}
#+end_src

Here, the slave context is created, and its data is stored in the
device-wide =ExtraData= structure. The =renderBuffer= that is created
here is where the rendered sound samples for this slave context will
eventually go.

** Context Switching
#+begin_src C
//////////////////// Context Switching

/* A device brings along with it two pieces of state
 * which have to be swapped in and out with each context.
 */
static void swapInContext(ALCdevice *Device, context_data *ctxData){
  memcpy(Device->ClickRemoval, ctxData->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
  memcpy(Device->PendingClicks, ctxData->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
}

static void saveContext(ALCdevice *Device, context_data *ctxData){
  memcpy(ctxData->ClickRemoval, Device->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
  memcpy(ctxData->PendingClicks, Device->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
}

static ALCcontext **currentContext;
static ALuint currentNumContext;

/* By default, all contexts are rendered at once for each call to
 * aluMixData. This function uses the internals of the ALCdevice
 * struct to temporarily cause aluMixData to render only the chosen
 * context.
 */
static void limitContext(ALCdevice *Device, ALCcontext *ctx){
  currentContext = Device->Contexts;
  currentNumContext = Device->NumContexts;
  Device->Contexts = &ctx;
  Device->NumContexts = 1;
}

static void unLimitContext(ALCdevice *Device){
  Device->Contexts = currentContext;
  Device->NumContexts = currentNumContext;
}
#+end_src

=OpenAL= normally renders all contexts in parallel, outputting the
whole result to the buffer. It does this by iterating over the
=Device->Contexts= array and rendering each context to the buffer in
turn. By temporarily setting =Device->NumContexts= to 1 and adjusting
the Device's context list to put the desired context-to-be-rendered
into position 0, we can trick =OpenAL= into rendering each slave
context separately from all the others.

** Main Device Loop
#+begin_src C
//////////////////// Main Device Loop

/* Establish the LWJGL context as the master context, which will
 * be synchronized to all the slave contexts.
 */
static void init(ALCdevice *Device){
  ALCcontext *masterContext = alcGetCurrentContext();
  addContext(Device, masterContext);
}

static void renderData(ALCdevice *Device, int samples){
  if(!Device->Connected){return;}
  send_data *data = (send_data*)Device->ExtraData;
  ALCcontext *current = alcGetCurrentContext();

  ALuint i;
  for (i = 1; i < data->numContexts; i++){
    syncContexts(data->contexts[0]->ctx, data->contexts[i]->ctx);
  }

  if ((uint)samples > Device->UpdateSize){
    printf("exceeding internal buffer size; dropping samples\n");
    printf("requested %d; available %d\n", samples, Device->UpdateSize);
    samples = (int)Device->UpdateSize;
  }

  for (i = 0; i < data->numContexts; i++){
    context_data *ctxData = data->contexts[i];
    ALCcontext *ctx = ctxData->ctx;
    alcMakeContextCurrent(ctx);
    limitContext(Device, ctx);
    swapInContext(Device, ctxData);
    aluMixData(Device, ctxData->renderBuffer, samples);
    saveContext(Device, ctxData);
    unLimitContext(Device);
  }
  alcMakeContextCurrent(current);
}
#+end_src

The main loop synchronizes the master LWJGL context with all the slave
contexts, then walks each context, rendering just that context to its
audio-sample storage buffer.

** JNI Methods

At this point, we have the ability to create multiple listeners by
using the master/slave context trick, and the rendered audio data is
waiting patiently in internal buffers, one for each listener. We need
a way to transport this information to Java, and also a way to drive
this device from Java. The following JNI interface code is inspired
by the way LWJGL interfaces with =OpenAL=.

*** step

#+begin_src C
//////////////////// JNI Methods

#include "com_aurellem_send_AudioSend.h"

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nstep
 * Signature: (JI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nstep
(JNIEnv *env, jclass clazz, jlong device, jint samples){
  UNUSED(env);UNUSED(clazz);
  renderData((ALCdevice*)((intptr_t)device), samples);
}
#+end_src
This device, unlike most of the other devices in =OpenAL=, does not
render sound unless asked. This enables the system to slow down or
speed up depending on the needs of the AIs who are using it to
listen. If the device tried to render samples in real-time, a
complicated AI whose mind takes 100 seconds of computer time to
simulate 1 second of AI-time would miss almost all of the sound in
its environment.
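
As a sketch, driving the device from Java looks something like the
loop below. The JNI signature above implies that =AudioSend= declares
a static native =nstep(long, int)=; since =AudioSend.java= is only
included by reference later, its exact shape here is an assumption:

#+begin_src java
// Hedged sketch: render sound at the simulation's pace, not real time.
public class StepSketch {
  static void simulate(long device, int sampleRate, int fps, int ticks) {
    int samplesPerTick = sampleRate / fps;  // e.g. 44100 / 60 = 735
    for (int t = 0; t < ticks; t++) {
      // think();  // the AI may take 100 seconds of real time here...
      com.aurellem.send.AudioSend.nstep(device, samplesPerTick);
      // ...but only one tick's worth of sound is rendered.
    }
  }
}
#+end_src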

*** getSamples
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetSamples
 * Signature: (JLjava/nio/ByteBuffer;III)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ngetSamples
(JNIEnv *env, jclass clazz, jlong device, jobject buffer, jint position,
 jint samples, jint n){
  UNUSED(clazz);

  ALvoid *buffer_address =
    ((ALbyte *)(((char*)(*env)->GetDirectBufferAddress(env, buffer)) + position));
  ALCdevice *recorder = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)recorder->ExtraData;
  // bounds check: context n must actually exist
  if ((ALuint)n >= data->numContexts){return;}
  memcpy(buffer_address, data->contexts[n]->renderBuffer,
         BytesFromDevFmt(recorder->FmtType) * recorder->NumChan * samples);
}
#+end_src

This is the transport layer between C and Java that will eventually
allow us to access rendered sound data from clojure.
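
For example, the Java side might pull one listener's samples as
below. The buffer must be allocated with =allocateDirect=, since the C
code above locates it with =GetDirectBufferAddress=; the =ngetSamples=
declaration is again assumed from its JNI signature:

#+begin_src java
import java.nio.ByteBuffer;

// Hedged sketch: copy listener n's freshly rendered samples into Java.
public class GrabSamplesSketch {
  static ByteBuffer grab(long device, int samples, int n, int bytesPerFrame) {
    // bytesPerFrame = (bits per sample / 8) * channels, matching the
    // BytesFromDevFmt(...) * NumChan computation on the C side.
    ByteBuffer buf = ByteBuffer.allocateDirect(samples * bytesPerFrame);
    com.aurellem.send.AudioSend.ngetSamples(device, buf, 0, samples, n);
    return buf;
  }
}
#+end_src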

*** Listener Management

=addListener=, =setNthListenerf=, and =setNthListener3f= are
necessary to change the properties of any listener other than the
master one, since only the listener of the current active context is
affected by the normal =OpenAL= listener calls.

#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    naddListener
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_naddListener
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(env); UNUSED(clazz);
  //printf("creating new context via naddListener\n");
  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  ALCcontext *new = alcCreateContext(Device, NULL);
  addContext(Device, new);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListener3f
 * Signature: (IFFFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListener3f
(JNIEnv *env, jclass clazz, jint param,
 jfloat v1, jfloat v2, jfloat v3, jlong device, jint contextNum){
  UNUSED(env);UNUSED(clazz);

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)Device->ExtraData;

  ALCcontext *current = alcGetCurrentContext();
  // bounds check: context contextNum must actually exist
  if ((ALuint)contextNum >= data->numContexts){return;}
  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
  alListener3f(param, v1, v2, v3);
  alcMakeContextCurrent(current);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListenerf
 * Signature: (IFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListenerf
(JNIEnv *env, jclass clazz, jint param, jfloat v1, jlong device,
 jint contextNum){
  UNUSED(env);UNUSED(clazz);

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)Device->ExtraData;

  ALCcontext *current = alcGetCurrentContext();
  // bounds check: context contextNum must actually exist
  if ((ALuint)contextNum >= data->numContexts){return;}
  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
  alListenerf(param, v1);
  alcMakeContextCurrent(current);
}
#+end_src
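
As a usage sketch, a hypothetical helper that places slave listener
=n= somewhere in space passes the standard =AL_POSITION= enum (bound
in LWJGL as =AL10.AL_POSITION=) through =nsetNthListener3f=:

#+begin_src java
import org.lwjgl.openal.AL10;

// Hedged sketch: position slave listener n at (x, y, z). The native
// declaration is assumed from the JNI signature above.
public class PlaceListenerSketch {
  static void place(long device, int n, float x, float y, float z) {
    com.aurellem.send.AudioSend.nsetNthListener3f(
        AL10.AL_POSITION, x, y, z, device, n);
  }
}
#+end_src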

*** Initialization
=initDevice= is called from the Java side after LWJGL has created its
context, and before any calls to =addListener=. It establishes the
LWJGL context as the master context.

=getAudioFormat= is a convenience function that uses JNI to build up a
=javax.sound.sampled.AudioFormat= object from data in the Device. This
way, there is no ambiguity about what the bits created by =step= and
returned by =getSamples= mean.

#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ninitDevice
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ninitDevice
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(env);UNUSED(clazz);
  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  init(Device);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetAudioFormat
 * Signature: (J)Ljavax/sound/sampled/AudioFormat;
 */
JNIEXPORT jobject JNICALL Java_com_aurellem_send_AudioSend_ngetAudioFormat
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(clazz);
  jclass AudioFormatClass =
    (*env)->FindClass(env, "javax/sound/sampled/AudioFormat");
  jmethodID AudioFormatConstructor =
    (*env)->GetMethodID(env, AudioFormatClass, "<init>", "(FIIZZ)V");

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  int isSigned;
  switch (Device->FmtType)
    {
    case DevFmtUByte:
    case DevFmtUShort: isSigned = 0; break;
    default : isSigned = 1;
    }
  float frequency = Device->Frequency;
  int bitsPerFrame = (8 * BytesFromDevFmt(Device->FmtType));
  int channels = Device->NumChan;
  jobject format = (*env)->
    NewObject(
              env,AudioFormatClass,AudioFormatConstructor,
              frequency,
              bitsPerFrame,
              channels,
              isSigned,
              0);
  return format;
}
#+end_src
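
With the =AudioFormat= in hand, sizing the Java-side buffers is
mechanical; a minimal sketch:

#+begin_src java
import javax.sound.sampled.AudioFormat;

// Hedged sketch: the returned AudioFormat removes any guesswork about
// how many bytes one tick of sound occupies.
public class FormatSketch {
  static int bytesNeeded(AudioFormat format, int samples) {
    // getFrameSize() = (sampleSizeInBits / 8) * channels, mirroring
    // the frame size used by renderData on the C side.
    return format.getFrameSize() * samples;
  }
}
#+end_src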

*** Boring Device management stuff
This code is more-or-less copied verbatim from the other =OpenAL=
backends. It's the basis for =OpenAL='s primitive object system.

#+begin_src C
//////////////////// Device Initialization / Management

static const ALCchar sendDevice[] = "Multiple Audio Send";

static ALCboolean send_open_playback(ALCdevice *device,
                                     const ALCchar *deviceName)
{
  send_data *data;
  // stop any buffering for stdout, so that I can
  // see the printf statements in my terminal immediately
  setbuf(stdout, NULL);

  if(!deviceName)
    deviceName = sendDevice;
  else if(strcmp(deviceName, sendDevice) != 0)
    return ALC_FALSE;
  data = (send_data*)calloc(1, sizeof(*data));
  device->szDeviceName = strdup(deviceName);
  device->ExtraData = data;
  return ALC_TRUE;
}

static void send_close_playback(ALCdevice *device)
{
  send_data *data = (send_data*)device->ExtraData;
  alcMakeContextCurrent(NULL);
  ALuint i;
  // Destroy all slave contexts. LWJGL will take care of
  // its own context.
  for (i = 1; i < data->numContexts; i++){
    context_data *ctxData = data->contexts[i];
    alcDestroyContext(ctxData->ctx);
    free(ctxData->renderBuffer);
    free(ctxData);
  }
  free(data);
  device->ExtraData = NULL;
}

static ALCboolean send_reset_playback(ALCdevice *device)
{
  SetDefaultWFXChannelOrder(device);
  return ALC_TRUE;
}

static void send_stop_playback(ALCdevice *Device){
  UNUSED(Device);
}

static const BackendFuncs send_funcs = {
  send_open_playback,
  send_close_playback,
  send_reset_playback,
  send_stop_playback,
  NULL,
  NULL, /* These would be filled with functions to */
  NULL, /* handle capturing audio if we were into  */
  NULL, /* that sort of thing...                   */
  NULL,
  NULL
};

ALCboolean alc_send_init(BackendFuncs *func_list){
  *func_list = send_funcs;
  return ALC_TRUE;
}

void alc_send_deinit(void){}

void alc_send_probe(enum DevProbe type)
{
  switch(type)
    {
    case DEVICE_PROBE:
      AppendDeviceList(sendDevice);
      break;
    case ALL_DEVICE_PROBE:
      AppendAllDeviceList(sendDevice);
      break;
    case CAPTURE_DEVICE_PROBE:
      break;
    }
}
#+end_src

* The Java interface, =AudioSend=

The Java interface to the Send Device follows naturally from the JNI
definitions. It is included here for completeness. The only thing here
of note is the =deviceID=. This is available from LWJGL, but the only
way to get it is reflection. Unfortunately, there is no way to control
the Send device other than by obtaining a pointer to it.
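
A sketch of the reflection trick follows; the private field name
"device" on =org.lwjgl.openal.ALCdevice= is an assumption about
LWJGL 2's internals, not a published API:

#+begin_src java
import java.lang.reflect.Field;
import org.lwjgl.openal.AL;
import org.lwjgl.openal.ALCdevice;

// Hedged sketch: fish the native device pointer out of LWJGL via
// reflection, since it is not exposed publicly.
public class DeviceIdSketch {
  static long deviceID() throws Exception {
    ALCdevice alcDevice = AL.getDevice();
    Field deviceField = ALCdevice.class.getDeclaredField("device");
    deviceField.setAccessible(true);
    return (Long) deviceField.get(alcDevice);
  }
}
#+end_src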

#+include: "../java/src/com/aurellem/send/AudioSend.java" src java :exports code

* Finally, Ears in clojure!

Now that the infrastructure is complete (modulo a few patches to
jMonkeyEngine3 to support accessing this modified version of =OpenAL=
that are not worth discussing), the clojure ear abstraction is rather
simple. Just as there were =SceneProcessors= for vision, there are
now =SoundProcessors= for hearing.

#+include: "../../jmeCapture/src/com/aurellem/capture/audio/SoundProcessor.java" src java

#+srcname: ears
#+begin_src clojure
(ns cortex.hearing
  "Simulate the sense of hearing in jMonkeyEngine3. Enables multiple
   listeners at different positions in the same world. Passes vectors
   of floats in the range [-1.0 -- 1.0] in PCM format to any arbitrary
   function."
  {:author "Robert McIntyre"}
  (:use (cortex world util))
  (:import java.nio.ByteBuffer)
  (:import org.tritonus.share.sampled.FloatSampleTools)
  (:import com.aurellem.capture.audio.SoundProcessor)
  (:import javax.sound.sampled.AudioFormat))

(defn sound-processor
  "Deals with converting ByteBuffers into Vectors of floats so that
   the continuation functions can be defined in terms of immutable
   stuff."
  [continuation]
  (proxy [SoundProcessor] []
    (cleanup [])
    (process
      [#^ByteBuffer audioSamples numSamples #^AudioFormat audioFormat]
      (let [bytes (byte-array numSamples)
            floats (float-array numSamples)]
        (.get audioSamples bytes 0 numSamples)
        (FloatSampleTools/byte2floatInterleaved
         bytes 0 floats 0
         (/ numSamples (.getFrameSize audioFormat)) audioFormat)
        (continuation
         (vec floats))))))

(defn add-ear
  "Add an ear to the world. The continuation function will be called
   on the FFT of the sounds which the ear hears in the given
   timeframe. Sound is 3D."
  [world listener continuation]
  (let [renderer (.getAudioRenderer world)]
    (.addListener renderer listener)
    (.registerSoundProcessor renderer listener
                             (sound-processor continuation))
    listener))
#+end_src

* Example

#+srcname: test-hearing
#+begin_src clojure :results silent
(ns test.hearing
  (:use (cortex world util hearing))
  (:import (com.jme3.audio AudioNode Listener))
  (:import com.jme3.scene.Node))

(defn setup-fn [world]
  (let [listener (Listener.)]
    (add-ear world listener #(println-repl (nth % 0)))))

(defn play-sound [node world value]
  (if (not value)
    (do
      (.playSource (.getAudioRenderer world) node))))

(defn test-basic-hearing []
  (.start
   (let [node1 (AudioNode. (asset-manager) "Sounds/pure.wav" false false)]
     (world
      (Node.)
      {"key-space" (partial play-sound node1)}
      setup-fn
      no-op))))
#+end_src

This extremely basic program prints out the first sample it encounters
at every time stamp. You can see the rendered sound being printed at
the REPL.

* COMMENT Code Generation

#+begin_src clojure :tangle ../../cortex/src/cortex/hearing.clj
<<ears>>
#+end_src

#+begin_src clojure :tangle ../../cortex/src/test/hearing.clj
<<test-hearing>>
#+end_src

#+begin_src C :tangle ../Alc/backends/send.c
<<send>>
#+end_src