#+title: Simulated Sense of Hearing
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Simulating multiple listeners and the sense of hearing in jMonkeyEngine3
#+keywords: simulated hearing, openal, clojure, jMonkeyEngine3, LWJGL, AI
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+BABEL: :exports both :noweb yes :cache no :mkdirp yes

* Hearing

I want to be able to place ears in a similar manner to how I place
the eyes: each ear should occupy a unique spatial position, and
should receive as output at every tick the F.F.T. of whatever
signals are happening at that point.

Hearing is one of the more difficult senses to simulate, because there
is little support for obtaining the actual sound data that is processed
by jMonkeyEngine3.

jMonkeyEngine's sound system works as follows:

- jMonkeyEngine uses the =AppSettings= for the particular application
  to determine what sort of =AudioRenderer= should be used.
- Although some support is provided for multiple =AudioRenderer=
  backends, jMonkeyEngine at the time of this writing will either
  pick no =AudioRenderer= at all, or the =LwjglAudioRenderer=.
- jMonkeyEngine tries to figure out what sort of system you're
  running and extracts the appropriate native libraries.
- The =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (LightWeight Java Game
  Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]].
- =OpenAL= calculates the 3D sound localization and feeds a stream of
  sound to any of various sound output devices with which it knows
  how to communicate.

A consequence of this is that there's no way to access the actual
sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
one /listener/, which normally isn't a problem for games, but becomes
a problem when trying to make multiple AI creatures that can each hear
the world from a different perspective.

To make many AI creatures in jMonkeyEngine that can each hear the
world from their own perspective, it is necessary to go all the way
back to =OpenAL= and implement support for simulated hearing there.

* Extending =OpenAL=
** =OpenAL= Devices

=OpenAL= goes to great lengths to support many different systems, all
with different sound capabilities and interfaces. It accomplishes this
difficult task by providing code for many different sound backends in
pseudo-objects called /Devices/. There's a device for the Linux Open
Sound System and the Advanced Linux Sound Architecture, there's one
for Direct Sound on Windows, there's even one for Solaris. =OpenAL=
solves the problem of platform independence by providing all these
Devices.

Wrapper libraries such as LWJGL are free to examine the system on
which they are running and then select an appropriate device for that
system.

There are also a few "special" devices that don't interface with any
particular system. These include the Null Device, which doesn't do
anything, and the Wave Device, which writes whatever sound it receives
to a file, if everything has been set up correctly when configuring
=OpenAL=.

Actual mixing of the sound data happens in the Devices, and they are
the only point in the sound rendering process where this data is
available.

Therefore, in order to support multiple listeners, and to get the
sound data in a form that the AIs can use, it is necessary to create a
new Device which supports these features.

** The Send Device
Adding a device to OpenAL is rather tricky -- there are five separate
files in the =OpenAL= source tree that must be modified to do so. I've
documented this process [[./add-new-device.org][here]] for anyone who is interested.


Onward to the actual Device!

Again, my objectives are:

- Support multiple listeners from jMonkeyEngine3.
- Get access to the rendered sound data for further processing from
  clojure.

** =send.c=

** Header
#+name: send-header
#+begin_src C
#include "config.h"
#include <stdlib.h>
#include "alMain.h"
#include "AL/al.h"
#include "AL/alc.h"
#include "alSource.h"
#include <jni.h>

//////////////////// Summary

struct send_data;
struct context_data;

static void addContext(ALCdevice *, ALCcontext *);
static void syncContexts(ALCcontext *master, ALCcontext *slave);
static void syncSources(ALsource *master, ALsource *slave,
                        ALCcontext *masterCtx, ALCcontext *slaveCtx);

static void syncSourcei(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSourcef(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSource3f(ALuint master, ALuint slave,
                         ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);

static void swapInContext(ALCdevice *, struct context_data *);
static void saveContext(ALCdevice *, struct context_data *);
static void limitContext(ALCdevice *, ALCcontext *);
static void unLimitContext(ALCdevice *);

static void init(ALCdevice *);
static void renderData(ALCdevice *, int samples);

#define UNUSED(x) (void)(x)
#+end_src

The main idea behind the Send device is to take advantage of the fact
that LWJGL only manages one /context/ when using OpenAL. A /context/
is like a container that holds samples and keeps track of where the
listener is. In order to support multiple listeners, the Send device
identifies the LWJGL context as the master context, and creates any
number of slave contexts to represent additional listeners. Every
time the device renders sound, it synchronizes every source from the
master LWJGL context to the slave contexts. Then, it renders each
context separately, using a different listener for each one. The
rendered sound is made available via JNI to jMonkeyEngine.

To recap, the process is:
- Set the LWJGL context as "master" in the =init()= method.
- Create any number of additional contexts via =addContext()=.
- At every call to =renderData()=, sync the master context with the
  slave contexts via =syncContexts()=.
- =syncContexts()= calls =syncSources()= to sync all the sources
  which are in the master context.
- =limitContext()= and =unLimitContext()= make it possible to render
  only one context at a time.

** Necessary State
#+name: send-state
#+begin_src C
//////////////////// State

typedef struct context_data {
  ALfloat ClickRemoval[MAXCHANNELS];
  ALfloat PendingClicks[MAXCHANNELS];
  ALvoid *renderBuffer;
  ALCcontext *ctx;
} context_data;

typedef struct send_data {
  ALuint size;
  context_data **contexts;
  ALuint numContexts;
  ALuint maxContexts;
} send_data;
#+end_src

Switching between contexts is not the normal operation of a Device,
and one of the problems with doing so is that a Device normally keeps
around a few pieces of state, such as the =ClickRemoval= array above,
which will become corrupted if the contexts are not rendered in
parallel. The solution is to create a copy of this normally global
device state for each context, and copy it back and forth into and out
of the actual device state whenever a context is rendered.

** Synchronization Macros
#+name: sync-macros
#+begin_src C
//////////////////// Context Creation / Synchronization

#define _MAKE_SYNC(NAME, INIT_EXPR, GET_EXPR, SET_EXPR)    \
  void NAME (ALuint sourceID1, ALuint sourceID2,           \
             ALCcontext *ctx1, ALCcontext *ctx2,           \
             ALenum param){                                \
    INIT_EXPR;                                             \
    ALCcontext *current = alcGetCurrentContext();          \
    alcMakeContextCurrent(ctx1);                           \
    GET_EXPR;                                              \
    alcMakeContextCurrent(ctx2);                           \
    SET_EXPR;                                              \
    alcMakeContextCurrent(current);                        \
  }

#define MAKE_SYNC(NAME, TYPE, GET, SET)                    \
  _MAKE_SYNC(NAME,                                         \
             TYPE value,                                   \
             GET(sourceID1, param, &value),                \
             SET(sourceID2, param, value))

#define MAKE_SYNC3(NAME, TYPE, GET, SET)                           \
  _MAKE_SYNC(NAME,                                                 \
             TYPE value1; TYPE value2; TYPE value3;,               \
             GET(sourceID1, param, &value1, &value2, &value3),     \
             SET(sourceID2, param, value1, value2, value3))

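/* For example, MAKE_SYNC(syncSourcef, ALfloat, alGetSourcef, alSourcef)
 * expands to a function syncSourcef(sourceID1, sourceID2, ctx1, ctx2, param)
 * that reads the given float parameter from the source in ctx1 and writes
 * it to the source in ctx2, restoring the original context afterwards.
 */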
MAKE_SYNC( syncSourcei,  ALint,   alGetSourcei,  alSourcei);
MAKE_SYNC( syncSourcef,  ALfloat, alGetSourcef,  alSourcef);
MAKE_SYNC3(syncSource3i, ALint,   alGetSource3i, alSource3i);
MAKE_SYNC3(syncSource3f, ALfloat, alGetSource3f, alSource3f);

#+end_src

Setting the state of an =OpenAL= source is done with the =alSourcei=,
=alSourcef=, =alSource3i=, and =alSource3f= functions. In order to
completely synchronize two sources, it is necessary to use all of
them. These macros help to condense the otherwise repetitive
synchronization code involving these similar low-level =OpenAL= functions.

** Source Synchronization
#+name: sync-sources
#+begin_src C
void syncSources(ALsource *masterSource, ALsource *slaveSource,
                 ALCcontext *masterCtx, ALCcontext *slaveCtx){
  ALuint master = masterSource->source;
  ALuint slave = slaveSource->source;
  ALCcontext *current = alcGetCurrentContext();

  syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);

  syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
  syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
  syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);

  syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
  syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);

  alcMakeContextCurrent(masterCtx);
  ALint source_type;
  alGetSourcei(master, AL_SOURCE_TYPE, &source_type);

  // Only static sources are currently synchronized!
  if (AL_STATIC == source_type){
    ALint master_buffer;
    ALint slave_buffer;
    alGetSourcei(master, AL_BUFFER, &master_buffer);
    alcMakeContextCurrent(slaveCtx);
    alGetSourcei(slave, AL_BUFFER, &slave_buffer);
    if (master_buffer != slave_buffer){
      alSourcei(slave, AL_BUFFER, master_buffer);
    }
  }

  // Synchronize the state of the two sources.
  alcMakeContextCurrent(masterCtx);
  ALint masterState;
  ALint slaveState;

  alGetSourcei(master, AL_SOURCE_STATE, &masterState);
  alcMakeContextCurrent(slaveCtx);
  alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);

  if (masterState != slaveState){
    switch (masterState){
    case AL_INITIAL : alSourceRewind(slave); break;
    case AL_PLAYING : alSourcePlay(slave);   break;
    case AL_PAUSED  : alSourcePause(slave);  break;
    case AL_STOPPED : alSourceStop(slave);   break;
    }
  }
  // Restore whatever context was previously active.
  alcMakeContextCurrent(current);
}
#+end_src
This function is long because it has to exhaustively go through all the
possible state that a source can have and make sure that it is the
same between the master and slave sources. I'd like to take this
moment to salute the [[http://connect.creativelabs.com/openal/Documentation/Forms/AllItems.aspx][=OpenAL= Reference Manual]], which provides a very
good description of =OpenAL='s internals.

** Context Synchronization
#+name: sync-contexts
#+begin_src C
void syncContexts(ALCcontext *master, ALCcontext *slave){
  /* If there aren't sufficient sources in slave to mirror
     the sources in master, create them. */
  ALCcontext *current = alcGetCurrentContext();

  UIntMap *masterSourceMap = &(master->SourceMap);
  UIntMap *slaveSourceMap = &(slave->SourceMap);
  ALuint numMasterSources = masterSourceMap->size;
  ALuint numSlaveSources = slaveSourceMap->size;

  alcMakeContextCurrent(slave);
  if (numSlaveSources < numMasterSources){
    ALuint numMissingSources = numMasterSources - numSlaveSources;
    ALuint newSources[numMissingSources];
    alGenSources(numMissingSources, newSources);
  }

  /* Now, slave is guaranteed to have at least as many sources
     as master. Sync each source from master to the corresponding
     source in slave. */
  int i;
  for(i = 0; i < masterSourceMap->size; i++){
    syncSources((ALsource*)masterSourceMap->array[i].value,
                (ALsource*)slaveSourceMap->array[i].value,
                master, slave);
  }
  alcMakeContextCurrent(current);
}
#+end_src

Most of the hard work in Context Synchronization is done in
=syncSources()=. The only thing that =syncContexts()= has to worry
about is automatically creating new sources whenever a slave context
does not have the same number of sources as the master context.

** Context Creation
#+name: context-creation
#+begin_src C
static void addContext(ALCdevice *Device, ALCcontext *context){
  send_data *data = (send_data*)Device->ExtraData;
  // expand the array of contexts if necessary
  if (data->numContexts >= data->maxContexts){
    ALuint newMaxContexts = data->maxContexts*2 + 1;
    data->contexts =
      realloc(data->contexts, newMaxContexts*sizeof(context_data*));
    data->maxContexts = newMaxContexts;
  }
  // create a context_data and add it to the main array
  context_data *ctxData;
  ctxData = (context_data*)calloc(1, sizeof(*ctxData));
  ctxData->renderBuffer =
    malloc(BytesFromDevFmt(Device->FmtType) *
           Device->NumChan * Device->UpdateSize);
  ctxData->ctx = context;

  data->contexts[data->numContexts] = ctxData;
  data->numContexts++;
}
#+end_src

Here, the slave context is registered, and its data is stored in the
device-wide =ExtraData= structure. The =renderBuffer= that is created
here is where the rendered sound samples for this slave context will
eventually go.

** Context Switching
#+name: context-switching
#+begin_src C
//////////////////// Context Switching

/* A device brings along with it two pieces of state
 * which have to be swapped in and out with each context.
 */
static void swapInContext(ALCdevice *Device, context_data *ctxData){
  memcpy(Device->ClickRemoval, ctxData->ClickRemoval,
         sizeof(ALfloat)*MAXCHANNELS);
  memcpy(Device->PendingClicks, ctxData->PendingClicks,
         sizeof(ALfloat)*MAXCHANNELS);
}

static void saveContext(ALCdevice *Device, context_data *ctxData){
  memcpy(ctxData->ClickRemoval, Device->ClickRemoval,
         sizeof(ALfloat)*MAXCHANNELS);
  memcpy(ctxData->PendingClicks, Device->PendingClicks,
         sizeof(ALfloat)*MAXCHANNELS);
}

static ALCcontext **currentContext;
static ALuint currentNumContext;
// holds the limited context so that Device->Contexts does not
// point into limitContext's stack frame after it returns
static ALCcontext *limitedContext;

/* By default, all contexts are rendered at once for each call to
 * aluMixData. This function uses the internals of the ALCdevice
 * struct to temporarily cause aluMixData to only render the chosen
 * context.
 */
static void limitContext(ALCdevice *Device, ALCcontext *ctx){
  currentContext = Device->Contexts;
  currentNumContext = Device->NumContexts;
  limitedContext = ctx;
  Device->Contexts = &limitedContext;
  Device->NumContexts = 1;
}

static void unLimitContext(ALCdevice *Device){
  Device->Contexts = currentContext;
  Device->NumContexts = currentNumContext;
}
#+end_src

=OpenAL= normally renders all Contexts in parallel, outputting the
whole result to the buffer. It does this by iterating over the
=Device->Contexts= array and rendering each context to the buffer in
turn. By temporarily setting =Device->NumContexts= to 1 and adjusting
the Device's context list to put the desired context-to-be-rendered
into position 0, we can trick =OpenAL= into rendering each slave
context separately from all the others.

** Main Device Loop
#+name: main-loop
#+begin_src C
//////////////////// Main Device Loop

/* Establish the LWJGL context as the master context, which will
 * be synchronized to all the slave contexts.
 */
static void init(ALCdevice *Device){
  ALCcontext *masterContext = alcGetCurrentContext();
  addContext(Device, masterContext);
}


static void renderData(ALCdevice *Device, int samples){
  if(!Device->Connected){return;}
  send_data *data = (send_data*)Device->ExtraData;
  ALCcontext *current = alcGetCurrentContext();

  ALuint i;
  for (i = 1; i < data->numContexts; i++){
    syncContexts(data->contexts[0]->ctx, data->contexts[i]->ctx);
  }

  if ((ALuint) samples > Device->UpdateSize){
    printf("exceeding internal buffer size; dropping samples\n");
    printf("requested %d; available %u\n", samples, Device->UpdateSize);
    samples = (int) Device->UpdateSize;
  }

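  // Render each context in isolation, swapping its private
  // click-removal state in and out around the mix.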
  for (i = 0; i < data->numContexts; i++){
    context_data *ctxData = data->contexts[i];
    ALCcontext *ctx = ctxData->ctx;
    alcMakeContextCurrent(ctx);
    limitContext(Device, ctx);
    swapInContext(Device, ctxData);
    aluMixData(Device, ctxData->renderBuffer, samples);
    saveContext(Device, ctxData);
    unLimitContext(Device);
  }
  alcMakeContextCurrent(current);
}
#+end_src

The main loop synchronizes the master LWJGL context with all the slave
contexts, then walks each context, rendering just that context to its
audio-sample storage buffer.

** JNI Methods

At this point, we have the ability to create multiple listeners by
using the master/slave context trick, and the rendered audio data is
waiting patiently in internal buffers, one for each listener. We need
a way to transport this information to Java, and also a way to drive
this device from Java. The following JNI interface code is inspired
by the way LWJGL interfaces with =OpenAL=.

*** step
#+name: jni-step
#+begin_src C
//////////////////// JNI Methods

#include "com_aurellem_send_AudioSend.h"

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nstep
 * Signature: (JI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nstep
(JNIEnv *env, jclass clazz, jlong device, jint samples){
  UNUSED(env);UNUSED(clazz);
  renderData((ALCdevice*)((intptr_t)device), samples);
}
#+end_src
This device, unlike most of the other devices in =OpenAL=, does not
render sound unless asked. This enables the system to slow down or
speed up depending on the needs of the AIs who are using it to
listen. If the device tried to render samples in real-time, a
complicated AI whose mind takes 100 seconds of computer time to
simulate 1 second of AI-time would miss almost all of the sound in
its environment.

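As a sketch of what this buys us, here is how an on-demand render
loop might look from clojure, via the =AudioSend= wrapper class
described below. The Initialization section mentions a Java-side
=step= method; the exact signature used here is an assumption for
illustration.

#+begin_src clojure
;; Sketch only: assumes com.aurellem.send.AudioSend exposes a
;; step method mirroring nstep; the signature is illustrative.
(import 'com.aurellem.send.AudioSend)

(defn hearing-tick!
  "Render exactly samples-per-tick samples, regardless of how long
  the AI takes to think between ticks."
  [#^AudioSend send samples-per-tick]
  (.step send samples-per-tick))
#+end_src
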
*** getSamples
#+name: jni-get-samples
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetSamples
 * Signature: (JLjava/nio/ByteBuffer;III)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ngetSamples
(JNIEnv *env, jclass clazz, jlong device, jobject buffer, jint position,
 jint samples, jint n){
  UNUSED(clazz);

  ALvoid *buffer_address =
    ((ALbyte *)(((char*)(*env)->GetDirectBufferAddress(env, buffer)) + position));
  ALCdevice *recorder = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)recorder->ExtraData;
  // listener indices count from 0, so n must be strictly less
  // than numContexts
  if ((ALuint)n >= data->numContexts){return;}
  memcpy(buffer_address, data->contexts[n]->renderBuffer,
         BytesFromDevFmt(recorder->FmtType) * recorder->NumChan * samples);
}
#+end_src

This is the transport layer between C and Java that will eventually
allow us to access rendered sound data from clojure.

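For illustration, here is a hedged sketch of pulling one listener's
samples into clojure. It assumes =AudioSend= exposes a =getSamples=
method mirroring =ngetSamples= above, and hard-codes a frame size of
two bytes; real code would compute the frame size from the
=AudioFormat= returned by =getAudioFormat=.

#+begin_src clojure
;; Sketch only: the Java-side getSamples signature is assumed to
;; mirror ngetSamples(device, buffer, position, samples, n).
(import '(java.nio ByteBuffer ByteOrder)
        'com.aurellem.send.AudioSend)

(defn grab-samples
  "Copy listener n's most recently rendered samples into a fresh
  direct ByteBuffer."
  [#^AudioSend send samples n]
  (let [buf (doto (ByteBuffer/allocateDirect (* 2 samples))
              (.order (ByteOrder/nativeOrder)))]
    (.getSamples send buf 0 samples n)
    buf))
#+end_src
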
*** Listener Management

=addListener=, =setNthListenerf=, and =setNthListener3f= are
necessary to change the properties of any listener other than the
master one, since only the listener of the currently active context
is affected by the normal =OpenAL= listener calls.
#+name: listener-manage
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    naddListener
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_naddListener
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(env); UNUSED(clazz);
  //printf("creating new context via naddListener\n");
  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  ALCcontext *new = alcCreateContext(Device, NULL);
  addContext(Device, new);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListener3f
 * Signature: (IFFFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListener3f
(JNIEnv *env, jclass clazz, jint param,
 jfloat v1, jfloat v2, jfloat v3, jlong device, jint contextNum){
  UNUSED(env);UNUSED(clazz);

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)Device->ExtraData;

  ALCcontext *current = alcGetCurrentContext();
  if ((ALuint)contextNum >= data->numContexts){return;}
  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
  alListener3f(param, v1, v2, v3);
  alcMakeContextCurrent(current);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListenerf
 * Signature: (IFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListenerf
(JNIEnv *env, jclass clazz, jint param, jfloat v1, jlong device,
 jint contextNum){
  UNUSED(env);UNUSED(clazz);

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)Device->ExtraData;

  ALCcontext *current = alcGetCurrentContext();
  if ((ALuint)contextNum >= data->numContexts){return;}
  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
  alListenerf(param, v1);
  alcMakeContextCurrent(current);
}
#+end_src

*** Initialization
=initDevice= is called from the Java side after LWJGL has created its
context, and before any calls to =addListener=. It establishes the
LWJGL context as the master context.

=getAudioFormat= is a convenience function that uses JNI to build up a
=javax.sound.sampled.AudioFormat= object from data in the Device. This
way, there is no ambiguity about what the bits created by =step= and
returned by =getSamples= mean.
#+name: jni-init
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ninitDevice
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ninitDevice
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(env);UNUSED(clazz);
  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  init(Device);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetAudioFormat
 * Signature: (J)Ljavax/sound/sampled/AudioFormat;
 */
JNIEXPORT jobject JNICALL Java_com_aurellem_send_AudioSend_ngetAudioFormat
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(clazz);
  jclass AudioFormatClass =
    (*env)->FindClass(env, "javax/sound/sampled/AudioFormat");
  jmethodID AudioFormatConstructor =
    (*env)->GetMethodID(env, AudioFormatClass, "<init>", "(FIIZZ)V");

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  int isSigned;
  switch (Device->FmtType)
    {
    case DevFmtUByte:
    case DevFmtUShort: isSigned = 0; break;
    default          : isSigned = 1;
    }
  float frequency = Device->Frequency;
  int bitsPerSample = (8 * BytesFromDevFmt(Device->FmtType));
  int channels = Device->NumChan;
  jobject format = (*env)->
    NewObject(
              env,AudioFormatClass,AudioFormatConstructor,
              frequency,
              bitsPerSample,
              channels,
              isSigned,
              0);
  return format;
}
#+end_src

** Boring Device management stuff
This code is more-or-less copied verbatim from the other =OpenAL=
backends. It's the basis for =OpenAL='s primitive object system.
#+name: device-init
#+begin_src C
//////////////////// Device Initialization / Management

static const ALCchar sendDevice[] = "Multiple Audio Send";

static ALCboolean send_open_playback(ALCdevice *device,
                                     const ALCchar *deviceName)
{
  send_data *data;
  // stop any buffering for stdout, so that I can
  // see the printf statements in my terminal immediately
  setbuf(stdout, NULL);

  if(!deviceName)
    deviceName = sendDevice;
  else if(strcmp(deviceName, sendDevice) != 0)
    return ALC_FALSE;
  data = (send_data*)calloc(1, sizeof(*data));
  device->szDeviceName = strdup(deviceName);
  device->ExtraData = data;
  return ALC_TRUE;
}

static void send_close_playback(ALCdevice *device)
{
  send_data *data = (send_data*)device->ExtraData;
  alcMakeContextCurrent(NULL);
  ALuint i;
  // Destroy all slave contexts. LWJGL will take care of
  // its own context.
  for (i = 1; i < data->numContexts; i++){
    context_data *ctxData = data->contexts[i];
    alcDestroyContext(ctxData->ctx);
    free(ctxData->renderBuffer);
    free(ctxData);
  }
  free(data->contexts);
  free(data);
  device->ExtraData = NULL;
}

static ALCboolean send_reset_playback(ALCdevice *device)
{
  SetDefaultWFXChannelOrder(device);
  return ALC_TRUE;
}

static void send_stop_playback(ALCdevice *Device){
  UNUSED(Device);
}

static const BackendFuncs send_funcs = {
  send_open_playback,
  send_close_playback,
  send_reset_playback,
  send_stop_playback,
  NULL,
  NULL, /* These would be filled with functions to */
  NULL, /* handle capturing audio if we were into  */
  NULL, /* that sort of thing...                   */
  NULL,
  NULL
};

ALCboolean alc_send_init(BackendFuncs *func_list){
  *func_list = send_funcs;
  return ALC_TRUE;
}

void alc_send_deinit(void){}

void alc_send_probe(enum DevProbe type)
{
  switch(type)
    {
    case DEVICE_PROBE:
      AppendDeviceList(sendDevice);
      break;
    case ALL_DEVICE_PROBE:
      AppendAllDeviceList(sendDevice);
      break;
    case CAPTURE_DEVICE_PROBE:
      break;
    }
}
#+end_src

* The Java interface, =AudioSend=

The Java interface to the Send Device follows naturally from the JNI
definitions. It is included here for completeness. The only thing here
of note is the =deviceID=. This is available from LWJGL, but the only
way to get it is reflection; a sketch of the trick appears below.
Unfortunately, there is no way to control the Send device other than
by obtaining a pointer to it.

#+include: "../java/src/com/aurellem/send/AudioSend.java" src java :exports code

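As an illustration, here is roughly what that reflection trick might
look like from clojure. The class =org.lwjgl.openal.AL= and its
=getDevice= method are LWJGL's, but the ="device"= field name is an
assumption for illustration; the field actually consulted may differ.

#+begin_src clojure
;; Sketch only: the field name "device" is an assumption about
;; LWJGL's internals, not a verified API.
(defn lwjgl-device-id
  "Extract the native OpenAL device pointer from LWJGL via reflection."
  []
  (let [alc-device (org.lwjgl.openal.AL/getDevice)
        field (doto (.getDeclaredField (class alc-device) "device")
                (.setAccessible true))]
    (.getLong field alc-device)))
#+end_src
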
* Finally, Ears in clojure!

Now that the infrastructure is complete, the clojure ear abstraction
is simple. Just as there were =SceneProcessors= for vision, there are
now =SoundProcessors= for hearing.

#+include: "../../jmeCapture/src/com/aurellem/capture/audio/SoundProcessor.java" src java


#+name: ears
#+begin_src clojure
(ns cortex.hearing
  "Simulate the sense of hearing in jMonkeyEngine3. Enables multiple
  listeners at different positions in the same world. Automatically
  reads ear-nodes from specially prepared blender files and
  instantiates them in the world as actual ears."
  {:author "Robert McIntyre"}
  (:use (cortex world util sense))
  (:use clojure.contrib.def)
  (:import java.nio.ByteBuffer)
  (:import org.tritonus.share.sampled.FloatSampleTools)
  (:import (com.aurellem.capture.audio
            SoundProcessor AudioSendRenderer))
  (:import javax.sound.sampled.AudioFormat)
  (:import (com.jme3.scene Spatial Node))
  (:import com.jme3.audio.Listener)
  (:import com.jme3.app.Application)
  (:import com.jme3.scene.control.AbstractControl))

(defn sound-processor
  "Deals with converting ByteBuffers into Vectors of floats so that
  the continuation functions can be defined in terms of immutable
  stuff."
  [continuation]
  (proxy [SoundProcessor] []
    (cleanup [])
    (process
      [#^ByteBuffer audioSamples numSamples #^AudioFormat audioFormat]
      (let [bytes (byte-array numSamples)
            num-floats (/ numSamples (.getFrameSize audioFormat))
            floats (float-array num-floats)]
        (.get audioSamples bytes 0 numSamples)
        (FloatSampleTools/byte2floatInterleaved
         bytes 0 floats 0 num-floats audioFormat)
        (continuation
         (vec floats))))))

(defvar
  ^{:arglists '([creature])}
  ears
  (sense-nodes "ears")
  "Return the children of the creature's \"ears\" node.")

(defn update-listener-velocity!
  "Update the listener's velocity every update loop."
  [#^Spatial obj #^Listener lis]
  (let [old-position (atom (.getLocation lis))]
    (.addControl
     obj
     (proxy [AbstractControl] []
       (controlUpdate [tpf]
         (let [new-position (.getLocation lis)]
           (.setVelocity
            lis
            (.mult (.subtract new-position @old-position)
                   (float (/ tpf))))
           (reset! old-position new-position)))
       (controlRender [_ _])))))

(defn add-ear!
  "Create a Listener centered on the current position of 'ear
  which follows the closest physical node in 'creature and
  sends sound data to 'continuation."
  [#^Application world #^Node creature #^Spatial ear continuation]
  (let [target (closest-node creature ear)
        lis (Listener.)
        audio-renderer (.getAudioRenderer world)
        sp (sound-processor continuation)]
    (.setLocation lis (.getWorldTranslation ear))
    (.setRotation lis (.getWorldRotation ear))
    (bind-sense target lis)
    (update-listener-velocity! target lis)
    (.addListener audio-renderer lis)
    (.registerSoundProcessor audio-renderer lis sp)))

(defn hearing-fn
  "Returns a function which returns auditory sensory data when called
  inside a running simulation."
  [#^Node creature #^Spatial ear]
  (let [hearing-data (atom [])
        register-listener!
        (runonce
         (fn [#^Application world]
           (add-ear!
            world creature ear
            (fn [data]
              (reset! hearing-data (vec data))))))]
    (fn [#^Application world]
      (register-listener! world)
      (let [data @hearing-data
            topology
            (vec (map #(vector % 0) (range 0 (count data))))
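            ;; each sample is mapped from the range [-1.0, 1.0]
            ;; to an unsigned byte in [0, 255]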
            scaled-data
            (vec
             (map
              #(rem (int (* 255 (/ (+ 1 %) 2))) 256)
              data))]
        [topology scaled-data]))))

(defn hearing!
  "Endow the creature in a particular world with the sense of
  hearing. Will return a sequence of functions, one for each ear,
  which when called will return the auditory data from that ear."
  [#^Node creature]
  (for [ear (ears creature)]
    (hearing-fn creature ear)))

#+end_src

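For reference, here is a minimal sketch of how these functions might
be polled from inside a simulation loop, assuming a =creature= node
prepared with an "ears" node as described above:

#+begin_src clojure
;; Sketch: poll every ear once per simulation step.
(comment
  (def ear-fns (hearing! creature))

  (defn hearing-step [world]
    ;; each element is [topology scaled-data] for one ear
    (doall (map (fn [hear] (hear world)) ear-fns))))
#+end_src
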
* Example

#+name: test-hearing
#+begin_src clojure :results silent
(ns cortex.test.hearing
  (:use (cortex world util hearing))
  (:import (com.jme3.audio AudioNode Listener))
  (:import com.jme3.scene.Node
           com.jme3.system.AppSettings))

(defn setup-fn [world]
  (let [listener (Listener.)
        audio-renderer (.getAudioRenderer world)]
    ;; attach a bare listener directly, mirroring what add-ear!
    ;; does internally for a creature's ears
    (.addListener audio-renderer listener)
    (.registerSoundProcessor
     audio-renderer listener
     (sound-processor #(println-repl (nth % 0))))))

(defn play-sound [node world value]
  (if (not value)
    (do
      (.playSource (.getAudioRenderer world) node))))

(defn test-basic-hearing []
  (let [node1 (AudioNode. (asset-manager) "Sounds/pure.wav" false false)]
    (world
     (Node.)
     {"key-space" (partial play-sound node1)}
     setup-fn
     no-op)))

(defn test-advanced-hearing
  "Testing hearing:
   You should see a blue sphere flying around several
   cubes. As the sphere approaches each cube, it turns
   green."
  []
  (doto (com.aurellem.capture.examples.Advanced.)
    (.setSettings
     (doto (AppSettings. true)
       (.setAudioRenderer "Send")))
    (.setShowSettings false)
    (.setPauseOnLostFocus false)))

#+end_src

This extremely basic program prints out the first sample it encounters
at every time stamp. You can see the rendered sound being printed at
the REPL.
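
The introduction promised an F.F.T. of the sound at every tick. As a
sketch of that final step, here is one way the hearing data could be
spectrum-analyzed, assuming the JTransforms FFT library is on the
classpath (it is not a dependency of this project):

#+begin_src clojure
;; Sketch only: JTransforms is an assumed, external library.
(import 'edu.emory.mathcs.jtransforms.fft.FloatFFT_1D)

(defn fft
  "Return the in-place real FFT of a seq of samples as a vector."
  [samples]
  (let [arr (float-array samples)]
    (.realForward (FloatFFT_1D. (count samples)) arr)
    (vec arr)))
#+end_src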

- As a bonus, this method of capturing audio for AI can also be used
  to capture perfect audio from a jMonkeyEngine application, for use
  in demos and the like.


* COMMENT Code Generation

#+begin_src clojure :tangle ../src/cortex/hearing.clj
<<ears>>
#+end_src

#+begin_src clojure :tangle ../src/cortex/test/hearing.clj
<<test-hearing>>
#+end_src

#+begin_src C :tangle ../../audio-send/Alc/backends/send.c
<<send-header>>
<<send-state>>
<<sync-macros>>
<<sync-sources>>
<<sync-contexts>>
<<context-creation>>
<<context-switching>>
<<main-loop>>
<<jni-step>>
<<jni-get-samples>>
<<listener-manage>>
<<jni-init>>
<<device-init>>
#+end_src
