Mercurial > audio-send
diff org/ear.org @ 15:19ff95c69cf5
moved send.c to org file
author   Robert McIntyre <rlm@mit.edu>
date     Thu, 03 Nov 2011 12:08:39 -0700
parents  c41d773a85fb
children 1e201037f666
--- a/org/ear.org	Mon Oct 31 08:01:08 2011 -0700
+++ b/org/ear.org	Thu Nov 03 12:08:39 2011 -0700
@@ -1,25 +1,617 @@
-#+title: The EARS!
+#+title: Simulated Sense of Hearing
 #+author: Robert McIntyre
 #+email: rlm@mit.edu
-#+MATHJAX: align:"left" mathml:t path:"../aurellem/src/MathJax/MathJax.js"
-#+STYLE: <link rel="stylesheet" type="text/css" href="../aurellem/src/css/aurellem.css"/>
+#+description: Simulating multiple listeners and the sense of hearing in jMonkeyEngine3
+#+keywords: simulated hearing, openal, clojure, jMonkeyEngine3, LWJGL, AI
+#+SETUPFILE: ../../aurellem/org/setup.org
+#+INCLUDE: ../../aurellem/org/level-0.org
 #+BABEL: :exports both :noweb yes :cache no :mkdirp yes
-#+INCLUDE: ../aurellem/src/templates/level-0.org
-#+description: Simulating multiple listeners and the sense of hearing in uMonkeyEngine3
 
 
 
 
-* Ears!
+* Hearing
 
 I want to be able to place ears in a similar manner to how I place
 the eyes. I want to be able to place ears in a unique spatial
 position, and receive as output at every tick the FFT of whatever
 signals are happening at that point.
 
-#+srcname: ear-header
+Hearing is one of the more difficult senses to simulate, because there
+is less support for obtaining the actual sound data that is processed
+by jMonkeyEngine3.
+
+jMonkeyEngine's sound system works as follows:
+
+ - jMonkeyEngine uses the =AppSettings= for the particular application
+   to determine what sort of =AudioRenderer= should be used.
+ - although some support is provided for multiple audio rendering
+   backends, jMonkeyEngine at the time of this writing will either
+   pick no =AudioRenderer= at all, or the =LwjglAudioRenderer=
+ - jMonkeyEngine tries to figure out what sort of system you're
+   running and extracts the appropriate native libraries.
+ - the =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (lightweight java game
+   library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]]
+ - =OpenAL= calculates the 3D sound localization and feeds a stream of
+   sound to any of various sound output devices with which it knows
+   how to communicate.
+
+A consequence of this is that there's no way to access the actual
+sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
+one /listener/, which normally isn't a problem for games, but becomes
+a problem when trying to make multiple AI creatures that can each hear
+the world from a different perspective.
+
+To make many AI creatures in jMonkeyEngine that can each hear the
+world from their own perspective, it is necessary to go all the way
+back to =OpenAL= and implement support for simulated hearing there.
+
+** =OpenAL= Devices
+
+=OpenAL= goes to great lengths to support many different systems, all
+with different sound capabilities and interfaces. It accomplishes this
+difficult task by providing code for many different sound backends in
+pseudo-objects called /Devices/. There's a device for the Linux Open
+Sound System and the Advanced Linux Sound Architecture, there's one
+for Direct Sound on Windows, there's even one for Solaris. =OpenAL=
+solves the problem of platform independence by providing all these
+Devices.
+
+Wrapper libraries such as LWJGL are free to examine the system on
+which they are running and then select an appropriate device for that
+system.
+
+There are also a few "special" devices that don't interface with any
+particular system. These include the Null Device, which doesn't do
+anything, and the Wave Device, which writes whatever sound it receives
+to a file, if everything has been set up correctly when configuring
+=OpenAL=.
+
+Actual mixing of the sound data happens in the Devices, and they are
+the only point in the sound rendering process where this data is
+available.
+
+Therefore, in order to support multiple listeners, and get the sound
+data in a form that the AIs can use, it is necessary to create a new
+Device which supports these features.
+
+
+** The Send Device
+Adding a device to OpenAL is rather tricky -- there are five separate
+files in the =OpenAL= source tree that must be modified to do so. I've
+documented this process [[./add-new-device.org][here]] for anyone who is interested.
+
+#+srcname: send
+#+begin_src C
+#include "config.h"
+#include <stdlib.h>
+#include "alMain.h"
+#include "AL/al.h"
+#include "AL/alc.h"
+#include "alSource.h"
+#include <jni.h>
+
+//////////////////// Summary
+
+struct send_data;
+struct context_data;
+
+static void addContext(ALCdevice *, ALCcontext *);
+static void syncContexts(ALCcontext *master, ALCcontext *slave);
+static void syncSources(ALsource *master, ALsource *slave,
+                        ALCcontext *masterCtx, ALCcontext *slaveCtx);
+
+static void syncSourcei(ALuint master, ALuint slave,
+                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
+static void syncSourcef(ALuint master, ALuint slave,
+                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
+static void syncSource3f(ALuint master, ALuint slave,
+                         ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
+
+static void swapInContext(ALCdevice *, struct context_data *);
+static void saveContext(ALCdevice *, struct context_data *);
+static void limitContext(ALCdevice *, ALCcontext *);
+static void unLimitContext(ALCdevice *);
+
+static void init(ALCdevice *);
+static void renderData(ALCdevice *, int samples);
+
+#define UNUSED(x) (void)(x)
+
+//////////////////// State
+
+typedef struct context_data {
+  ALfloat ClickRemoval[MAXCHANNELS];
+  ALfloat PendingClicks[MAXCHANNELS];
+  ALvoid *renderBuffer;
+  ALCcontext *ctx;
+} context_data;
+
+typedef struct send_data {
+  ALuint size;
+  context_data **contexts;
+  ALuint numContexts;
+  ALuint maxContexts;
+} send_data;
+
+
+
+//////////////////// Context Creation / Synchronization
+
+#define _MAKE_SYNC(NAME, INIT_EXPR, GET_EXPR, SET_EXPR)  \
+  void NAME (ALuint sourceID1, ALuint sourceID2,         \
+             ALCcontext *ctx1, ALCcontext *ctx2,         \
+             ALenum param){                              \
+    INIT_EXPR;                                           \
+    ALCcontext *current = alcGetCurrentContext();        \
+    alcMakeContextCurrent(ctx1);                         \
+    GET_EXPR;                                            \
+    alcMakeContextCurrent(ctx2);                         \
+    SET_EXPR;                                            \
+    alcMakeContextCurrent(current);                      \
+  }
+
+#define MAKE_SYNC(NAME, TYPE, GET, SET)                  \
+  _MAKE_SYNC(NAME,                                       \
+             TYPE value,                                 \
+             GET(sourceID1, param, &value),              \
+             SET(sourceID2, param, value))
+
+#define MAKE_SYNC3(NAME, TYPE, GET, SET)                        \
+  _MAKE_SYNC(NAME,                                              \
+             TYPE value1; TYPE value2; TYPE value3;,            \
+             GET(sourceID1, param, &value1, &value2, &value3),  \
+             SET(sourceID2, param, value1, value2, value3))
+
+MAKE_SYNC( syncSourcei,  ALint,   alGetSourcei,  alSourcei);
+MAKE_SYNC( syncSourcef,  ALfloat, alGetSourcef,  alSourcef);
+MAKE_SYNC3(syncSource3i, ALint,   alGetSource3i, alSource3i);
+MAKE_SYNC3(syncSource3f, ALfloat, alGetSource3f, alSource3f);
+
+void syncSources(ALsource *masterSource, ALsource *slaveSource,
+                 ALCcontext *masterCtx, ALCcontext *slaveCtx){
+  ALuint master = masterSource->source;
+  ALuint slave = slaveSource->source;
+  ALCcontext *current = alcGetCurrentContext();
+
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);
+
+  syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
+  syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
+  syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);
+
+  syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
+  syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);
+
+  alcMakeContextCurrent(masterCtx);
+  ALint source_type;
+  alGetSourcei(master, AL_SOURCE_TYPE, &source_type);
+
+  // Only static sources are currently synchronized!
+  if (AL_STATIC == source_type){
+    ALint master_buffer;
+    ALint slave_buffer;
+    alGetSourcei(master, AL_BUFFER, &master_buffer);
+    alcMakeContextCurrent(slaveCtx);
+    alGetSourcei(slave, AL_BUFFER, &slave_buffer);
+    if (master_buffer != slave_buffer){
+      alSourcei(slave, AL_BUFFER, master_buffer);
+    }
+  }
+
+  // Synchronize the state of the two sources.
+  alcMakeContextCurrent(masterCtx);
+  ALint masterState;
+  ALint slaveState;
+
+  alGetSourcei(master, AL_SOURCE_STATE, &masterState);
+  alcMakeContextCurrent(slaveCtx);
+  alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);
+
+  if (masterState != slaveState){
+    switch (masterState){
+    case AL_INITIAL : alSourceRewind(slave); break;
+    case AL_PLAYING : alSourcePlay(slave);   break;
+    case AL_PAUSED  : alSourcePause(slave);  break;
+    case AL_STOPPED : alSourceStop(slave);   break;
+    }
+  }
+  // Restore whatever context was previously active.
+  alcMakeContextCurrent(current);
+}
+
+
+void syncContexts(ALCcontext *master, ALCcontext *slave){
+  /* If there aren't sufficient sources in slave to mirror
+     the sources in master, create them. */
+  ALCcontext *current = alcGetCurrentContext();
+
+  UIntMap *masterSourceMap = &(master->SourceMap);
+  UIntMap *slaveSourceMap = &(slave->SourceMap);
+  ALuint numMasterSources = masterSourceMap->size;
+  ALuint numSlaveSources = slaveSourceMap->size;
+
+  alcMakeContextCurrent(slave);
+  if (numSlaveSources < numMasterSources){
+    ALuint numMissingSources = numMasterSources - numSlaveSources;
+    ALuint newSources[numMissingSources];
+    alGenSources(numMissingSources, newSources);
+  }
+
+  /* Now, slave is guaranteed to have at least as many sources
+     as master. Sync each source from master to the corresponding
+     source in slave. */
+  int i;
+  for(i = 0; i < masterSourceMap->size; i++){
+    syncSources((ALsource*)masterSourceMap->array[i].value,
+                (ALsource*)slaveSourceMap->array[i].value,
+                master, slave);
+  }
+  alcMakeContextCurrent(current);
+}
+
+static void addContext(ALCdevice *Device, ALCcontext *context){
+  send_data *data = (send_data*)Device->ExtraData;
+  // expand array if necessary
+  if (data->numContexts >= data->maxContexts){
+    ALuint newMaxContexts = data->maxContexts*2 + 1;
+    data->contexts = realloc(data->contexts, newMaxContexts*sizeof(context_data*));
+    data->maxContexts = newMaxContexts;
+  }
+  // create context_data and add it to the main array
+  context_data *ctxData;
+  ctxData = (context_data*)calloc(1, sizeof(*ctxData));
+  ctxData->renderBuffer =
+    malloc(BytesFromDevFmt(Device->FmtType) *
+           Device->NumChan * Device->UpdateSize);
+  ctxData->ctx = context;
+
+  data->contexts[data->numContexts] = ctxData;
+  data->numContexts++;
+}
+
+
+//////////////////// Context Switching
+
+/* A device brings along with it two pieces of state
+ * which have to be swapped in and out with each context.
+ */
+static void swapInContext(ALCdevice *Device, context_data *ctxData){
+  memcpy(Device->ClickRemoval, ctxData->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
+  memcpy(Device->PendingClicks, ctxData->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
+}
+
+static void saveContext(ALCdevice *Device, context_data *ctxData){
+  memcpy(ctxData->ClickRemoval, Device->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
+  memcpy(ctxData->PendingClicks, Device->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
+}
+
+static ALCcontext **currentContext;
+static ALuint currentNumContext;
+
+/* By default, all contexts are rendered at once for each call to aluMixData.
+ * This function uses the internals of the ALCdevice struct to temporarily
+ * cause aluMixData to only render the chosen context.
+ */
+static void limitContext(ALCdevice *Device, ALCcontext *ctx){
+  currentContext = Device->Contexts;
+  currentNumContext = Device->NumContexts;
+  Device->Contexts = &ctx;
+  Device->NumContexts = 1;
+}
+
+static void unLimitContext(ALCdevice *Device){
+  Device->Contexts = currentContext;
+  Device->NumContexts = currentNumContext;
+}
+
+
+//////////////////// Main Device Loop
+
+/* Establish the LWJGL context as the main context, which will
+ * be synchronized to all the slave contexts
+ */
+static void init(ALCdevice *Device){
+  ALCcontext *masterContext = alcGetCurrentContext();
+  addContext(Device, masterContext);
+}
+
+
+static void renderData(ALCdevice *Device, int samples){
+  if(!Device->Connected){return;}
+  send_data *data = (send_data*)Device->ExtraData;
+  ALCcontext *current = alcGetCurrentContext();
+
+  ALuint i;
+  for (i = 1; i < data->numContexts; i++){
+    syncContexts(data->contexts[0]->ctx, data->contexts[i]->ctx);
+  }
+
+  if ((ALuint) samples > Device->UpdateSize){
+    printf("exceeding internal buffer size; dropping samples\n");
+    printf("requested %d; available %u\n", samples, Device->UpdateSize);
+    samples = (int) Device->UpdateSize;
+  }
+
+  for (i = 0; i < data->numContexts; i++){
+    context_data *ctxData = data->contexts[i];
+    ALCcontext *ctx = ctxData->ctx;
+    alcMakeContextCurrent(ctx);
+    limitContext(Device, ctx);
+    swapInContext(Device, ctxData);
+    aluMixData(Device, ctxData->renderBuffer, samples);
+    saveContext(Device, ctxData);
+    unLimitContext(Device);
+  }
+  alcMakeContextCurrent(current);
+}
+
+
+//////////////////// JNI Methods
+
+#include "com_aurellem_send_AudioSend.h"
+
+/*
+ * Class:     com_aurellem_send_AudioSend
+ * Method:    nstep
+ * Signature: (JI)V
+ */
+JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nstep
+(JNIEnv *env, jclass clazz, jlong device, jint samples){
+  UNUSED(env);UNUSED(clazz);
+  renderData((ALCdevice*)((intptr_t)device), samples);
+}
+
+/*
+ * Class:     com_aurellem_send_AudioSend
+ * Method:    ngetSamples
+ * Signature: (JLjava/nio/ByteBuffer;III)V
+ */
+JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ngetSamples
+(JNIEnv *env, jclass clazz, jlong device, jobject buffer, jint position,
+ jint samples, jint n){
+  UNUSED(clazz);
+
+  ALvoid *buffer_address =
+    ((ALbyte *)(((char*)(*env)->GetDirectBufferAddress(env, buffer)) + position));
+  ALCdevice *recorder = (ALCdevice*) ((intptr_t)device);
+  send_data *data = (send_data*)recorder->ExtraData;
+  if ((ALuint)n >= data->numContexts){return;}
+
+  //printf("Want %d samples for listener %d\n", samples, n);
+  //printf("Device's format type is %d bytes per sample,\n",
+  //       BytesFromDevFmt(recorder->FmtType));
+  //printf("and it has %d channels, making for %d requested bytes\n",
+  //       recorder->NumChan,
+  //       BytesFromDevFmt(recorder->FmtType) * recorder->NumChan * samples);
+
+  memcpy(buffer_address, data->contexts[n]->renderBuffer,
+         BytesFromDevFmt(recorder->FmtType) * recorder->NumChan * samples);
+}
+
+/*
+ * Class:     com_aurellem_send_AudioSend
+ * Method:    naddListener
+ * Signature: (J)V
+ */
+JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_naddListener
+(JNIEnv *env, jclass clazz, jlong device){
+  UNUSED(env); UNUSED(clazz);
+  //printf("creating new context via naddListener\n");
+  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
+  ALCcontext *new = alcCreateContext(Device, NULL);
+  addContext(Device, new);
+}
+
+/*
+ * Class:     com_aurellem_send_AudioSend
+ * Method:    nsetNthListener3f
+ * Signature: (IFFFJI)V
+ */
+JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListener3f
+(JNIEnv *env, jclass clazz, jint param,
+ jfloat v1, jfloat v2, jfloat v3, jlong device, jint contextNum){
+  UNUSED(env);UNUSED(clazz);
+
+  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
+  send_data *data = (send_data*)Device->ExtraData;
+
+  ALCcontext *current = alcGetCurrentContext();
+  if ((ALuint)contextNum >= data->numContexts){return;}
+  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
+  alListener3f(param, v1, v2, v3);
+  alcMakeContextCurrent(current);
+}
+
+/*
+ * Class:     com_aurellem_send_AudioSend
+ * Method:    nsetNthListenerf
+ * Signature: (IFJI)V
+ */
+JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListenerf
+(JNIEnv *env, jclass clazz, jint param, jfloat v1, jlong device,
+ jint contextNum){
+  UNUSED(env);UNUSED(clazz);
+
+  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
+  send_data *data = (send_data*)Device->ExtraData;
+
+  ALCcontext *current = alcGetCurrentContext();
+  if ((ALuint)contextNum >= data->numContexts){return;}
+  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
+  alListenerf(param, v1);
+  alcMakeContextCurrent(current);
+}
+
+/*
+ * Class:     com_aurellem_send_AudioSend
+ * Method:    ninitDevice
+ * Signature: (J)V
+ */
+JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ninitDevice
+(JNIEnv *env, jclass clazz, jlong device){
+  UNUSED(env);UNUSED(clazz);
+
+  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
+  init(Device);
+}
+
+
+/*
+ * Class:     com_aurellem_send_AudioSend
+ * Method:    ngetAudioFormat
+ * Signature: (J)Ljavax/sound/sampled/AudioFormat;
+ */
+JNIEXPORT jobject JNICALL Java_com_aurellem_send_AudioSend_ngetAudioFormat
+(JNIEnv *env, jclass clazz, jlong device){
+  UNUSED(clazz);
+  jclass AudioFormatClass =
+    (*env)->FindClass(env, "javax/sound/sampled/AudioFormat");
+  jmethodID AudioFormatConstructor =
+    (*env)->GetMethodID(env, AudioFormatClass, "<init>", "(FIIZZ)V");
+
+  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
+
+  int isSigned;
+  switch (Device->FmtType)
+    {
+    case DevFmtUByte:
+    case DevFmtUShort: isSigned = 0; break;
+    default          : isSigned = 1;
+    }
+  float frequency = Device->Frequency;
+  int bitsPerFrame = (8 * BytesFromDevFmt(Device->FmtType));
+  int channels = Device->NumChan;
+
+  //printf("freq = %f, bpf = %d, channels = %d, signed? = %d\n",
+  //       frequency, bitsPerFrame, channels, isSigned);
+
+  jobject format = (*env)->
+    NewObject(
+              env,AudioFormatClass,AudioFormatConstructor,
+              frequency,
+              bitsPerFrame,
+              channels,
+              isSigned,
+              0);
+  return format;
+}
+
+
+
+//////////////////// Device Initialization / Management
+
+static const ALCchar sendDevice[] = "Multiple Audio Send";
+
+static ALCboolean send_open_playback(ALCdevice *device,
+                                     const ALCchar *deviceName)
+{
+  send_data *data;
+  // stop any buffering for stdout, so that I can
+  // see the printf statements in my terminal immediately
+  setbuf(stdout, NULL);
+
+  if(!deviceName)
+    deviceName = sendDevice;
+  else if(strcmp(deviceName, sendDevice) != 0)
+    return ALC_FALSE;
+  data = (send_data*)calloc(1, sizeof(*data));
+  device->szDeviceName = strdup(deviceName);
+  device->ExtraData = data;
+  return ALC_TRUE;
+}
+
+static void send_close_playback(ALCdevice *device)
+{
+  send_data *data = (send_data*)device->ExtraData;
+  alcMakeContextCurrent(NULL);
+  ALuint i;
+  // Destroy all slave contexts. LWJGL will take care of
+  // its own context.
+  for (i = 1; i < data->numContexts; i++){
+    context_data *ctxData = data->contexts[i];
+    alcDestroyContext(ctxData->ctx);
+    free(ctxData->renderBuffer);
+    free(ctxData);
+  }
+  free(data);
+  device->ExtraData = NULL;
+}
+
+static ALCboolean send_reset_playback(ALCdevice *device)
+{
+  SetDefaultWFXChannelOrder(device);
+  return ALC_TRUE;
+}
+
+static void send_stop_playback(ALCdevice *Device){
+  UNUSED(Device);
+}
+
+static const BackendFuncs send_funcs = {
+  send_open_playback,
+  send_close_playback,
+  send_reset_playback,
+  send_stop_playback,
+  NULL,
+  NULL, /* These would be filled with functions to */
+  NULL, /* handle capturing audio if we were into  */
+  NULL, /* that sort of thing...                   */
+  NULL,
+  NULL
+};
+
+ALCboolean alc_send_init(BackendFuncs *func_list){
+  *func_list = send_funcs;
+  return ALC_TRUE;
+}
+
+void alc_send_deinit(void){}
+
+void alc_send_probe(enum DevProbe type)
+{
+  switch(type)
+    {
+    case DEVICE_PROBE:
+      AppendDeviceList(sendDevice);
+      break;
+    case ALL_DEVICE_PROBE:
+      AppendAllDeviceList(sendDevice);
+      break;
+    case CAPTURE_DEVICE_PROBE:
+      break;
+    }
+}
+#+end_src
+
+
+
+
+
+
+
+
+#+srcname: ears
 #+begin_src clojure
-(ns body.ear)
+(ns cortex.hearing)
 (use 'cortex.world)
 (use 'cortex.import)
 (use 'clojure.contrib.def)
@@ -41,21 +633,7 @@
 (import javax.swing.ImageIcon)
 (import javax.swing.JOptionPane)
 (import java.awt.image.ImageObserver)
-#+end_src
 
-JMonkeyEngine3's audio system works as follows:
-first, an appropiate audio renderer is created during initialization
-and depending on the context. On my computer, this is the
-LwjglAudioRenderer.
-
-The LwjglAudioRenderer sets a few internal state variables depending
-on what capabilities the audio system has.
-
-may very well need to make my own AudioRenderer
-
-#+srcname: ear-body-1
-#+begin_src clojure :results silent
-(in-ns 'body.ear)
 (import 'com.jme3.capture.SoundProcessor)
 
 
@@ -72,7 +650,6 @@
       (.get audioSamples byte-array 0 numSamples)
       (continuation
        (vec byte-array)))))))
-
 
 (defn add-ear
   "add an ear to the world. The continuation function will be called
@@ -122,16 +699,19 @@
 
 
 
+* Example
+
 * COMMENT Code Generation
 
-#+begin_src clojure :tangle /home/r/cortex/src/body/ear.clj
-<<ear-header>>
-<<ear-body-1>>
+#+begin_src clojure :tangle ../../cortex/src/cortex/hearing.clj
+<<ears>>
 #+end_src
 
-
-#+begin_src clojure :tangle /home/r/cortex/src/test/hearing.clj
+#+begin_src clojure :tangle ../../cortex/src/test/hearing.clj
 <<test-hearing>>
 #+end_src
 
 
+#+begin_src C :tangle ../Alc/backends/send.c
+<<send>>
+#+end_src