changeset 162:2cbdd7034c6c
moved ear into cortex
author    Robert McIntyre <rlm@mit.edu>
date      Sat, 04 Feb 2012 01:44:06 -0700
parents   e401dafa5966
children  985c73659923
files     org/ear.org org/ideas.org org/movement.org org/test-creature.org
diffstat  4 files changed, 966 insertions(+), 10 deletions(-)
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/org/ear.org	Sat Feb 04 01:44:06 2012 -0700
@@ -0,0 +1,952 @@
#+title: Simulated Sense of Hearing
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Simulating multiple listeners and the sense of hearing in jMonkeyEngine3
#+keywords: simulated hearing, openal, clojure, jMonkeyEngine3, LWJGL, AI
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+BABEL: :exports both :noweb yes :cache no :mkdirp yes

* Hearing

I want to be able to place ears in a similar manner to how I place
the eyes: each ear should occupy a unique spatial position, and
should receive as output, at every tick, the F.F.T. of whatever
signals are present at that point.

Hearing is one of the more difficult senses to simulate, because there
is little support for obtaining the actual sound data that is processed
by jMonkeyEngine3.

jMonkeyEngine's sound system works as follows:

 - jMonkeyEngine uses the =AppSettings= for the particular application
   to determine what sort of =AudioRenderer= should be used.
 - although some support is provided for multiple audio-rendering
   backends, jMonkeyEngine at the time of this writing will either
   pick no =AudioRenderer= at all, or the =LwjglAudioRenderer=.
 - jMonkeyEngine tries to figure out what sort of system you're
   running and extracts the appropriate native libraries.
 - the =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (Lightweight Java Game
   Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]].
 - =OpenAL= calculates the 3D sound localization and feeds a stream of
   sound to any of various sound output devices with which it knows
   how to communicate.

A consequence of this is that there's no way to access the actual
sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
one /listener/, which normally isn't a problem for games, but becomes
a problem when trying to make multiple AI creatures that can each hear
the world from a different perspective.

To make many AI creatures in jMonkeyEngine that can each hear the
world from their own perspective, it is necessary to go all the way
back to =OpenAL= and implement support for simulated hearing there.

* Extending =OpenAL=
** =OpenAL= Devices

=OpenAL= goes to great lengths to support many different systems, all
with different sound capabilities and interfaces. It accomplishes this
difficult task by providing code for many different sound backends in
pseudo-objects called /Devices/. There's a device for the Linux Open
Sound System and the Advanced Linux Sound Architecture, there's one
for Direct Sound on Windows, and there's even one for Solaris. =OpenAL=
solves the problem of platform independence by providing all these
Devices.

Wrapper libraries such as LWJGL are free to examine the system on
which they are running and then select an appropriate device for that
system.
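
For example, a host program picks one of these Devices simply by
naming it when it opens =OpenAL=. A minimal sketch using the standard
ALC calls (the string "Multiple Audio Send" is the name registered by
the Send device developed below; passing =NULL= instead selects the
platform default):

#+begin_src C
#include <stdio.h>
#include <AL/al.h>
#include <AL/alc.h>

int main(void){
  /* Ask for the Send backend by name rather than the system default. */
  ALCdevice *device = alcOpenDevice("Multiple Audio Send");
  if (device == NULL){
    printf("this OpenAL build does not include the Send device\n");
    return 1;
  }
  /* From here on, everything looks like ordinary OpenAL usage. */
  ALCcontext *context = alcCreateContext(device, NULL);
  alcMakeContextCurrent(context);
  /* ... create sources, play sounds ... */
  alcMakeContextCurrent(NULL);
  alcDestroyContext(context);
  alcCloseDevice(device);
  return 0;
}
#+end_src
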
There are also a few "special" devices that don't interface with any
particular system. These include the Null Device, which doesn't do
anything, and the Wave Device, which writes whatever sound it receives
to a file, if everything has been set up correctly when configuring
=OpenAL=.

Actual mixing of the sound data happens in the Devices, and they are
the only point in the sound rendering process where this data is
available.

Therefore, in order to support multiple listeners, and to get the
sound data in a form that the AIs can use, it is necessary to create a
new Device which supports these features.

** The Send Device
Adding a device to =OpenAL= is rather tricky -- there are five separate
files in the =OpenAL= source tree that must be modified to do so. I've
documented this process [[./add-new-device.org][here]] for anyone who is interested.


Onward to the actual Device!

Again, my objectives are:

 - Support multiple listeners from jMonkeyEngine3.
 - Get access to the rendered sound data for further processing from
   clojure.

** =send.c=

** Header
#+name: send-header
#+begin_src C
#include "config.h"
#include <stdlib.h>
#include "alMain.h"
#include "AL/al.h"
#include "AL/alc.h"
#include "alSource.h"
#include <jni.h>

//////////////////// Summary

struct send_data;
struct context_data;

static void addContext(ALCdevice *, ALCcontext *);
static void syncContexts(ALCcontext *master, ALCcontext *slave);
static void syncSources(ALsource *master, ALsource *slave,
                        ALCcontext *masterCtx, ALCcontext *slaveCtx);

static void syncSourcei(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSourcef(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSource3f(ALuint master, ALuint slave,
                         ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);

static void swapInContext(ALCdevice *, struct context_data *);
static void saveContext(ALCdevice *, struct context_data *);
static void limitContext(ALCdevice *, ALCcontext *);
static void unLimitContext(ALCdevice *);

static void init(ALCdevice *);
static void renderData(ALCdevice *, int samples);

#define UNUSED(x) (void)(x)
#+end_src

The main idea behind the Send device is to take advantage of the fact
that LWJGL only manages one /context/ when using OpenAL. A /context/
is like a container that holds samples and keeps track of where the
listener is. In order to support multiple listeners, the Send device
identifies the LWJGL context as the master context, and creates any
number of slave contexts to represent additional listeners. Every
time the device renders sound, it synchronizes every source from the
master LWJGL context to the slave contexts. Then, it renders each
context separately, using a different listener for each one. The
rendered sound is made available via JNI to jMonkeyEngine.
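
In =OpenAL= terms, an additional listener is nothing more than an
additional context on the same device. A minimal sketch of that core
move (=createSlaveListener= is a hypothetical helper for illustration;
the real code does this inside =addContext()= and =naddListener=
below):

#+begin_src C
#include <AL/alc.h>

/* Hypothetical helper: one more context on the same device is one
   more listener.  The slave starts out with no sources; they are
   mirrored in from the master on every render pass. */
ALCcontext *createSlaveListener(ALCdevice *device){
  ALCcontext *master = alcGetCurrentContext();
  ALCcontext *slave  = alcCreateContext(device, NULL);
  alcMakeContextCurrent(master);   /* leave the master active */
  return slave;
}
#+end_src
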
To recap, the process is:

 - Set the LWJGL context as "master" in the =init()= method.
 - Create any number of additional contexts via =addContext()=.
 - At every call to =renderData()=, sync the master context with the
   slave contexts via =syncContexts()=.
 - =syncContexts()= calls =syncSources()= to sync all the sources
   which are in the master context.
 - =limitContext()= and =unLimitContext()= make it possible to render
   only one context at a time.

** Necessary State
#+name: send-state
#+begin_src C
//////////////////// State

typedef struct context_data {
  ALfloat ClickRemoval[MAXCHANNELS];
  ALfloat PendingClicks[MAXCHANNELS];
  ALvoid *renderBuffer;
  ALCcontext *ctx;
} context_data;

typedef struct send_data {
  ALuint size;
  context_data **contexts;
  ALuint numContexts;
  ALuint maxContexts;
} send_data;
#+end_src

Switching between contexts is not the normal operation of a Device,
and one of the problems with doing so is that a Device normally keeps
around a few pieces of state, such as the =ClickRemoval= array above,
which become corrupted if the contexts are not rendered in
parallel. The solution is to create a copy of this normally global
device state for each context, and copy it back and forth into and out
of the actual device state whenever a context is rendered.

** Synchronization Macros
#+name: sync-macros
#+begin_src C
//////////////////// Context Creation / Synchronization

#define _MAKE_SYNC(NAME, INIT_EXPR, GET_EXPR, SET_EXPR)    \
  void NAME (ALuint sourceID1, ALuint sourceID2,           \
             ALCcontext *ctx1, ALCcontext *ctx2,           \
             ALenum param){                                \
    INIT_EXPR;                                             \
    ALCcontext *current = alcGetCurrentContext();          \
    alcMakeContextCurrent(ctx1);                           \
    GET_EXPR;                                              \
    alcMakeContextCurrent(ctx2);                           \
    SET_EXPR;                                              \
    alcMakeContextCurrent(current);                        \
  }

#define MAKE_SYNC(NAME, TYPE, GET, SET)                    \
  _MAKE_SYNC(NAME,                                         \
             TYPE value,                                   \
             GET(sourceID1, param, &value),                \
             SET(sourceID2, param, value))

#define MAKE_SYNC3(NAME, TYPE, GET, SET)                          \
  _MAKE_SYNC(NAME,                                                \
             TYPE value1; TYPE value2; TYPE value3;,              \
             GET(sourceID1, param, &value1, &value2, &value3),    \
             SET(sourceID2, param, value1, value2, value3))

MAKE_SYNC( syncSourcei,  ALint,   alGetSourcei,  alSourcei);
MAKE_SYNC( syncSourcef,  ALfloat, alGetSourcef,  alSourcef);
MAKE_SYNC3(syncSource3i, ALint,   alGetSource3i, alSource3i);
MAKE_SYNC3(syncSource3f, ALfloat, alGetSource3f, alSource3f);

#+end_src

Setting the state of an =OpenAL= source is done with the =alSourcei=,
=alSourcef=, =alSource3i=, and =alSource3f= functions. In order to
completely synchronize two sources, it is necessary to use all of
them. These macros help to condense the otherwise repetitive
synchronization code involving these similar low-level =OpenAL= functions.
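
To make the macro machinery concrete, here is roughly what
=MAKE_SYNC(syncSourcef, ALfloat, alGetSourcef, alSourcef)= expands to,
written out by hand:

#+begin_src C
void syncSourcef(ALuint sourceID1, ALuint sourceID2,
                 ALCcontext *ctx1, ALCcontext *ctx2,
                 ALenum param){
  ALfloat value;                               /* INIT_EXPR             */
  ALCcontext *current = alcGetCurrentContext();
  alcMakeContextCurrent(ctx1);
  alGetSourcef(sourceID1, param, &value);      /* GET_EXPR: read master */
  alcMakeContextCurrent(ctx2);
  alSourcef(sourceID2, param, value);          /* SET_EXPR: write slave */
  alcMakeContextCurrent(current);              /* restore previous ctx  */
}
#+end_src
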

** Source Synchronization
#+name: sync-sources
#+begin_src C
void syncSources(ALsource *masterSource, ALsource *slaveSource,
                 ALCcontext *masterCtx, ALCcontext *slaveCtx){
  ALuint master = masterSource->source;
  ALuint slave = slaveSource->source;
  ALCcontext *current = alcGetCurrentContext();

  syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);

  syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
  syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
  syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);

  syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
  syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);

  alcMakeContextCurrent(masterCtx);
  ALint source_type;
  alGetSourcei(master, AL_SOURCE_TYPE, &source_type);

  // Only static sources are currently synchronized!
  if (AL_STATIC == source_type){
    ALint master_buffer;
    ALint slave_buffer;
    alGetSourcei(master, AL_BUFFER, &master_buffer);
    alcMakeContextCurrent(slaveCtx);
    alGetSourcei(slave, AL_BUFFER, &slave_buffer);
    if (master_buffer != slave_buffer){
      alSourcei(slave, AL_BUFFER, master_buffer);
    }
  }

  // Synchronize the state of the two sources.
  alcMakeContextCurrent(masterCtx);
  ALint masterState;
  ALint slaveState;

  alGetSourcei(master, AL_SOURCE_STATE, &masterState);
  alcMakeContextCurrent(slaveCtx);
  alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);

  if (masterState != slaveState){
    switch (masterState){
    case AL_INITIAL : alSourceRewind(slave); break;
    case AL_PLAYING : alSourcePlay(slave);   break;
    case AL_PAUSED  : alSourcePause(slave);  break;
    case AL_STOPPED : alSourceStop(slave);   break;
    }
  }
  // Restore whatever context was previously active.
  alcMakeContextCurrent(current);
}
#+end_src

This function is long because it has to exhaustively go through all the
possible state that a source can have and make sure that it is the
same between the master and slave sources. I'd like to take this
moment to salute the [[http://connect.creativelabs.com/openal/Documentation/Forms/AllItems.aspx][=OpenAL= Reference Manual]], which provides a very
good description of =OpenAL='s internals.

** Context Synchronization
#+name: sync-contexts
#+begin_src C
void syncContexts(ALCcontext *master, ALCcontext *slave){
  /* If there aren't sufficient sources in slave to mirror
     the sources in master, create them. */
  ALCcontext *current = alcGetCurrentContext();

  UIntMap *masterSourceMap = &(master->SourceMap);
  UIntMap *slaveSourceMap = &(slave->SourceMap);
  ALuint numMasterSources = masterSourceMap->size;
  ALuint numSlaveSources = slaveSourceMap->size;

  alcMakeContextCurrent(slave);
  if (numSlaveSources < numMasterSources){
    ALuint numMissingSources = numMasterSources - numSlaveSources;
    ALuint newSources[numMissingSources];
    alGenSources(numMissingSources, newSources);
  }

  /* Now, slave is guaranteed to have at least as many sources
     as master. Sync each source from master to the corresponding
     source in slave. */
  int i;
  for(i = 0; i < masterSourceMap->size; i++){
    syncSources((ALsource*)masterSourceMap->array[i].value,
                (ALsource*)slaveSourceMap->array[i].value,
                master, slave);
  }
  alcMakeContextCurrent(current);
}
#+end_src

Most of the hard work in context synchronization is done in
=syncSources()=. The only thing that =syncContexts()= has to worry
about is automatically creating new sources whenever a slave context
does not have the same number of sources as the master context.

** Context Creation
#+name: context-creation
#+begin_src C
static void addContext(ALCdevice *Device, ALCcontext *context){
  send_data *data = (send_data*)Device->ExtraData;
  // expand the array of context pointers if necessary
  if (data->numContexts >= data->maxContexts){
    ALuint newMaxContexts = data->maxContexts*2 + 1;
    data->contexts =
      realloc(data->contexts, newMaxContexts*sizeof(context_data*));
    data->maxContexts = newMaxContexts;
  }
  // create context_data and add it to the main array
  context_data *ctxData;
  ctxData = (context_data*)calloc(1, sizeof(*ctxData));
  ctxData->renderBuffer =
    malloc(BytesFromDevFmt(Device->FmtType) *
           Device->NumChan * Device->UpdateSize);
  ctxData->ctx = context;

  data->contexts[data->numContexts] = ctxData;
  data->numContexts++;
}
#+end_src

Here, the new slave context is registered with the device, and its
bookkeeping data is stored in the device-wide =ExtraData= structure.
The =renderBuffer= that is created here is where the rendered sound
samples for this slave context will eventually go.

** Context Switching
#+name: context-switching
#+begin_src C
//////////////////// Context Switching

/* A device brings along with it two pieces of state
 * which have to be swapped in and out with each context.
 */
static void swapInContext(ALCdevice *Device, context_data *ctxData){
  memcpy(Device->ClickRemoval, ctxData->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
  memcpy(Device->PendingClicks, ctxData->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
}

static void saveContext(ALCdevice *Device, context_data *ctxData){
  memcpy(ctxData->ClickRemoval, Device->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
  memcpy(ctxData->PendingClicks, Device->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
}

static ALCcontext **currentContext;
static ALuint currentNumContext;

/* By default, all contexts are rendered at once for each call to aluMixData.
 * This function uses the internals of the ALCdevice struct to temporarily
 * cause aluMixData to render only the chosen context.
 */
static void limitContext(ALCdevice *Device, ALCcontext *ctx){
  currentContext = Device->Contexts;
  currentNumContext = Device->NumContexts;
  Device->Contexts = &ctx;
  Device->NumContexts = 1;
}

static void unLimitContext(ALCdevice *Device){
  Device->Contexts = currentContext;
  Device->NumContexts = currentNumContext;
}
#+end_src

=OpenAL= normally renders all contexts in parallel, outputting the
whole result to the buffer. It does this by iterating over the
=Device->Contexts= array and rendering each context to the buffer in
turn. By temporarily setting =Device->NumContexts= to 1 and adjusting
the Device's context list to put the desired context-to-be-rendered
into position 0, we can trick =OpenAL= into rendering each slave
context separately from all the others.

** Main Device Loop
#+name: main-loop
#+begin_src C
//////////////////// Main Device Loop

/* Establish the LWJGL context as the master context, which will
 * be synchronized to all the slave contexts.
 */
static void init(ALCdevice *Device){
  ALCcontext *masterContext = alcGetCurrentContext();
  addContext(Device, masterContext);
}


static void renderData(ALCdevice *Device, int samples){
  if(!Device->Connected){return;}
  send_data *data = (send_data*)Device->ExtraData;
  ALCcontext *current = alcGetCurrentContext();

  ALuint i;
  for (i = 1; i < data->numContexts; i++){
    syncContexts(data->contexts[0]->ctx, data->contexts[i]->ctx);
  }

  if ((ALuint) samples > Device->UpdateSize){
    printf("exceeding internal buffer size; dropping samples\n");
    printf("requested %d; available %d\n", samples, Device->UpdateSize);
    samples = (int) Device->UpdateSize;
  }

  for (i = 0; i < data->numContexts; i++){
    context_data *ctxData = data->contexts[i];
    ALCcontext *ctx = ctxData->ctx;
    alcMakeContextCurrent(ctx);
    limitContext(Device, ctx);
    swapInContext(Device, ctxData);
    aluMixData(Device, ctxData->renderBuffer, samples);
    saveContext(Device, ctxData);
    unLimitContext(Device);
  }
  alcMakeContextCurrent(current);
}
#+end_src

The main loop synchronizes the master LWJGL context with all the slave
contexts, then walks each context, rendering just that context to its
audio-sample storage buffer.
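
For a sense of scale, the size of each per-context =renderBuffer=
(allocated in =addContext()= above) follows directly from the device
format. With illustrative numbers -- 16-bit samples, stereo output,
and an =UpdateSize= of 1024 sample frames; none of these values are
taken from the code -- the arithmetic is:

#+begin_src C
/* bytes = BytesFromDevFmt(FmtType) * NumChan * UpdateSize
 *       = 2 bytes/sample           * 2       * 1024
 *       = 4096 bytes per slave context, per render pass.
 */
#+end_src
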

** JNI Methods

At this point, we have the ability to create multiple listeners by
using the master/slave context trick, and the rendered audio data is
waiting patiently in internal buffers, one for each listener. We need
a way to transport this information to Java, and also a way to drive
this device from Java. The following JNI interface code is inspired
by the way LWJGL interfaces with =OpenAL=.

*** step
#+name: jni-step
#+begin_src C
//////////////////// JNI Methods

#include "com_aurellem_send_AudioSend.h"

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nstep
 * Signature: (JI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nstep
(JNIEnv *env, jclass clazz, jlong device, jint samples){
  UNUSED(env);UNUSED(clazz);
  renderData((ALCdevice*)((intptr_t)device), samples);
}
#+end_src

This device, unlike most of the other devices in =OpenAL=, does not
render sound unless asked. This enables the system to slow down or
speed up depending on the needs of the AIs who are using it to
listen. If the device tried to render samples in real-time, a
complicated AI whose mind takes 100 seconds of computer time to
simulate 1 second of AI-time would miss almost all of the sound in
its environment.


*** getSamples
#+name: jni-get-samples
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetSamples
 * Signature: (JLjava/nio/ByteBuffer;III)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ngetSamples
(JNIEnv *env, jclass clazz, jlong device, jobject buffer, jint position,
 jint samples, jint n){
  UNUSED(clazz);

  ALvoid *buffer_address =
    ((ALbyte *)(((char*)(*env)->GetDirectBufferAddress(env, buffer)) + position));
  ALCdevice *recorder = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)recorder->ExtraData;
  if ((ALuint)n >= data->numContexts){return;}
  memcpy(buffer_address, data->contexts[n]->renderBuffer,
         BytesFromDevFmt(recorder->FmtType) * recorder->NumChan * samples);
}
#+end_src

This is the transport layer between C and Java that will eventually
allow us to access rendered sound data from clojure.
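
Because rendering is pull-based, the caller decides how much simulated
time passes per step. As a sketch, a hypothetical native driver
(=stepAndFetch= is not part of the actual code) could advance a
44100 Hz device one sixtieth of a second at a time, using the same
=renderData= and =memcpy= sequence the two JNI methods above perform:

#+begin_src C
#include <string.h>

/* Hypothetical driver: render 1/60 s of audio on demand, then copy
   listener n's freshly mixed samples out of its renderBuffer. */
static void stepAndFetch(ALCdevice *device, int n, void *out){
  int samples = 44100 / 60;   /* 735 sample frames per tick */
  renderData(device, samples);
  send_data *data = (send_data*)device->ExtraData;
  memcpy(out, data->contexts[n]->renderBuffer,
         BytesFromDevFmt(device->FmtType) * device->NumChan * samples);
}
#+end_src
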

*** Listener Management

=addListener=, =setNthListenerf=, and =setNthListener3f= are
necessary to change the properties of any listener other than the
master one, since only the listener of the currently active context is
affected by the normal =OpenAL= listener calls.
#+name: listener-manage
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    naddListener
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_naddListener
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(env); UNUSED(clazz);
  //printf("creating new context via naddListener\n");
  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  ALCcontext *new = alcCreateContext(Device, NULL);
  addContext(Device, new);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListener3f
 * Signature: (IFFFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListener3f
(JNIEnv *env, jclass clazz, jint param,
 jfloat v1, jfloat v2, jfloat v3, jlong device, jint contextNum){
  UNUSED(env);UNUSED(clazz);

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)Device->ExtraData;

  ALCcontext *current = alcGetCurrentContext();
  if ((ALuint)contextNum >= data->numContexts){return;}
  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
  alListener3f(param, v1, v2, v3);
  alcMakeContextCurrent(current);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListenerf
 * Signature: (IFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListenerf
(JNIEnv *env, jclass clazz, jint param, jfloat v1, jlong device,
 jint contextNum){
  UNUSED(env);UNUSED(clazz);

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)Device->ExtraData;

  ALCcontext *current = alcGetCurrentContext();
  if ((ALuint)contextNum >= data->numContexts){return;}
  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
  alListenerf(param, v1);
  alcMakeContextCurrent(current);
}
#+end_src

*** Initialization
=initDevice= is called from the Java side after LWJGL has created its
context, and before any calls to =addListener=. It establishes the
LWJGL context as the master context.

=getAudioFormat= is a convenience function that uses JNI to build up a
=javax.sound.sampled.AudioFormat= object from data in the Device. This
way, there is no ambiguity about what the bits created by =step= and
returned by =getSamples= mean.
#+name: jni-init
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ninitDevice
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ninitDevice
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(env);UNUSED(clazz);
  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  init(Device);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetAudioFormat
 * Signature: (J)Ljavax/sound/sampled/AudioFormat;
 */
JNIEXPORT jobject JNICALL Java_com_aurellem_send_AudioSend_ngetAudioFormat
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(clazz);
  jclass AudioFormatClass =
    (*env)->FindClass(env, "javax/sound/sampled/AudioFormat");
  jmethodID AudioFormatConstructor =
    (*env)->GetMethodID(env, AudioFormatClass, "<init>", "(FIIZZ)V");

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  int isSigned;
  switch (Device->FmtType)
    {
    case DevFmtUByte:
    case DevFmtUShort: isSigned = 0; break;
    default :          isSigned = 1;
    }
  float frequency = Device->Frequency;
  int bitsPerFrame = (8 * BytesFromDevFmt(Device->FmtType));
  int channels = Device->NumChan;
  jobject format = (*env)->
    NewObject(
              env,AudioFormatClass,AudioFormatConstructor,
              frequency,
              bitsPerFrame,
              channels,
              isSigned,
              0);
  return format;
}
#+end_src

** Boring Device management stuff
This code is more-or-less copied verbatim from the other =OpenAL=
backends. It's the basis for =OpenAL='s primitive object system.
#+name: device-init
#+begin_src C
//////////////////// Device Initialization / Management

static const ALCchar sendDevice[] = "Multiple Audio Send";

static ALCboolean send_open_playback(ALCdevice *device,
                                     const ALCchar *deviceName)
{
  send_data *data;
  // stop any buffering for stdout, so that I can
  // see the printf statements in my terminal immediately
  setbuf(stdout, NULL);

  if(!deviceName)
    deviceName = sendDevice;
  else if(strcmp(deviceName, sendDevice) != 0)
    return ALC_FALSE;
  data = (send_data*)calloc(1, sizeof(*data));
  device->szDeviceName = strdup(deviceName);
  device->ExtraData = data;
  return ALC_TRUE;
}

static void send_close_playback(ALCdevice *device)
{
  send_data *data = (send_data*)device->ExtraData;
  alcMakeContextCurrent(NULL);
  ALuint i;
  // Destroy all slave contexts. LWJGL will take care of
  // its own context.
  for (i = 1; i < data->numContexts; i++){
    context_data *ctxData = data->contexts[i];
    alcDestroyContext(ctxData->ctx);
    free(ctxData->renderBuffer);
    free(ctxData);
  }
  free(data);
  device->ExtraData = NULL;
}

static ALCboolean send_reset_playback(ALCdevice *device)
{
  SetDefaultWFXChannelOrder(device);
  return ALC_TRUE;
}

static void send_stop_playback(ALCdevice *Device){
  UNUSED(Device);
}

static const BackendFuncs send_funcs = {
  send_open_playback,
  send_close_playback,
  send_reset_playback,
  send_stop_playback,
  NULL,
  NULL,  /* These would be filled with functions to */
  NULL,  /* handle capturing audio if we were into  */
  NULL,  /* that sort of thing...                   */
  NULL,
  NULL
};

ALCboolean alc_send_init(BackendFuncs *func_list){
  *func_list = send_funcs;
  return ALC_TRUE;
}

void alc_send_deinit(void){}

void alc_send_probe(enum DevProbe type)
{
  switch(type)
    {
    case DEVICE_PROBE:
      AppendDeviceList(sendDevice);
      break;
    case ALL_DEVICE_PROBE:
      AppendAllDeviceList(sendDevice);
      break;
    case CAPTURE_DEVICE_PROBE:
      break;
    }
}
#+end_src

* The Java interface, =AudioSend=

The Java interface to the Send Device follows naturally from the JNI
definitions. It is included here for completeness. The only thing here
of note is the =deviceID=. This is available from LWJGL, but the only
way to get it is via reflection. Unfortunately, there is no other way
to control the Send device than to obtain a pointer to it.

#+include: "../java/src/com/aurellem/send/AudioSend.java" src java :exports code

* Finally, Ears in clojure!

Now that the infrastructure is complete (modulo a few patches to
jMonkeyEngine3 to support accessing this modified version of =OpenAL=
that are not worth discussing), the clojure ear abstraction is rather
simple. Just as there were =SceneProcessors= for vision, there are
now =SoundProcessors= for hearing.

#+include: "../../jmeCapture/src/com/aurellem/capture/audio/SoundProcessor.java" src java

#+name: ears
#+begin_src clojure
(ns cortex.hearing
  "Simulate the sense of hearing in jMonkeyEngine3. Enables multiple
   listeners at different positions in the same world. Passes vectors
   of floats in the range [-1.0 -- 1.0] in PCM format to any arbitrary
   function."
  {:author "Robert McIntyre"}
  (:use (cortex world util sense))
  (:import java.nio.ByteBuffer)
  (:import org.tritonus.share.sampled.FloatSampleTools)
  (:import com.aurellem.capture.audio.SoundProcessor)
  (:import javax.sound.sampled.AudioFormat))

(cortex.import/mega-import-jme3)

(defn sound-processor
  "Deals with converting ByteBuffers into vectors of floats so that
   the continuation functions can be defined in terms of immutable
   stuff."
  [continuation]
  (proxy [SoundProcessor] []
    (cleanup [])
    (process
      [#^ByteBuffer audioSamples numSamples #^AudioFormat audioFormat]
      (let [bytes (byte-array numSamples)
            num-floats (/ numSamples (.getFrameSize audioFormat))
            floats (float-array num-floats)]
        (.get audioSamples bytes 0 numSamples)
        (FloatSampleTools/byte2floatInterleaved
         bytes 0 floats 0 num-floats audioFormat)
        (continuation
         (vec floats))))))

;; Ears work the same way as vision.

;; (hearing creature) will return [init-functions
;; sensor-functions]. The init functions each take the world and
;; register a SoundProcessor that does Fourier transforms on the
;; incoming sound data, making it available to each sensor function.

(defn creature-ears
  "Return the children of the creature's \"ears\" node."
  ;;dylan
  ;;"The ear nodes which are children of the \"ears\" node in the
  ;;creature."
  [#^Node creature]
  (if-let [ear-node (.getChild creature "ears")]
    (seq (.getChildren ear-node))
    (do (println-repl "could not find ears node") [])))


;;dylan (defn follow-sense, adjoin-sense, attach-stimuli,
;;anchor-qualia, augment-organ, with-organ


(defn update-listener-velocity
  "Update the listener's velocity every update loop."
  [#^Spatial obj #^Listener lis]
  (let [old-position (atom (.getLocation lis))]
    (.addControl
     obj
     (proxy [AbstractControl] []
       (controlUpdate [tpf]
         (let [new-position (.getLocation lis)]
           (.setVelocity
            lis
            (.mult (.subtract new-position @old-position)
                   (float (/ tpf))))
           (reset! old-position new-position)))
       (controlRender [_ _])))))

(import com.aurellem.capture.audio.AudioSendRenderer)

(defn attach-ear
  [#^Application world #^Node creature #^Spatial ear continuation]
  (let [target (closest-node creature ear)
        lis (Listener.)
        audio-renderer (.getAudioRenderer world)
        sp (sound-processor continuation)]
    (.setLocation lis (.getWorldTranslation ear))
    (.setRotation lis (.getWorldRotation ear))
    (bind-sense target lis)
    (update-listener-velocity target lis)
    (.addListener audio-renderer lis)
    (.registerSoundProcessor audio-renderer lis sp)))
(defn enable-hearing
  [#^Node creature #^Spatial ear]
  (let [hearing-data (atom [])]
    [(fn [world]
       (attach-ear world creature ear
                   (fn [data]
                     (reset! hearing-data (vec data)))))
     [(fn []
        (let [data @hearing-data
              topology
              (vec (map #(vector % 0) (range 0 (count data))))
              scaled-data
              (vec
               (map
                #(rem (int (* 255 (/ (+ 1 %) 2))) 256)
                data))]
          [topology scaled-data]))
      ]]))

(defn hearing
  [#^Node creature]
  (reduce
   (fn [[init-a senses-a]
        [init-b senses-b]]
     [(conj init-a init-b)
      (into senses-a senses-b)])
   [[][]]
   (for [ear (creature-ears creature)]
     (enable-hearing creature ear))))

#+end_src

* Example

#+name: test-hearing
#+begin_src clojure :results silent
(ns cortex.test.hearing
  (:use (cortex world util hearing))
  (:import (com.jme3.audio AudioNode Listener))
  (:import com.jme3.scene.Node
           com.jme3.system.AppSettings))

(defn setup-fn [world]
  (let [listener (Listener.)]
    (add-ear world listener #(println-repl (nth % 0)))))

(defn play-sound [node world value]
  (if (not value)
    (do
      (.playSource (.getAudioRenderer world) node))))

(defn test-basic-hearing []
  (let [node1 (AudioNode. (asset-manager) "Sounds/pure.wav" false false)]
    (world
     (Node.)
     {"key-space" (partial play-sound node1)}
     setup-fn
     no-op)))

(defn test-advanced-hearing
  "Testing hearing:
   You should see a blue sphere flying around several
   cubes. As the sphere approaches each cube, it turns
   green."
  []
  (doto (com.aurellem.capture.examples.Advanced.)
    (.setSettings
     (doto (AppSettings. true)
       (.setAudioRenderer "Send")))
    (.setShowSettings false)
    (.setPauseOnLostFocus false)))
#+end_src

This extremely basic program prints out the first sample it encounters
at every time stamp. You can see the rendered sound being printed at
the REPL.

 - As a bonus, this method of capturing audio for AI can also be used
   to capture perfect audio from a jMonkeyEngine application, for use
   in demos and the like.


* COMMENT Code Generation

#+begin_src clojure :tangle ../cortex/src/cortex/hearing.clj
<<ears>>
#+end_src

#+begin_src clojure :tangle ../src/cortex/test/hearing.clj
<<test-hearing>>
#+end_src

#+begin_src C :tangle ../../audio-send/Alc/backends/send.c
<<send-header>>
<<send-state>>
<<sync-macros>>
<<sync-sources>>
<<sync-contexts>>
<<context-creation>>
<<context-switching>>
<<main-loop>>
<<jni-step>>
<<jni-get-samples>>
<<listener-manage>>
<<jni-init>>
<<device-init>>
#+end_src
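
One closing note on the sensor encoding: the =scaled-data= step in
=enable-hearing= maps each PCM sample \(x \in [-1, 1]\) to an
eight-bit intensity, so hearing can be displayed with the same
machinery as the other senses:

\[
s(x) = \left\lfloor 255 \cdot \frac{x + 1}{2} \right\rfloor \bmod 256,
\qquad s(-1) = 0, \quad s(0) = 127, \quad s(1) = 255.
\]
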
--- a/org/ideas.org	Fri Feb 03 06:52:17 2012 -0700
+++ b/org/ideas.org	Sat Feb 04 01:44:06 2012 -0700
@@ -96,3 +96,13 @@
  - [ ] show sensor maps in HUD display? -- 4 days
  - [ ] show sensor maps in AWT display? -- 2 days
 
+* refactoring objectives
+  - [ ] consistent, high-quality names
+  - [ ] joint-creation function in the style of others, kill blender-creature
+  - [ ] docstrings for every function
+  - [ ] common image-loading code
+  - [ ] refactor display/debug code
+
+
+
+for each sense,
--- a/org/movement.org	Fri Feb 03 06:52:17 2012 -0700
+++ b/org/movement.org	Sat Feb 04 01:44:06 2012 -0700
@@ -15,9 +15,6 @@
 
 (cortex.import/mega-import-jme3)
 
-
-
-
 ;; here's how motor-control/ proprioception will work: Each muscle is
 ;; defined by a 1-D array of numbers (the "motor pool") each of which
 ;; represent muscle fibers. A muscle also has a scalar :strength
@@ -30,11 +27,6 @@
 ;; 1-D array to simulate the layout of actual human muscles, which are
 ;; capable of more precise movements when exerting less force.
 
-;; I don't know how to encode proprioception, so for now, just return
-;; a function for each joint that returns a triplet of floats which
-;; represent relative roll, pitch, and yaw. Write display code for
-;; this though.
-
 (defn muscle-fiber-values
   "get motor pool strengths"
   [#^BufferedImage image]
@@ -46,7 +38,6 @@
      0x0000FF
      (.getRGB image x 0)))))))
 
-
 (defn creature-muscles
   "Return the children of the creature's \"muscles\" node."
   [#^Node creature]
--- a/org/test-creature.org	Fri Feb 03 06:52:17 2012 -0700
+++ b/org/test-creature.org	Sat Feb 04 01:44:06 2012 -0700
@@ -50,7 +50,7 @@
 (require 'cortex.import)
 (cortex.import/mega-import-jme3)
 (use '(cortex world util body hearing touch vision sense
-proprioception movement))
+       proprioception movement))
 
 (rlm.rlm-commands/help)
 (import java.awt.image.BufferedImage)
@@ -280,6 +280,9 @@
 ;; to the spatial.
 
 
+;;dylan (defn follow-sense, adjoin-sense, attach-stimuli,
+;;anchor-qualia, augment-organ, with-organ
+
 (defn follow-test
   "show a camera that stays in the same relative position to a blue cube."
   []