# HG changeset patch
# User Robert McIntyre
# Date 1320347319 25200
# Node ID 19ff95c69cf5b5f34d408c8da410808341b7cf9b
# Parent 63312ec4a2bf354235d2f3fcdbc70ad1fa1100ec
moved send.c to org file

diff -r 63312ec4a2bf -r 19ff95c69cf5 .hgignore
--- a/.hgignore	Mon Oct 31 08:01:08 2011 -0700
+++ b/.hgignore	Thu Nov 03 12:08:39 2011 -0700
@@ -6,4 +6,5 @@
 java/bin*
 java/dist*
 java/build/*
-java/headers/*
\ No newline at end of file
+java/headers/*
+Alc/backends/send.c
\ No newline at end of file
diff -r 63312ec4a2bf -r 19ff95c69cf5 Alc/backends/send.c
--- a/Alc/backends/send.c	Mon Oct 31 08:01:08 2011 -0700
+++ b/Alc/backends/send.c	Thu Nov 03 12:08:39 2011 -0700
@@ -1,3 +1,4 @@
+
 #include "config.h"
 #include <stdlib.h>
 #include "alMain.h"
@@ -304,15 +305,13 @@
   ALCdevice *recorder = (ALCdevice*) ((intptr_t)device);
   send_data *data = (send_data*)recorder->ExtraData;
   if ((ALuint)n >= data->numContexts){return;}
-  //if ((uint) samples > data->size){
-  //  samples = (int) data->size;
-  //}
-  printf("Want %d samples for listener %d\n", samples, n);
-  printf("Device's format type is %d bytes per sample,\n",
-         BytesFromDevFmt(recorder->FmtType));
-  printf("and it has %d channels, making for %d requested bytes\n",
-         recorder->NumChan,
-         BytesFromDevFmt(recorder->FmtType) * recorder->NumChan * samples);
+
+  //printf("Want %d samples for listener %d\n", samples, n);
+  //printf("Device's format type is %d bytes per sample,\n",
+  //       BytesFromDevFmt(recorder->FmtType));
+  //printf("and it has %d channels, making for %d requested bytes\n",
+  //       recorder->NumChan,
+  //       BytesFromDevFmt(recorder->FmtType) * recorder->NumChan * samples);
 
   memcpy(buffer_address, data->contexts[n]->renderBuffer,
          BytesFromDevFmt(recorder->FmtType) * recorder->NumChan * samples);
@@ -327,7 +326,7 @@
 JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_naddListener
 (JNIEnv *env, jclass clazz, jlong device){
   UNUSED(env); UNUSED(clazz);
-  printf("creating new context via naddListener\n");
+  //printf("creating new context via naddListener\n");
   ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
   ALCcontext *new = alcCreateContext(Device, NULL);
   addContext(Device, new);
@@ -418,8 +417,8 @@
   int channels = Device->NumChan;
 
 
-  printf("freq = %f, bpf = %d, channels = %d, signed? = %d\n",
-         frequency, bitsPerFrame, channels, isSigned);
+  //printf("freq = %f, bpf = %d, channels = %d, signed? = %d\n",
+  //       frequency, bitsPerFrame, channels, isSigned);
 
   jobject format = (*env)->
     NewObject(
@@ -517,6 +516,3 @@
       break;
     }
 }
-
-
-
diff -r 63312ec4a2bf -r 19ff95c69cf5 org/ear.org
--- a/org/ear.org	Mon Oct 31 08:01:08 2011 -0700
+++ b/org/ear.org	Thu Nov 03 12:08:39 2011 -0700
@@ -1,25 +1,617 @@
-#+title: The EARS!
+#+title: Simulated Sense of Hearing
 #+author: Robert McIntyre
 #+email: rlm@mit.edu
-#+MATHJAX: align:"left" mathml:t path:"../aurellem/src/MathJax/MathJax.js"
-#+STYLE:
+#+description: Simulating multiple listeners and the sense of hearing in jMonkeyEngine3
+#+keywords: simulated hearing, openal, clojure, jMonkeyEngine3, LWJGL, AI
+#+SETUPFILE: ../../aurellem/org/setup.org
+#+INCLUDE: ../../aurellem/org/level-0.org
 #+BABEL: :exports both :noweb yes :cache no :mkdirp yes
-#+INCLUDE: ../aurellem/src/templates/level-0.org
-#+description: Simulating multiple listeners and the sense of hearing in uMonkeyEngine3
-* Ears!
+* Hearing
 I want to be able to place ears in a similar manner to how I place
 the eyes. I want to be able to place ears in a unique spatial
 position, and receive as output at every tick the FFT of whatever
 signals are happening at that point.
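+
+Concretely, the interface I'm after looks something like the
+following. This is only a sketch -- =add-ear= is defined near the end
+of this section, and the names and argument order here are
+illustrative, not final:
+
+#+begin_src clojure
+;; hypothetical usage: attach an ear to a creature's head and report
+;; how much sound it hears on each update.
+(add-ear world creature-head
+         (fn [samples]
+           (println (count samples) "bytes of sound this tick")))
+#+end_src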
-#+srcname: ear-header
+Hearing is one of the more difficult senses to simulate, because there
+is less support for obtaining the actual sound data that is processed
+by jMonkeyEngine3.
+
+jMonkeyEngine's sound system works as follows:
+
+ - jMonkeyEngine uses the =AppSettings= for the particular application
+   to determine what sort of =AudioRenderer= should be used.
+ - although some support is provided for multiple audio rendering
+   backends, jMonkeyEngine at the time of this writing will either
+   pick no =AudioRenderer= at all, or the =LwjglAudioRenderer=
+ - jMonkeyEngine tries to figure out what sort of system you're
+   running and extracts the appropriate native libraries.
+ - the =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (lightweight java game
+   library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]]
+ - =OpenAL= calculates the 3D sound localization and feeds a stream of
+   sound to any of various sound output devices with which it knows
+   how to communicate.
+
+A consequence of this is that there's no way to access the actual
+sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
+one /listener/, which normally isn't a problem for games, but becomes
+a problem when trying to make multiple AI creatures that can each hear
+the world from a different perspective.
+
+To make many AI creatures in jMonkeyEngine that can each hear the
+world from their own perspective, it is necessary to go all the way
+back to =OpenAL= and implement support for simulated hearing there.
+
+** =OpenAL= Devices
+
+=OpenAL= goes to great lengths to support many different systems, all
+with different sound capabilities and interfaces. It accomplishes this
+difficult task by providing code for many different sound backends in
+pseudo-objects called /Devices/. There's a device for the Linux Open
+Sound System and the Advanced Linux Sound Architecture, there's one
+for Direct Sound on Windows, there's even one for Solaris. =OpenAL=
+solves the problem of platform independence by providing all these
+Devices.
+
+Wrapper libraries such as LWJGL are free to examine the system on
+which they are running and then select an appropriate device for that
+system.
+
+There are also a few "special" devices that don't interface with any
+particular system. These include the Null Device, which doesn't do
+anything, and the Wave Device, which writes whatever sound it receives
+to a file, if everything has been set up correctly when configuring
+=OpenAL=.
+
+Actual mixing of the sound data happens in the Devices, and they are
+the only point in the sound rendering process where this data is
+available.
+
+Therefore, in order to support multiple listeners and get the sound
+data in a form that the AIs can use, it is necessary to create a new
+Device which supports these features.
+
+** The Send Device
+Adding a device to OpenAL is rather tricky -- there are five separate
+files in the =OpenAL= source tree that must be modified to do so. I've
+documented this process [[./add-new-device.org][here]] for anyone who is interested; the gist
+of the registration step is sketched below.
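+
+The heart of it is adding an entry for the new backend to the table of
+backends in =Alc/ALc.c=. In the version of the OpenAL Soft sources I'm
+working against, that entry looks something like this (the surrounding
+table is abbreviated here, and the other four files are covered in the
+document linked above):
+
+#+begin_src C
+static struct BackendInfo BackendList[] = {
+  // ... entries for alsa, oss, dsound, wave, null, etc. ...
+  { "send", alc_send_init, alc_send_deinit, alc_send_probe, EmptyFuncs },
+  { NULL, NULL, NULL, NULL, EmptyFuncs }
+};
+#+end_src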
+
+#+srcname: send
+#+begin_src C
+#include "config.h"
+#include <stdlib.h>
+#include "alMain.h"
+#include "AL/al.h"
+#include "AL/alc.h"
+#include "alSource.h"
+#include <jni.h>
+
+//////////////////// Summary
+
+struct send_data;
+struct context_data;
+
+static void addContext(ALCdevice *, ALCcontext *);
+static void syncContexts(ALCcontext *master, ALCcontext *slave);
+static void syncSources(ALsource *master, ALsource *slave,
+                        ALCcontext *masterCtx, ALCcontext *slaveCtx);
+
+static void syncSourcei(ALuint master, ALuint slave,
+                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
+static void syncSourcef(ALuint master, ALuint slave,
+                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
+static void syncSource3f(ALuint master, ALuint slave,
+                         ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
+
+static void swapInContext(ALCdevice *, struct context_data *);
+static void saveContext(ALCdevice *, struct context_data *);
+static void limitContext(ALCdevice *, ALCcontext *);
+static void unLimitContext(ALCdevice *);
+
+static void init(ALCdevice *);
+static void renderData(ALCdevice *, int samples);
+
+#define UNUSED(x) (void)(x)
+
+//////////////////// State
+
+typedef struct context_data {
+  ALfloat ClickRemoval[MAXCHANNELS];
+  ALfloat PendingClicks[MAXCHANNELS];
+  ALvoid *renderBuffer;
+  ALCcontext *ctx;
+} context_data;
+
+typedef struct send_data {
+  ALuint size;
+  context_data **contexts;
+  ALuint numContexts;
+  ALuint maxContexts;
+} send_data;
+
+
+
+//////////////////// Context Creation / Synchronization
+
+#define _MAKE_SYNC(NAME, INIT_EXPR, GET_EXPR, SET_EXPR)  \
+  void NAME (ALuint sourceID1, ALuint sourceID2,         \
+             ALCcontext *ctx1, ALCcontext *ctx2,         \
+             ALenum param){                              \
+    INIT_EXPR;                                           \
+    ALCcontext *current = alcGetCurrentContext();        \
+    alcMakeContextCurrent(ctx1);                         \
+    GET_EXPR;                                            \
+    alcMakeContextCurrent(ctx2);                         \
+    SET_EXPR;                                            \
+    alcMakeContextCurrent(current);                      \
+  }
+
+#define MAKE_SYNC(NAME, TYPE, GET, SET)                  \
+  _MAKE_SYNC(NAME,                                       \
+             TYPE value,                                 \
+             GET(sourceID1, param, &value),              \
+             SET(sourceID2, param, value))
+
+#define MAKE_SYNC3(NAME, TYPE, GET, SET)                        \
+  _MAKE_SYNC(NAME,                                              \
+             TYPE value1; TYPE value2; TYPE value3;,            \
+             GET(sourceID1, param, &value1, &value2, &value3),  \
+             SET(sourceID2, param, value1, value2, value3))
+
+MAKE_SYNC( syncSourcei,  ALint,   alGetSourcei,  alSourcei);
+MAKE_SYNC( syncSourcef,  ALfloat, alGetSourcef,  alSourcef);
+MAKE_SYNC3(syncSource3i, ALint,   alGetSource3i, alSource3i);
+MAKE_SYNC3(syncSource3f, ALfloat, alGetSource3f, alSource3f);
+
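+/* For example, MAKE_SYNC(syncSourcef, ALfloat, alGetSourcef, alSourcef)
+ * expands to the following function, which reads a parameter from a
+ * source in one context, writes it to a source in another, and then
+ * restores whatever context was current:
+ *
+ *   void syncSourcef(ALuint sourceID1, ALuint sourceID2,
+ *                    ALCcontext *ctx1, ALCcontext *ctx2, ALenum param){
+ *     ALfloat value;
+ *     ALCcontext *current = alcGetCurrentContext();
+ *     alcMakeContextCurrent(ctx1);
+ *     alGetSourcef(sourceID1, param, &value);
+ *     alcMakeContextCurrent(ctx2);
+ *     alSourcef(sourceID2, param, value);
+ *     alcMakeContextCurrent(current);
+ *   }
+ */
+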
+void syncSources(ALsource *masterSource, ALsource *slaveSource,
+                 ALCcontext *masterCtx, ALCcontext *slaveCtx){
+  ALuint master = masterSource->source;
+  ALuint slave = slaveSource->source;
+  ALCcontext *current = alcGetCurrentContext();
+
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
+  syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);
+
+  syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
+  syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
+  syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);
+
+  syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
+  syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);
+
+  alcMakeContextCurrent(masterCtx);
+  ALint source_type;
+  alGetSourcei(master, AL_SOURCE_TYPE, &source_type);
+
+  // Only static sources are currently synchronized!
+  if (AL_STATIC == source_type){
+    ALint master_buffer;
+    ALint slave_buffer;
+    alGetSourcei(master, AL_BUFFER, &master_buffer);
+    alcMakeContextCurrent(slaveCtx);
+    alGetSourcei(slave, AL_BUFFER, &slave_buffer);
+    if (master_buffer != slave_buffer){
+      alSourcei(slave, AL_BUFFER, master_buffer);
+    }
+  }
+
+  // Synchronize the state of the two sources.
+  alcMakeContextCurrent(masterCtx);
+  ALint masterState;
+  ALint slaveState;
+
+  alGetSourcei(master, AL_SOURCE_STATE, &masterState);
+  alcMakeContextCurrent(slaveCtx);
+  alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);
+
+  if (masterState != slaveState){
+    switch (masterState){
+    case AL_INITIAL : alSourceRewind(slave); break;
+    case AL_PLAYING : alSourcePlay(slave);   break;
+    case AL_PAUSED  : alSourcePause(slave);  break;
+    case AL_STOPPED : alSourceStop(slave);   break;
+    }
+  }
+  // Restore whatever context was previously active.
+  alcMakeContextCurrent(current);
+}
+
+
+void syncContexts(ALCcontext *master, ALCcontext *slave){
+  /* If there aren't sufficient sources in slave to mirror
+     the sources in master, create them. */
+  ALCcontext *current = alcGetCurrentContext();
+
+  UIntMap *masterSourceMap = &(master->SourceMap);
+  UIntMap *slaveSourceMap = &(slave->SourceMap);
+  ALuint numMasterSources = masterSourceMap->size;
+  ALuint numSlaveSources = slaveSourceMap->size;
+
+  alcMakeContextCurrent(slave);
+  if (numSlaveSources < numMasterSources){
+    ALuint numMissingSources = numMasterSources - numSlaveSources;
+    ALuint newSources[numMissingSources];
+    alGenSources(numMissingSources, newSources);
+  }
+
+  /* Now, slave is guaranteed to have at least as many sources
+     as master. Sync each source from master to the corresponding
+     source in slave. */
+  int i;
+  for(i = 0; i < masterSourceMap->size; i++){
+    syncSources((ALsource*)masterSourceMap->array[i].value,
+                (ALsource*)slaveSourceMap->array[i].value,
+                master, slave);
+  }
+  alcMakeContextCurrent(current);
+}
+
+static void addContext(ALCdevice *Device, ALCcontext *context){
+  send_data *data = (send_data*)Device->ExtraData;
+  // expand the array of contexts if necessary
+  if (data->numContexts >= data->maxContexts){
+    ALuint newMaxContexts = data->maxContexts*2 + 1;
+    // contexts is an array of pointers, so each slot
+    // is only sizeof(context_data*) bytes wide.
+    data->contexts =
+      realloc(data->contexts, newMaxContexts*sizeof(context_data*));
+    data->maxContexts = newMaxContexts;
+  }
+  // create a new context_data and add it to the main array
+  context_data *ctxData;
+  ctxData = (context_data*)calloc(1, sizeof(*ctxData));
+  ctxData->renderBuffer =
+    malloc(BytesFromDevFmt(Device->FmtType) *
+           Device->NumChan * Device->UpdateSize);
+  ctxData->ctx = context;
+
+  data->contexts[data->numContexts] = ctxData;
+  data->numContexts++;
+}
+
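+/* Note on renderBuffer sizing: each listener's buffer holds exactly
+ * one device update's worth of mixed audio.  With hypothetical
+ * settings of 16-bit samples (2 bytes), 2 channels, and an UpdateSize
+ * of 1024, that is 2 * 2 * 1024 = 4096 bytes per listener.
+ */
+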
+
+//////////////////// Context Switching
+
+/* A device brings along with it two pieces of state
+ * which have to be swapped in and out with each context.
+ */
+static void swapInContext(ALCdevice *Device, context_data *ctxData){
+  memcpy(Device->ClickRemoval, ctxData->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
+  memcpy(Device->PendingClicks, ctxData->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
+}
+
+static void saveContext(ALCdevice *Device, context_data *ctxData){
+  memcpy(ctxData->ClickRemoval, Device->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
+  memcpy(ctxData->PendingClicks, Device->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
+}
+
+static ALCcontext **currentContext;
+static ALuint currentNumContext;
+
+/* By default, all contexts are rendered at once for each call to aluMixData.
+ * This function uses the internals of the ALCdevice struct to temporarily
+ * cause aluMixData to only render the chosen context.
+ */
+static void limitContext(ALCdevice *Device, ALCcontext *ctx){
+  currentContext = Device->Contexts;
+  currentNumContext = Device->NumContexts;
+  Device->Contexts = &ctx;
+  Device->NumContexts = 1;
+}
+
+static void unLimitContext(ALCdevice *Device){
+  Device->Contexts = currentContext;
+  Device->NumContexts = currentNumContext;
+}
+
+
+//////////////////// Main Device Loop
+
+/* Establish the LWJGL context as the master context, which will
+ * be synchronized to all the slave contexts.
+ */
+static void init(ALCdevice *Device){
+  ALCcontext *masterContext = alcGetCurrentContext();
+  addContext(Device, masterContext);
+}
+
+
+static void renderData(ALCdevice *Device, int samples){
+  if(!Device->Connected){return;}
+  send_data *data = (send_data*)Device->ExtraData;
+  ALCcontext *current = alcGetCurrentContext();
+
+  ALuint i;
+  for (i = 1; i < data->numContexts; i++){
+    syncContexts(data->contexts[0]->ctx , data->contexts[i]->ctx);
+  }
+
+  if ((ALuint) samples > Device->UpdateSize){
+    printf("exceeding internal buffer size; dropping samples\n");
+    printf("requested %d; available %d\n", samples, Device->UpdateSize);
+    samples = (int) Device->UpdateSize;
+  }
+
+  for (i = 0; i < data->numContexts; i++){
+    context_data *ctxData = data->contexts[i];
+    ALCcontext *ctx = ctxData->ctx;
+    alcMakeContextCurrent(ctx);
+    limitContext(Device, ctx);
+    swapInContext(Device, ctxData);
+    aluMixData(Device, ctxData->renderBuffer, samples);
+    saveContext(Device, ctxData);
+    unLimitContext(Device);
+  }
+  alcMakeContextCurrent(current);
+}
+
+
+//////////////////// JNI Methods
+
+#include "com_aurellem_send_AudioSend.h"
+
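+/* com_aurellem_send_AudioSend.h is the javah-generated header for the
+ * Java class com.aurellem.send.AudioSend (the class itself is not part
+ * of this patch), which declares the static native methods implemented
+ * below, e.g.
+ *
+ *   public static native void nstep(long device, int samples);
+ *   public static native void ngetSamples(long device, ByteBuffer buf,
+ *                                         int position, int samples, int n);
+ */
+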
+/*
+ * Class:     com_aurellem_send_AudioSend
+ * Method:    nstep
+ * Signature: (JI)V
+ */
+JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nstep
+(JNIEnv *env, jclass clazz, jlong device, jint samples){
+  UNUSED(env);UNUSED(clazz);
+  renderData((ALCdevice*)((intptr_t)device), samples);
+}
+
+/*
+ * Class:     com_aurellem_send_AudioSend
+ * Method:    ngetSamples
+ * Signature: (JLjava/nio/ByteBuffer;III)V
+ */
+JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ngetSamples
+(JNIEnv *env, jclass clazz, jlong device, jobject buffer, jint position,
+ jint samples, jint n){
+  UNUSED(clazz);
+
+  ALvoid *buffer_address =
+    ((ALbyte *)(((char*)(*env)->GetDirectBufferAddress(env, buffer)) + position));
+  ALCdevice *recorder = (ALCdevice*) ((intptr_t)device);
+  send_data *data = (send_data*)recorder->ExtraData;
+  // valid listener indices run from 0 to numContexts-1
+  if ((ALuint)n >= data->numContexts){return;}
+
+  //printf("Want %d samples for listener %d\n", samples, n);
+  //printf("Device's format type is %d bytes per sample,\n",
+  //       BytesFromDevFmt(recorder->FmtType));
+  //printf("and it has %d channels, making for %d requested bytes\n",
+  //       recorder->NumChan,
+  //       BytesFromDevFmt(recorder->FmtType) * recorder->NumChan * samples);
+
+  memcpy(buffer_address, data->contexts[n]->renderBuffer,
+         BytesFromDevFmt(recorder->FmtType) * recorder->NumChan * samples);
+  //samples*sizeof(ALfloat));
+}
+
+/*
+ * Class:     com_aurellem_send_AudioSend
+ * Method:    naddListener
+ * Signature: (J)V
+ */
+JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_naddListener
+(JNIEnv *env, jclass clazz, jlong device){
+  UNUSED(env); UNUSED(clazz);
+  //printf("creating new context via naddListener\n");
+  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
+  ALCcontext *new = alcCreateContext(Device, NULL);
+  addContext(Device, new);
+}
+
+/*
+ * Class:     com_aurellem_send_AudioSend
+ * Method:    nsetNthListener3f
+ * Signature: (IFFFJI)V
+ */
+JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListener3f
+  (JNIEnv *env, jclass clazz, jint param,
+   jfloat v1, jfloat v2, jfloat v3, jlong device, jint contextNum){
+  UNUSED(env);UNUSED(clazz);
+
+  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
+  send_data *data = (send_data*)Device->ExtraData;
+
+  ALCcontext *current = alcGetCurrentContext();
+  if ((ALuint)contextNum >= data->numContexts){return;}
+  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
+  alListener3f(param, v1, v2, v3);
+  alcMakeContextCurrent(current);
+}
+
+/*
+ * Class:     com_aurellem_send_AudioSend
+ * Method:    nsetNthListenerf
+ * Signature: (IFJI)V
+ */
+JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListenerf
+(JNIEnv *env, jclass clazz, jint param, jfloat v1, jlong device,
+ jint contextNum){
+
+  UNUSED(env);UNUSED(clazz);
+
+  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
+  send_data *data = (send_data*)Device->ExtraData;
+
+  ALCcontext *current = alcGetCurrentContext();
+  if ((ALuint)contextNum >= data->numContexts){return;}
+  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
+  alListenerf(param, v1);
+  alcMakeContextCurrent(current);
+}
+
+/*
+ * Class:     com_aurellem_send_AudioSend
+ * Method:    ninitDevice
+ * Signature: (J)V
+ */
+JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ninitDevice
+(JNIEnv *env, jclass clazz, jlong device){
+  UNUSED(env);UNUSED(clazz);
+
+  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
+  init(Device);
+}
+
+
+/*
+ * Class:     com_aurellem_send_AudioSend
+ * Method:    ngetAudioFormat
+ * Signature: (J)Ljavax/sound/sampled/AudioFormat;
+ */
+JNIEXPORT jobject JNICALL Java_com_aurellem_send_AudioSend_ngetAudioFormat
+(JNIEnv *env, jclass clazz, jlong device){
+  UNUSED(clazz);
+  jclass AudioFormatClass =
+    (*env)->FindClass(env, "javax/sound/sampled/AudioFormat");
+  jmethodID AudioFormatConstructor =
+    (*env)->GetMethodID(env, AudioFormatClass, "<init>", "(FIIZZ)V");
+
+  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
+
+  int isSigned;
+  switch (Device->FmtType)
+    {
+    case DevFmtUByte:
+    case DevFmtUShort: isSigned = 0; break;
+    default : isSigned = 1;
+    }
+  float frequency = Device->Frequency;
+  int bitsPerFrame = (8 * BytesFromDevFmt(Device->FmtType));
+  int channels = Device->NumChan;
+
+  //printf("freq = %f, bpf = %d, channels = %d, signed? = %d\n",
+  //       frequency, bitsPerFrame, channels, isSigned);
+
+  jobject format = (*env)->
+    NewObject(
+              env,AudioFormatClass,AudioFormatConstructor,
+              frequency,
+              bitsPerFrame,
+              channels,
+              isSigned,
+              0);
+  return format;
+}
+
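+/* Worked example: a stereo DevFmtShort device running at 44100 Hz is
+ * signed 16-bit, so the call above is equivalent to constructing
+ *   new AudioFormat(44100.0f, 16, 2, true, false);
+ * on the Java side (the final 0 means little-endian).
+ */
+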
+
+//////////////////// Device Initialization / Management
+
+static const ALCchar sendDevice[] = "Multiple Audio Send";
+
+static ALCboolean send_open_playback(ALCdevice *device,
+                                     const ALCchar *deviceName)
+{
+  send_data *data;
+  // stop any buffering for stdout, so that I can
+  // see the printf statements in my terminal immediately
+  setbuf(stdout, NULL);
+
+  if(!deviceName)
+    deviceName = sendDevice;
+  else if(strcmp(deviceName, sendDevice) != 0)
+    return ALC_FALSE;
+  data = (send_data*)calloc(1, sizeof(*data));
+  device->szDeviceName = strdup(deviceName);
+  device->ExtraData = data;
+  return ALC_TRUE;
+}
+
+static void send_close_playback(ALCdevice *device)
+{
+  send_data *data = (send_data*)device->ExtraData;
+  alcMakeContextCurrent(NULL);
+  ALuint i;
+  // Destroy all slave contexts. LWJGL will take care of
+  // its own context.
+  for (i = 1; i < data->numContexts; i++){
+    context_data *ctxData = data->contexts[i];
+    alcDestroyContext(ctxData->ctx);
+    free(ctxData->renderBuffer);
+    free(ctxData);
+  }
+  free(data);
+  device->ExtraData = NULL;
+}
+
+static ALCboolean send_reset_playback(ALCdevice *device)
+{
+  SetDefaultWFXChannelOrder(device);
+  return ALC_TRUE;
+}
+
+static void send_stop_playback(ALCdevice *Device){
+  UNUSED(Device);
+}
+
+static const BackendFuncs send_funcs = {
+  send_open_playback,
+  send_close_playback,
+  send_reset_playback,
+  send_stop_playback,
+  NULL,
+  NULL,  /* These would be filled with functions to */
+  NULL,  /* handle capturing audio if we were into  */
+  NULL,  /* that sort of thing...                   */
+  NULL,
+  NULL
+};
+
+ALCboolean alc_send_init(BackendFuncs *func_list){
+  *func_list = send_funcs;
+  return ALC_TRUE;
+}
+
+void alc_send_deinit(void){}
+
+void alc_send_probe(enum DevProbe type)
+{
+  switch(type)
+    {
+    case DEVICE_PROBE:
+      AppendDeviceList(sendDevice);
+      break;
+    case ALL_DEVICE_PROBE:
+      AppendAllDeviceList(sendDevice);
+      break;
+    case CAPTURE_DEVICE_PROBE:
+      break;
+    }
+}
+#+end_src
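+
+For orientation, here is roughly how the Java side is expected to
+drive this device. This is a hypothetical sketch inferred from the JNI
+stubs above -- the real =com.aurellem.send.AudioSend= wrapper class is
+not part of this patch, and the names here other than the native
+methods are illustrative:
+
+#+begin_src java
+import java.nio.ByteBuffer;
+
+public class SendExample {
+  public static ByteBuffer renderOneStep(long device) {
+    AudioSend.ninitDevice(device);   // adopt the current context as master
+    AudioSend.naddListener(device);  // create listener #1 (a slave context)
+    // 1024 samples of 16-bit stereo audio = 1024 * 2 * 2 = 4096 bytes
+    ByteBuffer buf = ByteBuffer.allocateDirect(4096);
+    AudioSend.nstep(device, 1024);                  // mix one step
+    AudioSend.ngetSamples(device, buf, 0, 1024, 1); // copy listener #1's sound
+    return buf;
+  }
+}
+#+end_src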
+
+#+srcname: ears
 #+begin_src clojure
-(ns body.ear)
+(ns cortex.hearing)
 (use 'cortex.world)
 (use 'cortex.import)
 (use 'clojure.contrib.def)
@@ -41,21 +633,7 @@
 (import javax.swing.ImageIcon)
 (import javax.swing.JOptionPane)
 (import java.awt.image.ImageObserver)
-#+end_src
 
-JMonkeyEngine3's audio system works as follows:
-first, an appropiate audio renderer is created during initialization
-and depending on the context. On my computer, this is the
-LwjglAudioRenderer.
-
-The LwjglAudioRenderer sets a few internal state variables depending
-on what capabilities the audio system has.
-
-may very well need to make my own AudioRenderer
-
-#+srcname: ear-body-1
-#+begin_src clojure :results silent
-(in-ns 'body.ear)
 
 (import 'com.jme3.capture.SoundProcessor)
 
@@ -72,7 +650,6 @@
       (.get audioSamples byte-array 0 numSamples)
       (continuation (vec byte-array)))))))
 
-
 (defn add-ear
   "add an ear to the world. The continuation function will be called
@@ -122,16 +699,19 @@
 
+* Example
+
 * COMMENT Code Generation
-#+begin_src clojure :tangle /home/r/cortex/src/body/ear.clj
-<<ear-header>>
-<<ear-body-1>>
+#+begin_src clojure :tangle ../../cortex/src/cortex/hearing.clj
+<<ears>>
 #+end_src
-
-#+begin_src clojure :tangle /home/r/cortex/src/test/hearing.clj
+#+begin_src clojure :tangle ../../cortex/src/test/hearing.clj
 <>
 #+end_src
+#+begin_src C :tangle ../Alc/backends/send.c
+<<send>>
+#+end_src
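+
+To regenerate =send.c= and the tangled Clojure files from this
+document, run =org-babel-tangle= (=C-c C-v t=) in this buffer; since
+the =#+BABEL= header enables =:noweb yes=, each =<<name>>= reference
+above is replaced during tangling by the source block with the
+corresponding =#+srcname=.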