Mercurial > audio-send
view org/cleanup-message.txt @ 18:1e201037f666
separating out the sections of send.c
author:  Robert McIntyre <rlm@mit.edu>
date:    Thu, 03 Nov 2011 13:32:27 -0700
parents: ed256a687dfe
My name is Robert McIntyre. I am seeking help packaging some changes I've made to open-al.

* tl;dr how do I distribute changes to open-al which involve adding a new device?

* Background / Motivation

I'm working on an AI simulation involving multiple listeners, where each listener is a separate AI entity. Since each AI entity can move independently, I needed to add multiple-listener functionality to open-al. Furthermore, my entire simulation allows time to dilate depending on how hard the entities are collectively "thinking," so that the entities can keep up with the simulation. Therefore, I needed a system that renders sound on demand instead of trying to render in real time as open-al does.

I don't need any of the more advanced effects, just 3D positioning, and I'm only using open-al from jMonkeyEngine3, which uses the LWJGL bindings; importantly, those bindings allow access to one and only one context.

Under these constraints, I made a new device which renders sound in small, user-defined increments. It must be explicitly told to render sound or it won't do anything. It maintains a separate "auxiliary context" for every additional listener, and syncs the sources from the LWJGL context whenever it is about to render samples. I've tested it and it works quite well for my purposes. So far, I've easily gotten 1,000 separate listeners to work in a single simulation.

The code is here:
http://aurellem.localhost/audio-send/html/send.html
No criticism is too harsh!

Note that the Java JNI bindings that are currently part of the device code would be moved to a separate file, the same way LWJGL does it. I left them there for now to show how the device might be used.

Although I made this for my own AI work, it's ideal for recording perfect audio from a video game to create trailers/demos, since the computer doesn't have to try to record the sound in real time.
This device could be used to record audio in any system that wraps open-al and only exposes one context, which is what many wrappers do.

* Actual Question

My question is about packaging --- how do you recommend I distribute my new device? I got it to work by grafting it onto open-al's primitive object system, but this requires quite a few changes to the main open-al source files, and as far as I can tell it requires me to recompile open-al against my new device.

I also don't want the user to be able to hide my device's presence using their ~/.alsoftrc file, since that gets in the way of easy recording when the system is wrapped several layers deep, and they've already implicitly requested my device anyway by using my code in the first place.

The options I have thought of so far are:

1.) Create my own C artifact, compiled against open-al, which somehow "hooks in" my device and forces open-al to use it to the exclusion of all other devices. This new artifact would have bindings for Java, etc. I don't know how to do this, since there doesn't seem to be any way to access the list of devices in Alc/ALc.c, for example. In order to add a new device to open-al I had to modify 5 separate files, documented here:

http://aurellem.localhost/audio-send/html/add-new-device.html

and there doesn't seem to be a way to get the same effect programmatically.

2.) Strip down open-al to a minimal version that only has my device, and select the right open-al library at a higher level, depending on whether the user wants to record sound or actually hear it. I don't like this because I can't easily benefit from improvements in the main open-al distribution. It would also require more significant modification to jMonkeyEngine3's logic that selects the appropriate C artifact at runtime.

3.) Get this new device added to open-al, and provide a Java wrapper for it in a separate artifact.
The problem here is that this device does not have the same semantics as the other devices --- it must be told to render sound, it doesn't support multiple user-created contexts, and it exposes extra functions for retrieving the rendered sounds. It also might be too "niche" for open-al proper.

4.) Maybe abandon the device metaphor and use something better suited to my problem that /can/ be done as in (1)?

I'm sure someone here knows enough about open-al's devices to give me a better solution than these 4! All help would be most appreciated.

sincerely,
--Robert McIntyre