audio-send: cleanup-message.txt @ 0:f9476ff7637e
"initial forking of open-al to create multiple listeners"

author: Robert McIntyre <rlm@mit.edu>
date:   Tue, 25 Oct 2011 13:02:31 -0700

My name is Robert McIntyre. I am seeking help packaging some changes
I've made to open-al.

* tl;dr how do I distribute changes to open-al which involve adding a
new device?

* Background / Motivation

I'm working on an AI simulation involving multiple listeners, where
each listener is a separate AI entity. Since each AI entity can move
independently, I needed to add multiple listener functionality to
open-al. Furthermore, my entire simulation allows time to dilate
depending on how hard the entities are collectively "thinking," so
that the entities can keep up with the simulation. Therefore, I
needed to have a system that renders sound on-demand instead of
trying to render in real time as open-al does.

I don't need any of the more advanced effects, just 3D positioning,
and I'm only using open-al through jMonkeyEngine3, which uses the
LWJGL bindings; importantly, those bindings allow access to one and
only one context.

Under these constraints, I made a new device which renders sound in
small, user-defined increments. It must be explicitly told to render
sound or it won't do anything. It maintains a separate "auxiliary
context" for every additional listener, and syncs the sources from the
LWJGL context whenever it is about to render samples. I've tested it
and it works quite well for my purposes. So far, I've gotten 1,000
separate listeners to work in a single simulation easily.

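Roughly, the intended usage looks like the sketch below. The alcSend*
names are simplified placeholders for illustration (they are not the
actual entry points in my code); the other alc* calls are just the
standard ALC API:

    /* sketch only: the alcSend* names are illustrative placeholders */
    #include <AL/alc.h>

    /* hypothetical entry points, declared here only for illustration */
    ALCcontext *alcSendCreateAuxiliaryContext(ALCdevice *dev);
    void        alcSendRenderSamples(ALCdevice *dev, void *buf, int frames);

    int main(void)
    {
        ALCdevice  *dev    = alcOpenDevice("Send Device");  /* hypothetical name */
        ALCcontext *master = alcCreateContext(dev, NULL);   /* the one LWJGL-visible context */
        alcMakeContextCurrent(master);

        /* hypothetical extension call: one auxiliary context per extra listener */
        ALCcontext *second = alcSendCreateAuxiliaryContext(dev);
        (void)second;

        /* nothing is rendered until asked: each call syncs sources from the
           master context, then renders one user-defined increment of samples */
        short buf[4096];
        alcSendRenderSamples(dev, buf, 1024 /* frames */);

        alcMakeContextCurrent(NULL);
        alcDestroyContext(master);
        alcCloseDevice(dev);
        return 0;
    }

The rendered samples come back through extra retrieval functions on the
device rather than going to a sound card, which is what makes the
time-dilated, on-demand model work.
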
The code is here:
http://aurellem.localhost/audio-send/html/send.html
No criticism is too harsh!

Note that the java JNI bindings that are part of the device code right
now would be moved to a separate file in the same way that LWJGL does
it. I left them there for now to show how the device might be used.

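The glue in question is only a thin JNI shim over those same calls,
along these lines (placeholder names again, not the actual classes or
functions in my tree):

    /* illustrative JNI shim: the Java class/method and alcSend* names
       are placeholders, not the actual bindings in the repository     */
    #include <jni.h>
    #include <stdint.h>
    #include <AL/alc.h>

    void alcSendRenderSamples(ALCdevice *dev, void *buf, int frames); /* placeholder */

    JNIEXPORT void JNICALL
    Java_com_example_send_AudioSend_renderSamples
        (JNIEnv *env, jclass clazz, jlong device, jobject buffer, jint frames)
    {
        /* hand the direct ByteBuffer's memory straight to the device */
        void *out = (*env)->GetDirectBufferAddress(env, buffer);
        alcSendRenderSamples((ALCdevice *)(intptr_t)device, out, frames);
    }
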
Although I made this for my own AI work, it's ideal for recording
perfect audio from a video game to create trailers/demos, since the
computer doesn't have to try to record the sound in real time. This
device could be used to record audio in any system that wraps open-al
and only exposes one context, which is what many wrappers do.


* Actual Question

My question is about packaging --- how do you recommend I distribute
my new device? I got it to work by grafting it onto open-al's
primitive object system, but this requires quite a few changes to the
main open-al source files, and as far as I can tell it requires me to
recompile open-al against my new device.

I also don't want the user to be able to hide my device's presence
using their ~/.alsoftrc file, since that gets in the way of easy
recording when the system is wrapped several layers deep, and they've
already implicitly requested my device anyway by using my code in the
first place.

The options I have thought of so far are:

1.) Create my own C-artifact, compiled against open-al, which somehow
"hooks in" my device to open-al and forces open-al to use it to the
exclusion of all other devices. This new artifact would have bindings
for java, etc. I don't know how to do this, since there doesn't seem
to be any way to access the list of devices in Alc/ALc.c, for example.
In order to add a new device to open-al I had to modify 5 separate
files, documented here:

http://aurellem.localhost/audio-send/html/add-new-device.html

and there doesn't seem to be a way to get the same effect
programmatically (see the sketch after this list).

2.) Strip down open-al to a minimal version that only has my device
and deal with selecting the right open-al library at a higher level,
depending on whether the user wants to record sound or actually hear
it. I don't like this because I can't easily benefit from
improvements in the main open-al distribution. It would also involve
more significant modification to jMonkeyEngine3's logic that selects
the appropriate C-artifact at runtime.

3.) Get this new device added to open-al, and provide a java wrapper
for it in a separate artifact. The problem here is that this device
does not have the same semantics as the other devices --- it must be
told to render sound, doesn't support multiple user-created contexts,
and it exposes extra functions for retrieving the rendered sounds. It
also might be too "niche" for open-al proper.

4.) Maybe abandon the device metaphor and use something better suited
to my problem, something that /can/ be done as in (1)?

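For concreteness, the device list I mean in (1) is the static
BackendList table in Alc/ALc.c. Paraphrasing from the tree I forked
(names approximate, not a quotation), it looks roughly like this, and
it is fixed at compile time with nothing exported for appending to it:

    /* approximate shape of the table in Alc/ALc.c -- paraphrased, so
       the field and function names here are an approximation          */
    static struct BackendInfo BackendList[] = {
    #ifdef HAVE_ALSA
        { "alsa",  alc_alsa_init,  alc_alsa_deinit,  alc_alsa_probe,  EmptyFuncs },
    #endif
    #ifdef HAVE_PULSEAUDIO
        { "pulse", alc_pulse_init, alc_pulse_deinit, alc_pulse_probe, EmptyFuncs },
    #endif
        /* ...more built-in backends... */
        { "wave",  alc_wave_init,  alc_wave_deinit,  alc_wave_probe,  EmptyFuncs },

        { NULL, NULL, NULL, NULL, EmptyFuncs }
    };

Adding a device means adding a row here, plus the related changes in
the other files documented at the link above, and none of that can be
done from outside the library.
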
I'm sure someone here knows enough about open-al's devices to give me
a better solution than these 4! All help would be most appreciated.

sincerely,
--Robert McIntyre