annotate org/cleanup-message.txt @ 9:a988ea53d982

added description of JNI stuff to README
author Robert McIntyre <rlm@mit.edu>
date Thu, 27 Oct 2011 02:47:28 -0700
parents ed256a687dfe
children
My name is Robert McIntyre. I am seeking help packaging some changes
I've made to OpenAL.

* tl;dr: how do I distribute changes to OpenAL which involve adding a
new device?

* Background / Motivation

I'm working on an AI simulation involving multiple listeners, where
each listener is a separate AI entity. Since each AI entity can move
independently, I needed to add multiple-listener functionality to
OpenAL. Furthermore, my entire simulation allows time to dilate
depending on how hard the entities are collectively "thinking," so
that the entities can keep up with the simulation. I therefore
needed a system that renders sound on demand instead of trying to
render in real time as OpenAL does.

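To make the contrast with real-time mixing concrete, the device's contract can be modeled as a step-driven renderer that produces samples only when explicitly asked, so the audio clock advances exactly as far as the simulation tells it to. This is an illustrative mock, not the actual device code; all names here are made up:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of a step-driven renderer: nothing is produced
 * until the caller asks for a block of samples, so audio time only
 * advances on demand, in user-defined increments. */
typedef struct {
    long samples_rendered;   /* total samples produced so far */
    int  sample_rate;        /* e.g. 44100 Hz */
} StepDevice;

/* Render exactly n samples into out and return the device's time in
 * samples after the call. A real device would mix its sources here;
 * this sketch just writes silence. */
long step_device_render(StepDevice *dev, short *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = 0;
    dev->samples_rendered += (long)n;
    return dev->samples_rendered;
}
```

The point is that two calls asking for 512 and then 256 samples leave the device at exactly 768 samples of simulated time, regardless of how long the calls took on the wall clock.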
I don't need any of the more advanced effects, just 3D positioning,
and I'm only using OpenAL through jMonkeyEngine3, which uses the
LWJGL bindings; importantly, those bindings allow access to one and
only one context.

Under these constraints, I made a new device which renders sound in
small, user-defined increments. It must be explicitly told to render
sound or it won't do anything. It maintains a separate "auxiliary
context" for every additional listener, and syncs the sources from the
LWJGL context whenever it is about to render samples. I've tested it
and it works quite well for my purposes. So far, I've gotten 1,000
separate listeners to work in a single simulation easily.

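The auxiliary-context bookkeeping amounts to copying the master context's source state into each listener's private context just before mixing, so every listener hears the same sources from its own position. A self-contained sketch of that idea (the struct names are hypothetical, and a real context holds far more state than one coordinate per source):

```c
#include <assert.h>

#define MAX_LISTENERS 4
#define MAX_SOURCES   8

/* Toy stand-in for an OpenAL context: just source positions here. */
typedef struct {
    float source_x[MAX_SOURCES];
    int   nsources;
} Context;

/* The device holds the one context the LWJGL wrapper exposes, plus
 * one private "auxiliary context" per additional listener. */
typedef struct {
    Context master;
    Context aux[MAX_LISTENERS];
    int     nlisteners;
} SendDevice;

/* Sync every auxiliary context with the master immediately before
 * rendering, so all listeners see identical source state. */
void sync_contexts(SendDevice *dev)
{
    for (int i = 0; i < dev->nlisteners; i++)
        dev->aux[i] = dev->master;   /* struct copy of source state */
}
```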
The code is here:
http://aurellem.localhost/audio-send/html/send.html
No criticism is too harsh!

Note that the Java JNI bindings that are part of the device code right
now would be moved to a separate file, the same way that LWJGL does
it. I left them there for now to show how the device might be used.

Although I made this for my own AI work, it's ideal for recording
perfect audio from a video game to create trailers/demos, since the
computer doesn't have to record the sound in real time. This
device could be used to record audio in any system that wraps OpenAL
and exposes only one context, which is what many wrappers do.


* Actual Question

My question is about packaging --- how do you recommend I distribute
my new device? I got it to work by grafting it onto OpenAL's
primitive object system, but this requires quite a few changes to the
main OpenAL source files, and as far as I can tell requires me to
recompile OpenAL against my new device.

I also don't want the user to be able to hide my device's presence
using their ~/.alsoftrc file, since that gets in the way of easy
recording when the system is wrapped several layers deep, and they've
already implicitly requested my device anyway by using my code in the
first place.

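For reference, if I remember alsoftrc's ini-style format correctly, a user can exclude a backend with a single line in the drivers list ("send" is the hypothetical name of my backend here, not something in the stock config):

```ini
# ~/.alsoftrc -- illustrative only; "send" is a made-up backend name
[general]
drivers = -send,
```

This is exactly the kind of silent override I want my device to be immune to.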
The options I have thought of so far are:

1.) Create my own C artifact, compiled against OpenAL, which somehow
"hooks in" my device and forces OpenAL to use it to the exclusion of
all other devices. This new artifact would have bindings for Java,
etc. I don't know how to do this, since there doesn't seem to be any
way to access the list of devices in Alc/ALc.c, for example. In
order to add a new device to OpenAL I had to modify 5 separate files,
documented here:

http://aurellem.localhost/audio-send/html/add-new-device.html

and there doesn't seem to be a way to get the same effect
programmatically.

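As I understand it, the root of the problem with (1) is that the backend list is a static, file-local table inside Alc/ALc.c, roughly of this shape (simplified from memory, so the field names and entries may be off), which is why there is no public call for appending an entry at runtime:

```c
#include <assert.h>
#include <string.h>

/* Simplified sketch of the kind of table OpenAL Soft keeps in
 * Alc/ALc.c -- file-local, fixed at compile time. */
typedef struct {
    const char *name;
    void *(*open_playback)(const char *devname);
} BackendEntry;

static const BackendEntry backend_list[] = {
    { "alsa", 0 },
    { "oss",  0 },
    /* { "send", open_send_device },  <- adding mine means editing
     *                                   this file and recompiling */
};
```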
2.) Strip down OpenAL to a minimal version that only has my device,
and deal with selecting the right OpenAL library at a higher level,
depending on whether the user wants to record sound or actually hear
it. I don't like this because I can't easily benefit from
improvements in the main OpenAL distribution. It would also involve
more significant modification to jMonkeyEngine3's logic for selecting
the appropriate C artifact at runtime.

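The higher-level selection in (2) would boil down to something like the following; the library names are made up for illustration, and the real decision would live in jMonkeyEngine3's natives-loading logic rather than in a free function like this:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical selector for option (2): pick which OpenAL build to
 * load depending on whether the caller wants on-demand recording or
 * normal real-time playback. Both names are illustrative. */
const char *choose_openal_library(int want_recording)
{
    return want_recording ? "libopenal-send.so"   /* stripped-down build */
                          : "libopenal.so";       /* stock distribution */
}
```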
3.) Get this new device added to OpenAL, and provide a Java wrapper
for it in a separate artifact. The problem here is that this device
does not have the same semantics as the other devices --- it must be
told to render sound, doesn't support multiple user-created contexts,
and it exposes extra functions for retrieving the rendered samples.
It also might be too "niche" for OpenAL proper.

4.) Maybe abandon the device metaphor and use something better suited
to my problem that /can/ be done as in (1)?

I'm sure someone here knows enough about OpenAL's devices to give me
a better solution than these four! All help would be most appreciated.

Sincerely,
--Robert McIntyre