#+title: Simulated Sense of Hearing
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Simulating multiple listeners and the sense of hearing in jMonkeyEngine3
#+keywords: simulated hearing, openal, clojure, jMonkeyEngine3, LWJGL, AI
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org


* Hearing

At the end of this post I will have simulated ears that work the same
way as the simulated eyes in the last post. I will be able to place
any number of ear-nodes in a blender file, and they will bind to the
closest physical object and follow it as it moves around. Each ear
will provide access to the sound data it picks up between every frame.

Hearing is one of the more difficult senses to simulate, because there
is less support for obtaining the actual sound data that is processed
by jMonkeyEngine3. There is no "split-screen" support for rendering
sound from different points of view, and there is no way to directly
access the rendered sound data.

** Brief Description of jMonkeyEngine's Sound System

jMonkeyEngine's sound system works as follows:

- jMonkeyEngine uses the =AppSettings= for the particular application
  to determine what sort of =AudioRenderer= should be used.
- Although some support is provided for multiple audio rendering
  backends, jMonkeyEngine at the time of this writing will either
  pick no =AudioRenderer= at all, or the =LwjglAudioRenderer=.
- jMonkeyEngine tries to figure out what sort of system you're
  running and extracts the appropriate native libraries.
- The =LwjglAudioRenderer= uses the [[http://lwjgl.org/][=LWJGL=]] (LightWeight Java Game
  Library) bindings to interface with a C library called [[http://kcat.strangesoft.net/openal.html][=OpenAL=]].
- =OpenAL= renders the 3D sound and feeds the rendered sound directly
  to any of various sound output devices with which it knows how to
  communicate.

A consequence of this is that there's no way to access the actual
sound data produced by =OpenAL=. Even worse, =OpenAL= only supports
one /listener/ (it renders sound data from only one perspective),
which normally isn't a problem for games, but becomes a problem when
trying to make multiple AI creatures that can each hear the world from
a different perspective.

To make many AI creatures in jMonkeyEngine that can each hear the
world from their own perspective, or to make a single creature with
many ears, it is necessary to go all the way back to =OpenAL= and
implement support for simulated hearing there.

* Extending =OpenAL=
** =OpenAL= Devices

=OpenAL= goes to great lengths to support many different systems, all
with different sound capabilities and interfaces. It accomplishes this
difficult task by providing code for many different sound backends in
pseudo-objects called /Devices/. There's a device for the Linux Open
Sound System and the Advanced Linux Sound Architecture, there's one
for Direct Sound on Windows, and there's even one for Solaris. =OpenAL=
solves the problem of platform independence by providing all these
Devices.

Wrapper libraries such as LWJGL are free to examine the system on
which they are running and then select an appropriate device for that
system.

There are also a few "special" devices that don't interface with any
particular system. These include the Null Device, which doesn't do
anything, and the Wave Device, which writes whatever sound it receives
to a file, if everything has been set up correctly when configuring
=OpenAL=.

Actual mixing of the sound data happens in the Devices, and they are
the only point in the sound rendering process where this data is
available.

Therefore, in order to support multiple listeners, and get the sound
data in a form that the AIs can use, it is necessary to create a new
Device which supports this feature.

** The Send Device
Adding a device to OpenAL is rather tricky -- there are five separate
files in the =OpenAL= source tree that must be modified to do so. I've
documented this process [[../../audio-send/html/add-new-device.html][here]] for anyone who is interested.

Again, my objectives are:

- Support Multiple Listeners from jMonkeyEngine3
- Get access to the rendered sound data for further processing from
  clojure.

I named it the "Multiple Audio Send" Device, or =Send= Device for
short, since it sends audio data back to the calling application like
an Aux-Send cable on a mixing board.

Onward to the actual Device!

** =send.c=

** Header
#+name: send-header
#+begin_src C
#include "config.h"
#include <stdlib.h>
#include "alMain.h"
#include "AL/al.h"
#include "AL/alc.h"
#include "alSource.h"
#include <jni.h>

//////////////////// Summary

struct send_data;
struct context_data;

static void addContext(ALCdevice *, ALCcontext *);
static void syncContexts(ALCcontext *master, ALCcontext *slave);
static void syncSources(ALsource *master, ALsource *slave,
                        ALCcontext *masterCtx, ALCcontext *slaveCtx);

static void syncSourcei(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSourcef(ALuint master, ALuint slave,
                        ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);
static void syncSource3f(ALuint master, ALuint slave,
                         ALCcontext *masterCtx, ALCcontext *ctx2, ALenum param);

static void swapInContext(ALCdevice *, struct context_data *);
static void saveContext(ALCdevice *, struct context_data *);
static void limitContext(ALCdevice *, ALCcontext *);
static void unLimitContext(ALCdevice *);

static void init(ALCdevice *);
static void renderData(ALCdevice *, int samples);

#define UNUSED(x) (void)(x)
#+end_src

The main idea behind the Send device is to take advantage of the fact
that LWJGL only manages one /context/ when using OpenAL. A /context/
is like a container that holds samples and keeps track of where the
listener is. In order to support multiple listeners, the Send device
identifies the LWJGL context as the master context, and creates any
number of slave contexts to represent additional listeners. Every
time the device renders sound, it synchronizes every source from the
master LWJGL context to the slave contexts. Then, it renders each
context separately, using a different listener for each one. The
rendered sound is made available via JNI to jMonkeyEngine.

To recap, the process is:
- Set the LWJGL context as "master" in the =init()= method.
- Create any number of additional contexts via =addContext()=.
- At every call to =renderData()=, sync the master context with the
  slave contexts using =syncContexts()=.
- =syncContexts()= calls =syncSources()= to sync all the sources
  which are in the master context.
- =limitContext()= and =unLimitContext()= make it possible to render
  only one context at a time.

** Necessary State
#+name: send-state
#+begin_src C
//////////////////// State

typedef struct context_data {
  ALfloat ClickRemoval[MAXCHANNELS];
  ALfloat PendingClicks[MAXCHANNELS];
  ALvoid *renderBuffer;
  ALCcontext *ctx;
} context_data;

typedef struct send_data {
  ALuint size;
  context_data **contexts;
  ALuint numContexts;
  ALuint maxContexts;
} send_data;
#+end_src

Switching between contexts is not the normal operation of a Device,
and one of the problems with doing so is that a Device normally keeps
around a few pieces of state such as the =ClickRemoval= array above
which will become corrupted if the contexts are not rendered in
parallel. The solution is to create a copy of this normally global
device state for each context, and copy it back and forth into and out
of the actual device state whenever a context is rendered.

** Synchronization Macros
#+name: sync-macros
#+begin_src C
//////////////////// Context Creation / Synchronization

#define _MAKE_SYNC(NAME, INIT_EXPR, GET_EXPR, SET_EXPR)   \
  void NAME (ALuint sourceID1, ALuint sourceID2,          \
             ALCcontext *ctx1, ALCcontext *ctx2,          \
             ALenum param){                               \
    INIT_EXPR;                                             \
    ALCcontext *current = alcGetCurrentContext();          \
    alcMakeContextCurrent(ctx1);                           \
    GET_EXPR;                                              \
    alcMakeContextCurrent(ctx2);                           \
    SET_EXPR;                                              \
    alcMakeContextCurrent(current);                        \
  }

#define MAKE_SYNC(NAME, TYPE, GET, SET)                    \
  _MAKE_SYNC(NAME,                                         \
             TYPE value,                                   \
             GET(sourceID1, param, &value),                \
             SET(sourceID2, param, value))

#define MAKE_SYNC3(NAME, TYPE, GET, SET)                   \
  _MAKE_SYNC(NAME,                                         \
             TYPE value1; TYPE value2; TYPE value3;,       \
             GET(sourceID1, param, &value1, &value2, &value3),  \
             SET(sourceID2, param, value1, value2, value3))

MAKE_SYNC( syncSourcei,  ALint,   alGetSourcei,  alSourcei);
MAKE_SYNC( syncSourcef,  ALfloat, alGetSourcef,  alSourcef);
MAKE_SYNC3(syncSource3i, ALint,   alGetSource3i, alSource3i);
MAKE_SYNC3(syncSource3f, ALfloat, alGetSource3f, alSource3f);

#+end_src

Setting the state of an =OpenAL= source is done with the =alSourcei=,
=alSourcef=, =alSource3i=, and =alSource3f= functions. In order to
completely synchronize two sources, it is necessary to use all of
them. These macros help to condense the otherwise repetitive
synchronization code involving these similar low-level =OpenAL= functions.

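To make the pattern concrete, =MAKE_SYNC(syncSourcef, ALfloat,
alGetSourcef, alSourcef)= expands to roughly the following function
(whitespace added for readability):

#+begin_src C
/* Approximate expansion of MAKE_SYNC(syncSourcef, ALfloat,
   alGetSourcef, alSourcef).  It reads one float property of a source
   while its context is current, writes that property to the
   corresponding source in the other context, and finally restores
   whichever context was current before. */
void syncSourcef (ALuint sourceID1, ALuint sourceID2,
                  ALCcontext *ctx1, ALCcontext *ctx2,
                  ALenum param){
  ALfloat value;                              /* INIT_EXPR */
  ALCcontext *current = alcGetCurrentContext();
  alcMakeContextCurrent(ctx1);
  alGetSourcef(sourceID1, param, &value);     /* GET_EXPR  */
  alcMakeContextCurrent(ctx2);
  alSourcef(sourceID2, param, value);         /* SET_EXPR  */
  alcMakeContextCurrent(current);
}
#+end_src
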
** Source Synchronization
#+name: sync-sources
#+begin_src C
void syncSources(ALsource *masterSource, ALsource *slaveSource,
                 ALCcontext *masterCtx, ALCcontext *slaveCtx){
  ALuint master = masterSource->source;
  ALuint slave = slaveSource->source;
  ALCcontext *current = alcGetCurrentContext();

  syncSourcef(master,slave,masterCtx,slaveCtx,AL_PITCH);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_DISTANCE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_ROLLOFF_FACTOR);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_REFERENCE_DISTANCE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MIN_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_MAX_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_GAIN);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_INNER_ANGLE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_CONE_OUTER_ANGLE);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SEC_OFFSET);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_SAMPLE_OFFSET);
  syncSourcef(master,slave,masterCtx,slaveCtx,AL_BYTE_OFFSET);

  syncSource3f(master,slave,masterCtx,slaveCtx,AL_POSITION);
  syncSource3f(master,slave,masterCtx,slaveCtx,AL_VELOCITY);
  syncSource3f(master,slave,masterCtx,slaveCtx,AL_DIRECTION);

  syncSourcei(master,slave,masterCtx,slaveCtx,AL_SOURCE_RELATIVE);
  syncSourcei(master,slave,masterCtx,slaveCtx,AL_LOOPING);

  alcMakeContextCurrent(masterCtx);
  ALint source_type;
  alGetSourcei(master, AL_SOURCE_TYPE, &source_type);

  // Only static sources are currently synchronized!
  if (AL_STATIC == source_type){
    ALint master_buffer;
    ALint slave_buffer;
    alGetSourcei(master, AL_BUFFER, &master_buffer);
    alcMakeContextCurrent(slaveCtx);
    alGetSourcei(slave, AL_BUFFER, &slave_buffer);
    if (master_buffer != slave_buffer){
      alSourcei(slave, AL_BUFFER, master_buffer);
    }
  }

  // Synchronize the state of the two sources.
  alcMakeContextCurrent(masterCtx);
  ALint masterState;
  ALint slaveState;

  alGetSourcei(master, AL_SOURCE_STATE, &masterState);
  alcMakeContextCurrent(slaveCtx);
  alGetSourcei(slave, AL_SOURCE_STATE, &slaveState);

  if (masterState != slaveState){
    switch (masterState){
    case AL_INITIAL : alSourceRewind(slave); break;
    case AL_PLAYING : alSourcePlay(slave);   break;
    case AL_PAUSED  : alSourcePause(slave);  break;
    case AL_STOPPED : alSourceStop(slave);   break;
    }
  }
  // Restore whatever context was previously active.
  alcMakeContextCurrent(current);
}
#+end_src
This function is long because it has to exhaustively go through all the
possible state that a source can have and make sure that it is the
same between the master and slave sources. I'd like to take this
moment to salute the [[http://connect.creativelabs.com/openal/Documentation/Forms/AllItems.aspx][=OpenAL= Reference Manual]], which provides a very
good description of =OpenAL='s internals.

** Context Synchronization
#+name: sync-contexts
#+begin_src C
void syncContexts(ALCcontext *master, ALCcontext *slave){
  /* If there aren't sufficient sources in slave to mirror
     the sources in master, create them. */
  ALCcontext *current = alcGetCurrentContext();

  UIntMap *masterSourceMap = &(master->SourceMap);
  UIntMap *slaveSourceMap = &(slave->SourceMap);
  ALuint numMasterSources = masterSourceMap->size;
  ALuint numSlaveSources = slaveSourceMap->size;

  alcMakeContextCurrent(slave);
  if (numSlaveSources < numMasterSources){
    ALuint numMissingSources = numMasterSources - numSlaveSources;
    ALuint newSources[numMissingSources];
    alGenSources(numMissingSources, newSources);
  }

  /* Now, slave is guaranteed to have at least as many sources
     as master. Sync each source from master to the corresponding
     source in slave. */
  int i;
  for(i = 0; i < masterSourceMap->size; i++){
    syncSources((ALsource*)masterSourceMap->array[i].value,
                (ALsource*)slaveSourceMap->array[i].value,
                master, slave);
  }
  alcMakeContextCurrent(current);
}
#+end_src

Most of the hard work in Context Synchronization is done in
=syncSources()=. The only thing that =syncContexts()= has to worry
about is automatically creating new sources whenever a slave context
does not have the same number of sources as the master context.

** Context Creation
#+name: context-creation
#+begin_src C
static void addContext(ALCdevice *Device, ALCcontext *context){
  send_data *data = (send_data*)Device->ExtraData;
  // expand array if necessary
  if (data->numContexts >= data->maxContexts){
    ALuint newMaxContexts = data->maxContexts*2 + 1;
    data->contexts = realloc(data->contexts, newMaxContexts*sizeof(context_data));
    data->maxContexts = newMaxContexts;
  }
  // create context_data and add it to the main array
  context_data *ctxData;
  ctxData = (context_data*)calloc(1, sizeof(*ctxData));
  ctxData->renderBuffer =
    malloc(BytesFromDevFmt(Device->FmtType) *
           Device->NumChan * Device->UpdateSize);
  ctxData->ctx = context;

  data->contexts[data->numContexts] = ctxData;
  data->numContexts++;
}
#+end_src

Here, the slave context is created, and its data is stored in the
device-wide =ExtraData= structure. The =renderBuffer= that is created
here is where the rendered sound samples for this slave context will
eventually go.

** Context Switching
#+name: context-switching
#+begin_src C
//////////////////// Context Switching

/* A device brings along with it two pieces of state
 * which have to be swapped in and out with each context.
 */
static void swapInContext(ALCdevice *Device, context_data *ctxData){
  memcpy(Device->ClickRemoval, ctxData->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
  memcpy(Device->PendingClicks, ctxData->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
}

static void saveContext(ALCdevice *Device, context_data *ctxData){
  memcpy(ctxData->ClickRemoval, Device->ClickRemoval, sizeof(ALfloat)*MAXCHANNELS);
  memcpy(ctxData->PendingClicks, Device->PendingClicks, sizeof(ALfloat)*MAXCHANNELS);
}

static ALCcontext **currentContext;
static ALuint currentNumContext;

/* By default, all contexts are rendered at once for each call to aluMixData.
 * This function uses the internals of the ALCdevice struct to temporarily
 * cause aluMixData to only render the chosen context.
 */
static void limitContext(ALCdevice *Device, ALCcontext *ctx){
  currentContext = Device->Contexts;
  currentNumContext = Device->NumContexts;
  Device->Contexts = &ctx;
  Device->NumContexts = 1;
}

static void unLimitContext(ALCdevice *Device){
  Device->Contexts = currentContext;
  Device->NumContexts = currentNumContext;
}
#+end_src

=OpenAL= normally renders all contexts in parallel, outputting the
whole result to the buffer. It does this by iterating over the
Device->Contexts array and rendering each context to the buffer in
turn. By temporarily setting Device->NumContexts to 1 and adjusting
the Device's context list to put the desired context-to-be-rendered
into position 0, we can trick =OpenAL= into rendering each context
separately from all the others.

** Main Device Loop
#+name: main-loop
#+begin_src C
//////////////////// Main Device Loop

/* Establish the LWJGL context as the master context, which will
 * be synchronized to all the slave contexts
 */
static void init(ALCdevice *Device){
  ALCcontext *masterContext = alcGetCurrentContext();
  addContext(Device, masterContext);
}

static void renderData(ALCdevice *Device, int samples){
  if(!Device->Connected){return;}
  send_data *data = (send_data*)Device->ExtraData;
  ALCcontext *current = alcGetCurrentContext();

  ALuint i;
  for (i = 1; i < data->numContexts; i++){
    syncContexts(data->contexts[0]->ctx , data->contexts[i]->ctx);
  }

  if ((ALuint) samples > Device->UpdateSize){
    printf("exceeding internal buffer size; dropping samples\n");
    printf("requested %d; available %d\n", samples, Device->UpdateSize);
    samples = (int) Device->UpdateSize;
  }

  for (i = 0; i < data->numContexts; i++){
    context_data *ctxData = data->contexts[i];
    ALCcontext *ctx = ctxData->ctx;
    alcMakeContextCurrent(ctx);
    limitContext(Device, ctx);
    swapInContext(Device, ctxData);
    aluMixData(Device, ctxData->renderBuffer, samples);
    saveContext(Device, ctxData);
    unLimitContext(Device);
  }
  alcMakeContextCurrent(current);
}
#+end_src

The main loop synchronizes the master LWJGL context with all the slave
contexts, then iterates through each context, rendering just that
context to its audio-sample storage buffer.

** JNI Methods

At this point, we have the ability to create multiple listeners by
using the master/slave context trick, and the rendered audio data is
waiting patiently in internal buffers, one for each listener. We need
a way to transport this information to Java, and also a way to drive
this device from Java. The following JNI interface code is inspired
by the LWJGL JNI interface to =OpenAL=.

*** Stepping the Device
#+name: jni-step
#+begin_src C
//////////////////// JNI Methods

#include "com_aurellem_send_AudioSend.h"

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nstep
 * Signature: (JI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nstep
(JNIEnv *env, jclass clazz, jlong device, jint samples){
  UNUSED(env);UNUSED(clazz);
  renderData((ALCdevice*)((intptr_t)device), samples);
}
#+end_src
This device, unlike most of the other devices in =OpenAL=, does not
render sound unless asked. This enables the system to slow down or
speed up depending on the needs of the AIs who are using it to
listen. If the device tried to render samples in real-time, a
complicated AI whose mind takes 100 seconds of computer time to
simulate 1 second of AI-time would miss almost all of the sound in
its environment.


*** Device->Java Data Transport
#+name: jni-get-samples
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetSamples
 * Signature: (JLjava/nio/ByteBuffer;III)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ngetSamples
(JNIEnv *env, jclass clazz, jlong device, jobject buffer, jint position,
 jint samples, jint n){
  UNUSED(clazz);

  ALvoid *buffer_address =
    ((ALbyte *)(((char*)(*env)->GetDirectBufferAddress(env, buffer)) + position));
  ALCdevice *recorder = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)recorder->ExtraData;
  if ((ALuint)n >= data->numContexts){return;}
  memcpy(buffer_address, data->contexts[n]->renderBuffer,
         BytesFromDevFmt(recorder->FmtType) * recorder->NumChan * samples);
}
#+end_src

This is the transport layer between C and Java that will eventually
allow us to access rendered sound data from clojure.

*** Listener Management

=addListener=, =setNthListenerf=, and =setNthListener3f= are
necessary to change the properties of any listener other than the
master one, since only the listener of the current active context is
affected by the normal =OpenAL= listener calls.
#+name: listener-manage
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    naddListener
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_naddListener
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(env); UNUSED(clazz);
  //printf("creating new context via naddListener\n");
  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  ALCcontext *new = alcCreateContext(Device, NULL);
  addContext(Device, new);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListener3f
 * Signature: (IFFFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListener3f
(JNIEnv *env, jclass clazz, jint param,
 jfloat v1, jfloat v2, jfloat v3, jlong device, jint contextNum){
  UNUSED(env);UNUSED(clazz);

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)Device->ExtraData;

  ALCcontext *current = alcGetCurrentContext();
  if ((ALuint)contextNum >= data->numContexts){return;}
  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
  alListener3f(param, v1, v2, v3);
  alcMakeContextCurrent(current);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    nsetNthListenerf
 * Signature: (IFJI)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_nsetNthListenerf
(JNIEnv *env, jclass clazz, jint param, jfloat v1, jlong device,
 jint contextNum){

  UNUSED(env);UNUSED(clazz);

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  send_data *data = (send_data*)Device->ExtraData;

  ALCcontext *current = alcGetCurrentContext();
  if ((ALuint)contextNum >= data->numContexts){return;}
  alcMakeContextCurrent(data->contexts[contextNum]->ctx);
  alListenerf(param, v1);
  alcMakeContextCurrent(current);
}
#+end_src

*** Initialization
=initDevice= is called from the Java side after LWJGL has created its
context, and before any calls to =addListener=. It establishes the
LWJGL context as the master context.

=getAudioFormat= is a convenience function that uses JNI to build up a
=javax.sound.sampled.AudioFormat= object from data in the Device. This
way, there is no ambiguity about what the bits created by =step= and
returned by =getSamples= mean.
#+name: jni-init
#+begin_src C
/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ninitDevice
 * Signature: (J)V
 */
JNIEXPORT void JNICALL Java_com_aurellem_send_AudioSend_ninitDevice
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(env);UNUSED(clazz);
  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  init(Device);
}

/*
 * Class:     com_aurellem_send_AudioSend
 * Method:    ngetAudioFormat
 * Signature: (J)Ljavax/sound/sampled/AudioFormat;
 */
JNIEXPORT jobject JNICALL Java_com_aurellem_send_AudioSend_ngetAudioFormat
(JNIEnv *env, jclass clazz, jlong device){
  UNUSED(clazz);
  jclass AudioFormatClass =
    (*env)->FindClass(env, "javax/sound/sampled/AudioFormat");
  jmethodID AudioFormatConstructor =
    (*env)->GetMethodID(env, AudioFormatClass, "<init>", "(FIIZZ)V");

  ALCdevice *Device = (ALCdevice*) ((intptr_t)device);
  int isSigned;
  switch (Device->FmtType)
    {
    case DevFmtUByte:
    case DevFmtUShort: isSigned = 0; break;
    default : isSigned = 1;
    }
  float frequency = Device->Frequency;
  int bitsPerFrame = (8 * BytesFromDevFmt(Device->FmtType));
  int channels = Device->NumChan;
  jobject format = (*env)->
    NewObject(
              env,AudioFormatClass,AudioFormatConstructor,
              frequency,
              bitsPerFrame,
              channels,
              isSigned,
              0);
  return format;
}
#+end_src

** Boring Device Management Stuff / Memory Cleanup
This code is more-or-less copied verbatim from the other =OpenAL=
Devices. It's the basis for =OpenAL='s primitive object system.
#+name: device-init
#+begin_src C
//////////////////// Device Initialization / Management

static const ALCchar sendDevice[] = "Multiple Audio Send";

static ALCboolean send_open_playback(ALCdevice *device,
                                     const ALCchar *deviceName)
{
  send_data *data;
  // stop any buffering for stdout, so that I can
  // see the printf statements in my terminal immediately
  setbuf(stdout, NULL);

  if(!deviceName)
    deviceName = sendDevice;
  else if(strcmp(deviceName, sendDevice) != 0)
    return ALC_FALSE;
  data = (send_data*)calloc(1, sizeof(*data));
  device->szDeviceName = strdup(deviceName);
  device->ExtraData = data;
  return ALC_TRUE;
}

static void send_close_playback(ALCdevice *device)
{
  send_data *data = (send_data*)device->ExtraData;
  alcMakeContextCurrent(NULL);
  ALuint i;
  // Destroy all slave contexts. LWJGL will take care of
  // its own context.
  for (i = 1; i < data->numContexts; i++){
    context_data *ctxData = data->contexts[i];
    alcDestroyContext(ctxData->ctx);
    free(ctxData->renderBuffer);
    free(ctxData);
  }
  free(data);
  device->ExtraData = NULL;
}

static ALCboolean send_reset_playback(ALCdevice *device)
{
  SetDefaultWFXChannelOrder(device);
  return ALC_TRUE;
}

static void send_stop_playback(ALCdevice *Device){
  UNUSED(Device);
}

static const BackendFuncs send_funcs = {
  send_open_playback,
  send_close_playback,
  send_reset_playback,
  send_stop_playback,
  NULL,
  NULL, /* These would be filled with functions to     */
  NULL, /* handle capturing audio if we were into that */
  NULL, /* sort of thing...                             */
  NULL,
  NULL
};

ALCboolean alc_send_init(BackendFuncs *func_list){
  *func_list = send_funcs;
  return ALC_TRUE;
}

void alc_send_deinit(void){}

void alc_send_probe(enum DevProbe type)
{
  switch(type)
    {
    case DEVICE_PROBE:
      AppendDeviceList(sendDevice);
      break;
    case ALL_DEVICE_PROBE:
      AppendAllDeviceList(sendDevice);
      break;
    case CAPTURE_DEVICE_PROBE:
      break;
    }
}
#+end_src

* The Java interface, =AudioSend=

The Java interface to the Send Device follows naturally from the JNI
definitions. The only thing here of note is the =deviceID=. This is
available from LWJGL, but the only way to get it is with reflection.
Unfortunately, there is no other way to control the Send device than
to obtain a pointer to it.

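As a rough orientation before the full source (included below), the
intended call sequence looks something like this sketch. The exact
method signatures, the =AudioSend= constructor, and the name of the
pointer field on LWJGL's =ALCdevice= are assumptions made for
illustration only; =AudioSend.java= itself is the authoritative
version.

#+begin_src java
import java.lang.reflect.Field;
import java.nio.ByteBuffer;
import javax.sound.sampled.AudioFormat;
import com.aurellem.send.AudioSend;

public class SendSketch {

    // Dig the native device pointer out of LWJGL via reflection.
    // The field name "device" is an assumption; only the general
    // technique (reading a private field) is the point here.
    static long deviceID() throws Exception {
        Object alcDevice = org.lwjgl.openal.AL.getDevice();
        Field pointer = alcDevice.getClass().getDeclaredField("device");
        pointer.setAccessible(true);
        return ((Number) pointer.get(alcDevice)).longValue();
    }

    public static void main(String[] args) throws Exception {
        AudioSend send = new AudioSend(deviceID()); // hypothetical constructor
        send.initDevice();             // LWJGL's context becomes the master
        send.addListener();            // create one slave listener (index 1)
        AudioFormat format = send.getAudioFormat();

        int samplesPerFrame = (int) (format.getSampleRate() / 60);
        ByteBuffer out = ByteBuffer.allocateDirect(
            samplesPerFrame * format.getFrameSize());

        send.step(samplesPerFrame);               // render 1/60 s of audio
        send.getSamples(out, samplesPerFrame, 1); // copy listener 1's sound
    }
}
#+end_src
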
#+include: "../../audio-send/java/src/com/aurellem/send/AudioSend.java" src java

* The Java Audio Renderer, =AudioSendRenderer=

#+include: "../../jmeCapture/src/com/aurellem/capture/audio/AudioSendRenderer.java" src java

The =AudioSendRenderer= is a modified version of the
=LwjglAudioRenderer= which implements the =MultiListener= interface to
provide access to and creation of more than one =Listener= object.

** MultiListener.java

#+include: "../../jmeCapture/src/com/aurellem/capture/audio/MultiListener.java" src java

** SoundProcessors are like SceneProcessors

A =SoundProcessor= is analogous to a =SceneProcessor=. Every frame, the
=SoundProcessor= registered with a given =Listener= receives the
rendered sound data and can do whatever processing it wants with it.

#+include: "../../jmeCapture/src/com/aurellem/capture/audio/SoundProcessor.java" src java
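
As a quick illustration of how this interface is meant to be used, a
minimal =SoundProcessor= might look like the following sketch. The
method names and parameter order are assumed from the way the clojure
proxy below invokes them, and I assume these are the interface's only
two methods; the included =SoundProcessor.java= above is the
authoritative definition.

#+begin_src java
import java.nio.ByteBuffer;
import javax.sound.sampled.AudioFormat;
import com.aurellem.capture.audio.SoundProcessor;

/* Sketch of a SoundProcessor that just reports how much sound it
 * received each frame.  It assumes the interface exposes
 * process(ByteBuffer, int, AudioFormat) and cleanup(), as used by the
 * clojure proxy in hearing-pipeline below. */
public class FrameCounter implements SoundProcessor {
    public void process(ByteBuffer audioSamples, int numSamples,
                        AudioFormat format) {
        int frames = numSamples / format.getFrameSize();
        System.out.println("heard " + frames + " frames this update");
    }
    public void cleanup() {}
}
#+end_src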

* Finally, Ears in clojure!

Now that the =C= and =Java= infrastructure is complete, the clojure
hearing abstraction is simple and closely parallels the [[./vision.org][vision]]
abstraction.

** Hearing Pipeline

All sound rendering is done on the CPU, so =hearing-pipeline= is
much less complicated than =vision-pipeline=. The bytes available in
the ByteBuffer obtained from the =send= Device have different meanings
depending upon the particular hardware of your system. That is why
the =AudioFormat= object is necessary to provide the meaning that the
raw bytes lack. =byteBuffer->pulse-vector= uses the excellent
conversion facilities from [[http://www.tritonus.org/ ][tritonus]] ([[http://tritonus.sourceforge.net/apidoc/org/tritonus/share/sampled/FloatSampleTools.html#byte2floatInterleaved%2528byte%5B%5D,%2520int,%2520float%5B%5D,%2520int,%2520int,%2520javax.sound.sampled.AudioFormat%2529][javadoc]]) to generate a clojure vector of
floats which represent the linear PCM encoded waveform of the
sound. With linear PCM (pulse code modulation), -1.0 represents maximum
rarefaction of the air while 1.0 represents maximum compression of the
air at a given instant.

#+name: hearing-pipeline
#+begin_src clojure
(in-ns 'cortex.hearing)

(defn hearing-pipeline
  "Creates a SoundProcessor which wraps a sound processing
  continuation function. The continuation is a function that takes
  [#^ByteBuffer b #^Integer numSamples #^AudioFormat af], each of which
  has already been appropriately sized."
  [continuation]
  (proxy [SoundProcessor] []
    (cleanup [])
    (process
      [#^ByteBuffer audioSamples numSamples #^AudioFormat audioFormat]
      (continuation audioSamples numSamples audioFormat))))

(defn byteBuffer->pulse-vector
  "Extract the sound samples from the byteBuffer as a PCM encoded
  waveform with values ranging from -1.0 to 1.0 into a vector of
  floats."
  [#^ByteBuffer audioSamples numSamples #^AudioFormat audioFormat]
  (let [num-floats (/ numSamples (.getFrameSize audioFormat))
        bytes (byte-array numSamples)
        floats (float-array num-floats)]
    (.get audioSamples bytes 0 numSamples)
    (FloatSampleTools/byte2floatInterleaved
     bytes 0 floats 0 num-floats audioFormat)
    (vec floats)))
#+end_src

** Physical Ears

Together, these three functions define how ears found in a specially
prepared blender file will be translated to =Listener= objects in a
simulation. =ears= extracts all the children of the top-level node
named "ears". =add-ear!= and =update-listener-velocity!= use
=bind-sense= to bind a =Listener= object located at the initial
position of an "ear" node to the closest physical object in the
creature. That =Listener= will stay in the same orientation to the
object with which it is bound, just as the camera does in the [[http://aurellem.localhost/cortex/html/sense.html#sec-4-1][sense binding
demonstration]]. Since =OpenAL= simulates the Doppler effect for moving
listeners, =update-listener-velocity!= ensures that this velocity
information is always up-to-date.

#+name: hearing-ears
#+begin_src clojure
(def
  ^{:doc "Return the children of the creature's \"ears\" node."
    :arglists '([creature])}
  ears
  (sense-nodes "ears"))


(defn update-listener-velocity!
  "Update the listener's velocity every update loop."
  [#^Spatial obj #^Listener lis]
  (let [old-position (atom (.getLocation lis))]
    (.addControl
     obj
     (proxy [AbstractControl] []
       (controlUpdate [tpf]
         (let [new-position (.getLocation lis)]
           (.setVelocity
            lis
            (.mult (.subtract new-position @old-position)
                   (float (/ tpf))))
           (reset! old-position new-position)))
       (controlRender [_ _])))))

(defn add-ear!
  "Create a Listener centered on the current position of 'ear
  which follows the closest physical node in 'creature and
  sends sound data to 'continuation."
  [#^Application world #^Node creature #^Spatial ear continuation]
  (let [target (closest-node creature ear)
        lis (Listener.)
        audio-renderer (.getAudioRenderer world)
        sp (hearing-pipeline continuation)]
    (.setLocation lis (.getWorldTranslation ear))
    (.setRotation lis (.getWorldRotation ear))
    (bind-sense target lis)
    (update-listener-velocity! target lis)
    (.addListener audio-renderer lis)
    (.registerSoundProcessor audio-renderer lis sp)))
#+end_src

** Ear Creation

#+name: hearing-kernel
#+begin_src clojure
(defn hearing-kernel
  "Returns a function which returns auditory sensory data when called
  inside a running simulation."
  [#^Node creature #^Spatial ear]
  (let [hearing-data (atom [])
        register-listener!
        (runonce
         (fn [#^Application world]
           (add-ear!
            world creature ear
            (comp #(reset! hearing-data %)
                  byteBuffer->pulse-vector))))]
    (fn [#^Application world]
      (register-listener! world)
      (let [data @hearing-data
            topology
            (vec (map #(vector % 0) (range 0 (count data))))]
        [topology data]))))

(defn hearing!
  "Endow the creature in a particular world with the sense of
  hearing. Will return a sequence of functions, one for each ear,
  which when called will return the auditory data from that ear."
  [#^Node creature]
  (for [ear (ears creature)]
    (hearing-kernel creature ear)))
#+end_src

Each function returned by =hearing-kernel= will register a new
=Listener= with the simulation the first time it is called. Each time
it is called, the hearing-function will return a vector of linear PCM
encoded sound data that was heard since the last frame. The size of
this vector is of course determined by the overall framerate of the
game. With a constant framerate of 60 frames per second and a sampling
frequency of 44,100 samples per second, the vector will have exactly
735 elements.

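This number is just the sampling rate divided by the framerate, as a
quick sanity check shows:

#+begin_src clojure
;; samples of audio per frame of simulation = sample-rate / frame-rate
(/ 44100 60)          ; => 735
;; each frame of hearing data therefore covers 1/60th of a second
(float (/ 735 44100)) ; => 0.016666668
#+end_src
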
** Visualizing Hearing

This is a simple visualization function which displays the waveform
reported by the simulated sense of hearing. It converts the values
reported in the vector returned by the hearing function from the range
[-1.0, 1.0] to the range [0, 255], converts them to integers, and
displays each number as a greyscale pixel.

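For example, the conversion used below sends a sample of -1.0 to 0, a
silent sample of 0.0 to the middle gray value 127, and a sample of 1.0
to 255:

#+begin_src clojure
;; the same conversion view-hearing uses, applied to a few samples
(map #(rem (int (* 255 (/ (+ 1 %) 2))) 256) [-1.0 0.0 0.5 1.0])
;; => (0 127 191 255)
#+end_src
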
#+name: hearing-display
#+begin_src clojure
(in-ns 'cortex.hearing)

(defn view-hearing
  "Creates a function which accepts a list of auditory data and
  displays each element of the list to the screen as an image."
  []
  (view-sense
   (fn [[coords sensor-data]]
     (let [pixel-data
           (vec
            (map
             #(rem (int (* 255 (/ (+ 1 %) 2))) 256)
             sensor-data))
           height 50
           image (BufferedImage. (max 1 (count coords)) height
                                 BufferedImage/TYPE_INT_RGB)]
       (dorun
        (for [x (range (count coords))]
          (dorun
           (for [y (range height)]
             (let [raw-sensor (pixel-data x)]
               (.setRGB image x y (gray raw-sensor)))))))
       image))))
#+end_src

* Testing Hearing
** Advanced Java Example

I wrote a test case in Java that demonstrates the use of the Java
components of this hearing system. It is part of a larger Java library
to capture perfect audio from jMonkeyEngine. Some of the clojure
constructs above are partially reiterated in the Java source file. But
first, the video! As far as I know, this is the first instance of
multiple simulated listeners in a virtual environment using OpenAL.

#+begin_html
<div class="figure">
<center>
<video controls="controls" width="500">
  <source src="../video/java-hearing-test.ogg" type="video/ogg"
          preload="none" poster="../images/aurellem-1280x480.png" />
</video>
<br> <a href="http://www.youtube.com/watch?v=oCEfK0yhDrY"> YouTube </a>
</center>
<p>The blue sphere is emitting a constant sound. Each gray box is
listening for sound, and will change color from gray to green if it
detects sound which is louder than a certain threshold. As the blue
sphere travels along the path, it excites each of the cubes in turn.</p>
</div>
#+end_html

#+include: "../../jmeCapture/src/com/aurellem/capture/examples/Advanced.java" src java

Here is a small clojure program to drive the java program and make it
available as part of my test suite.

#+name: test-hearing-1
#+begin_src clojure
(in-ns 'cortex.test.hearing)

(defn test-java-hearing
  "Testing hearing:
   You should see a blue sphere flying around several
   cubes. As the sphere approaches each cube, it turns
   green."
  []
  (doto (com.aurellem.capture.examples.Advanced.)
    (.setSettings
     (doto (AppSettings. true)
       (.setAudioRenderer "Send")))
    (.setShowSettings false)
    (.setPauseOnLostFocus false)))
#+end_src

** Adding Hearing to the Worm

To the worm, I add a new node called "ears" with one child which
represents the worm's single ear.

#+attr_html: width=755
#+caption: The Worm with a newly added node describing an ear.
[[../images/worm-with-ear.png]]

The node highlighted in yellow is the top-level "ears" node. Its
child, highlighted in orange, represents the single ear the creature
has. The ear is located right above the curved part of the worm's
lower hemispherical region, opposite the eye.

The other empty nodes represent the worm's single joint and eye and are
described in [[./body.org][body]] and [[./vision.org][vision]].

#+name: test-hearing-2
#+begin_src clojure
(in-ns 'cortex.test.hearing)

(defn test-worm-hearing
  "Testing hearing:
   You will see the worm fall onto a table. There is a long
   horizontal bar which shows the waveform of whatever the worm is
   hearing. When you play a sound, the bar should display a waveform.

   Keys:
   <enter> : play sound
         l : play hymn"
  ([] (test-worm-hearing false))
  ([record?]
     (let [the-worm (doto (worm) (body!))
           hearing (hearing! the-worm)
           hearing-display (view-hearing)

           tone (AudioNode. (asset-manager)
                            "Sounds/pure.wav" false)

           hymn (AudioNode. (asset-manager)
                            "Sounds/ear-and-eye.wav" false)]
       (world
        (nodify [the-worm (floor)])
        (merge standard-debug-controls
               {"key-return"
                (fn [_ value]
                  (if value (.play tone)))
                "key-l"
                (fn [_ value]
                  (if value (.play hymn)))})
        (fn [world]
          (light-up-everything world)
          (let [timer (IsoTimer. 60)]
            (.setTimer world timer)
            (display-dilated-time world timer))
          (if record?
            (do
              (com.aurellem.capture.Capture/captureVideo
               world
               (File. "/home/r/proj/cortex/render/worm-audio/frames"))
              (com.aurellem.capture.Capture/captureAudio
               world
               (File. "/home/r/proj/cortex/render/worm-audio/audio.wav")))))

        (fn [world tpf]
          (hearing-display
           (map #(% world) hearing)
           (if record?
             (File. "/home/r/proj/cortex/render/worm-audio/hearing-data"))))))))
#+end_src

#+results: test-hearing-2
: #'cortex.test.hearing/test-worm-hearing

In this test, I load the worm with its newly formed ear and let it
hear sounds. The sound the worm is hearing is localized to the origin
of the world, and you can see that as the worm moves farther away from
the origin when it is hit by balls, it hears the sound less intensely.

The sound you hear in the video is from the worm's perspective. Notice
how the pure tone becomes fainter and the visual display of the
auditory data becomes less pronounced as the worm falls farther away
from the source of the sound.

#+begin_html
<div class="figure">
<center>
<video controls="controls" width="600">
  <source src="../video/worm-hearing.ogg" type="video/ogg"
          preload="none" poster="../images/aurellem-1280x480.png" />
</video>
<br> <a href="http://youtu.be/KLUtV1TNksI"> YouTube </a>
</center>
<p>The worm can now hear the sound pulses produced from the
hymn. Notice the strikingly different pattern that human speech
makes compared to the instruments. Once the worm is pushed off the
floor, the sound it hears is attenuated, and the display of the
sound it hears becomes fainter. This shows the 3D localization of
sound in this world.</p>
</div>
#+end_html

*** Creating the Ear Video
#+name: magick-3
#+begin_src clojure
(ns cortex.video.magick3
  (:import java.io.File)
  (:use clojure.java.shell))

(defn images [path]
  (sort (rest (file-seq (File. path)))))

(def base "/home/r/proj/cortex/render/worm-audio/")

(defn pics [file]
  (images (str base file)))

(defn combine-images []
  (let [main-view (pics "frames")
        hearing (pics "hearing-data")
        background (repeat 9001 (File. (str base "background.png")))
        targets (map
                 #(File. (str base "out/" (format "%07d.png" %)))
                 (range 0 (count main-view)))]
    (dorun
     (pmap
      (comp
       (fn [[background main-view hearing target]]
         (println target)
         (sh "convert"
             background
             main-view "-geometry" "+66+21" "-composite"
             hearing "-geometry" "+21+526" "-composite"
             target))
       (fn [& args] (map #(.getCanonicalPath %) args)))
      background main-view hearing targets))))
#+end_src

#+begin_src sh :results silent
cd /home/r/proj/cortex/render/worm-audio
ffmpeg -r 60 -i out/%07d.png -i audio.wav \
       -b:a 128k -b:v 9001k \
       -c:a libvorbis -c:v libtheora -g 60 worm-hearing.ogg
#+end_src

* Headers

#+name: hearing-header
#+begin_src clojure
(ns cortex.hearing
  "Simulate the sense of hearing in jMonkeyEngine3. Enables multiple
  listeners at different positions in the same world. Automatically
  reads ear-nodes from specially prepared blender files and
  instantiates them in the world as simulated ears."
  {:author "Robert McIntyre"}
  (:use (cortex world util sense))
  (:import java.nio.ByteBuffer)
  (:import java.awt.image.BufferedImage)
  (:import org.tritonus.share.sampled.FloatSampleTools)
  (:import (com.aurellem.capture.audio
            SoundProcessor AudioSendRenderer))
  (:import javax.sound.sampled.AudioFormat)
  (:import (com.jme3.scene Spatial Node))
  (:import com.jme3.audio.Listener)
  (:import com.jme3.app.Application)
  (:import com.jme3.scene.control.AbstractControl))
#+end_src

#+name: test-header
#+begin_src clojure
(ns cortex.test.hearing
  (:use (cortex world util hearing body))
  (:use cortex.test.body)
  (:import (com.jme3.audio AudioNode Listener))
  (:import java.io.File)
  (:import com.jme3.scene.Node
           com.jme3.system.AppSettings
           com.jme3.math.Vector3f)
  (:import (com.aurellem.capture Capture IsoTimer RatchetTimer)))
#+end_src

#+results: test-header
: com.aurellem.capture.RatchetTimer

* Source Listing
- [[../src/cortex/hearing.clj][cortex.hearing]]
- [[../src/cortex/test/hearing.clj][cortex.test.hearing]]
#+html: <ul> <li> <a href="../org/hearing.org">This org file</a> </li> </ul>
- [[http://hg.bortreb.com ][source-repository]]

* Next
The worm can see and hear, but it can't feel the world or
itself. Next post, I'll give the worm a [[./touch.org][sense of touch]].



* COMMENT Code Generation

#+begin_src clojure :tangle ../src/cortex/hearing.clj
<<hearing-header>>
<<hearing-pipeline>>
<<hearing-ears>>
<<hearing-kernel>>
<<hearing-display>>
#+end_src

#+begin_src clojure :tangle ../src/cortex/test/hearing.clj
<<test-header>>
<<test-hearing-1>>
<<test-hearing-2>>
#+end_src

#+begin_src clojure :tangle ../src/cortex/video/magick3.clj
<<magick-3>>
#+end_src

#+begin_src C :tangle ../../audio-send/Alc/backends/send.c
<<send-header>>
<<send-state>>
<<sync-macros>>
<<sync-sources>>
<<sync-contexts>>
<<context-creation>>
<<context-switching>>
<<main-loop>>
<<jni-step>>
<<jni-get-samples>>
<<listener-manage>>
<<jni-init>>
<<device-init>>
#+end_src
