======Capture Audio/Video to a File======

<note>A/V recording is still in development. It works for all of jMonkeyEngine's test cases. If you experience any problems or if something isn't clear, please let me know. -- bortreb</note>

So you've made your cool new JMonkeyEngine3 game and you want to
create a demo video to show off your hard work. Or maybe you want to
make a cutscene for your game using the physics and characters in the
game itself. Screen capturing is the most straightforward way to do
this, but it can slow down your game and produce low-quality video and
audio as a result. A better way is to record video and audio directly
from the game while it is running.

<note tip>Combine this method with jMonkeyEngine's [[:jme3:advanced:Cinematics]] feature to record high-quality game trailers!</note>

===== Simple Way =====

First off, if all you want is to record video at 30fps with no sound, then look
no further than jMonkeyEngine3's built-in ''VideoRecorderAppState''
class.

Add the following code to your ''simpleInitApp()'' method.

<code java>
stateManager.attach(new VideoRecorderAppState()); //start recording
</code>

The game will run slowly, but the recording will be high quality and
at normal speed. The video files will be stored in your user home
directory; if you want to save to a different file, specify it in the
''VideoRecorderAppState'' constructor. Recording starts when the state is
attached and ends when the application quits or the state is detached.
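
For example, to save the recording somewhere other than your home directory, pass a target file to the constructor. This is a minimal sketch; the path is just a placeholder, and it assumes the ''VideoRecorderAppState(File)'' constructor provided by your jMonkeyEngine3 version.

<code java>
// requires: import java.io.File;
File video = new File("/tmp/my-recording.avi");        // placeholder path -- choose your own
stateManager.attach(new VideoRecorderAppState(video)); // record to this file instead of the home directory
</code>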

That's all!

===== Advanced Way =====

If you want to record audio as well, record at different framerates,
or record from multiple viewpoints at once, then there's a full
solution for doing this already made for you here:

http://www.aurellem.com/releases/jmeCapture-latest.zip

http://www.aurellem.com/releases/jmeCapture-latest.tar.bz2

Download the archive in your preferred format, extract it,
add the jars to your project, and you are ready to go.

The javadoc is here:
http://www.aurellem.com/jmeCapture/docs/

To capture video and audio you use the
''com.aurellem.capture.Capture'' class, which has two methods,
''captureAudio()'' and ''captureVideo()'', and the
''com.aurellem.capture.IsoTimer'' class, which sets the audio and
video framerate.

The steps are as simple as:

<code java>
yourApp.setTimer(new IsoTimer(desiredFramesPerSecond));
</code>

This causes jMonkeyEngine to take as much time as it needs to fully
calculate every frame of the video and audio. You will see your game
speed up and slow down depending on how computationally demanding your
game is, but the final recorded audio and video will be perfectly
synchronized and will run at exactly the fps you specified.

<code java>
captureVideo(yourApp, targetVideoFile);
captureAudio(yourApp, targetAudioFile);
</code>

These will cause the app to record audio and video when it is run.
Audio and video will stop being recorded when the app stops. Your
audio will be recorded as a 44,100 Hz linear PCM wav file, while the
video will be recorded according to the following rules:

1.) (Preferred) If you supply an empty directory as the file, then
the video will be saved as a sequence of .png files, one file per
frame. The files start at 0000000.png and increment from there.
You can then combine the frames into your preferred
container/codec. If the directory is not empty, then writing
video frames to it will fail, and nothing will be written.

2.) If the filename ends in ".avi" then the frames will be encoded as
a RAW stream inside an AVI 1.0 container. The resulting file
will be quite large and you will probably want to re-encode it to
your preferred container/codec format. Be advised that some
video players cannot process AVI with a RAW stream, and that AVI
1.0 files generated by this method that exceed 2.0GB are invalid
according to the AVI 1.0 spec (but many programs can still deal
with them). Thanks to Werner Randelshofer for his excellent work
which made the AVI file writer option possible.

3.) Any non-directory file ending in anything other than ".avi" will
be processed through Xuggle. Xuggle provides the option to use
many codecs/containers, but you will have to install it on your
system yourself in order to use this option. Please visit
http://www.xuggle.com/ to learn how to do this.
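
For example, here is a minimal sketch of how these three rules map onto the ''File'' you pass to ''captureVideo()''. Pick exactly one of the calls; the paths are placeholders, and the ".mp4" case assumes you have installed Xuggle as described above.

<code java>
// 1.) empty directory -> a numbered sequence of .png frames (preferred)
Capture.captureVideo(yourApp, new File("/tmp/frames/"));

// 2.) ".avi" file -> RAW stream inside an AVI 1.0 container (large; re-encode afterwards)
Capture.captureVideo(yourApp, new File("/tmp/recording.avi"));

// 3.) any other extension -> encoded through Xuggle, if it is installed
Capture.captureVideo(yourApp, new File("/tmp/recording.mp4"));
</code>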

Note that you will not hear any sound if you choose to record sound to
a file.

==== Basic Example ====

Here is a complete example showing how to capture both audio and video
from one of jMonkeyEngine3's advanced demo applications.

<code java>
import java.io.File;
import java.io.IOException;

import jme3test.water.TestPostWater;

import com.aurellem.capture.Capture;
import com.aurellem.capture.IsoTimer;
import com.jme3.app.SimpleApplication;


/**
 * Demonstrates how to use basic Audio/Video capture with a
 * jMonkeyEngine application. You can use these techniques to make
 * high quality cutscenes or demo videos, even on very slow laptops.
 *
 * @author Robert McIntyre
 */

public class Basic {

    public static void main(String[] ignore) throws IOException{
        File video = File.createTempFile("JME-water-video", ".avi");
        File audio = File.createTempFile("JME-water-audio", ".wav");

        SimpleApplication app = new TestPostWater();
        app.setTimer(new IsoTimer(60));
        app.setShowSettings(false);

        Capture.captureVideo(app, video);
        Capture.captureAudio(app, audio);

        app.start();

        System.out.println(video.getCanonicalPath());
        System.out.println(audio.getCanonicalPath());
    }
}
</code>

==== How it works ====

A standard JME3 application that extends ''SimpleApplication'' or
''Application'' tries as hard as it can to keep in sync with
//user-time//. If a ball is rolling at 1 game-mile per game-hour in the
game, and you wait for one user-hour as measured by the clock on your
wall, then the ball should have traveled exactly one game-mile. In
order to keep in sync with the real world, the game throttles its physics
engine and graphics display. If the computations involved in running
the game are too intense, then the game will first skip frames, then
sacrifice physics accuracy. If there are particularly demanding
computations, then you may only get 1 fps, and the ball may tunnel
through the floor or obstacles due to inaccurate physics simulation,
but after the end of one user-hour, that ball will have traveled one
game-mile.

When we're recording video, we don't care if the game-time syncs with
user-time, but instead whether the time in the recorded video
(video-time) syncs with user-time. To continue the analogy, if we
recorded the ball rolling at 1 game-mile per game-hour and watched the
video later, we would want to see 30 fps video of the ball rolling at
1 video-mile per //user-hour//. It doesn't matter how much user-time it
took to simulate that hour of game-time to make the high-quality
recording.

The ''IsoTimer'' ignores real-time and always reports that the same amount
of time has passed every time it is called. That way, one can put code
to write each video/audio frame to a file without worrying about that
code itself slowing down the game to the point where the recording
would be useless.
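
To make the idea concrete, here is a minimal sketch of a fixed-step timer in the spirit of ''IsoTimer''. This is not the actual aurellem implementation, just an illustration of the principle: every call to ''update()'' advances time by exactly 1/fps seconds, no matter how long the frame really took to compute.

<code java>
/** Illustrative fixed-step timer: each update() pretends exactly 1/fps seconds have passed. */
public class FixedStepTimer {
    private final float framesPerSecond;
    private long ticks = 0;  // number of frames "elapsed" so far

    public FixedStepTimer(float framesPerSecond) {
        this.framesPerSecond = framesPerSecond;
    }

    /** Called once per rendered frame by the game loop. */
    public void update() { ticks++; }

    /** Always the same per-frame delta, so recorded frames are evenly spaced in video-time. */
    public float getTimePerFrame() { return 1.0f / framesPerSecond; }

    /** Total game-time elapsed, in seconds, independent of wall-clock time. */
    public float getTimeInSeconds() { return ticks / framesPerSecond; }
}
</code>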


==== Advanced Example ====

The package from aurellem.com was made for AI research and can do more
than just record a single stream of audio and video. You can use it
to:

1.) Create multiple independent listeners that each hear the world
from their own perspective.

2.) Process the sound data in any way you wish.

3.) Do the same for visual data.

Here is a more advanced example, which can also be found along with
other examples in the jmeCapture.jar file included in the
distribution.

<code java>
package com.aurellem.capture.examples;

import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.lang.reflect.Field;
import java.nio.ByteBuffer;

import javax.sound.sampled.AudioFormat;

import org.tritonus.share.sampled.FloatSampleTools;

import com.aurellem.capture.AurellemSystemDelegate;
import com.aurellem.capture.Capture;
import com.aurellem.capture.IsoTimer;
import com.aurellem.capture.audio.CompositeSoundProcessor;
import com.aurellem.capture.audio.MultiListener;
import com.aurellem.capture.audio.SoundProcessor;
import com.aurellem.capture.audio.WaveFileWriter;
import com.jme3.app.SimpleApplication;
import com.jme3.audio.AudioNode;
import com.jme3.audio.Listener;
import com.jme3.cinematic.MotionPath;
import com.jme3.cinematic.events.AbstractCinematicEvent;
import com.jme3.cinematic.events.MotionTrack;
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.math.FastMath;
import com.jme3.math.Quaternion;
import com.jme3.math.Vector3f;
import com.jme3.scene.Geometry;
import com.jme3.scene.Node;
import com.jme3.scene.shape.Box;
import com.jme3.scene.shape.Sphere;
import com.jme3.system.AppSettings;
import com.jme3.system.JmeSystem;

/**
 *
 * Demonstrates advanced use of the audio capture and recording
 * features. Multiple perspectives of the same scene are
 * simultaneously rendered to different sound files.
 *
 * A key limitation of the way multiple listeners are implemented is
 * that only 3D positioning effects are realized for listeners other
 * than the main LWJGL listener. This means that audio effects such
 * as environment settings will *not* be heard on any auxiliary
 * listeners, though sound attenuation will work correctly.
 *
 * Multiple listeners as realized here might be used to make AI
 * entities that can each hear the world from their own perspective.
 *
 * @author Robert McIntyre
 */

public class Advanced extends SimpleApplication {

    /**
     * You will see three grey cubes, a blue sphere, and a path which
     * circles each cube. The blue sphere is generating a constant
     * monotone sound as it moves along the track. Each cube is
     * listening for sound; when a cube hears sound whose intensity is
     * greater than a certain threshold, it changes its color from
     * grey to green.
     *
     * Each cube is also saving whatever it hears to a file. The
     * scene from the perspective of the viewer is also saved to a
     * video file. When you listen to each of the sound files
     * alongside the video, the sound will get louder when the sphere
     * approaches the cube that generated that sound file. This
     * shows that each listener is hearing the world from its own
     * perspective.
     *
     */
    public static void main(String[] args) {
        Advanced app = new Advanced();
        AppSettings settings = new AppSettings(true);
        settings.setAudioRenderer(AurellemSystemDelegate.SEND);
        JmeSystem.setSystemDelegate(new AurellemSystemDelegate());
        app.setSettings(settings);
        app.setShowSettings(false);
        app.setPauseOnLostFocus(false);

        try {
            Capture.captureVideo(app, File.createTempFile("advanced",".avi"));
            Capture.captureAudio(app, File.createTempFile("advanced", ".wav"));
        }
        catch (IOException e) {e.printStackTrace();}

        app.start();
    }


    private Geometry bell;
    private Geometry ear1;
    private Geometry ear2;
    private Geometry ear3;
    private AudioNode music;
    private MotionTrack motionControl;

    private Geometry makeEar(Node root, Vector3f position){
        Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
        Geometry ear = new Geometry("ear", new Box(1.0f, 1.0f, 1.0f));
        ear.setLocalTranslation(position);
        mat.setColor("Color", ColorRGBA.Green);
        ear.setMaterial(mat);
        root.attachChild(ear);
        return ear;
    }

    private Vector3f[] path = new Vector3f[]{
        // loop 1
        new Vector3f(0, 0, 0),
        new Vector3f(0, 0, -10),
        new Vector3f(-2, 0, -14),
        new Vector3f(-6, 0, -20),
        new Vector3f(0, 0, -26),
        new Vector3f(6, 0, -20),
        new Vector3f(0, 0, -14),
        new Vector3f(-6, 0, -20),
        new Vector3f(0, 0, -26),
        new Vector3f(6, 0, -20),
        // loop 2
        new Vector3f(5, 0, -5),
        new Vector3f(7, 0, 1.5f),
        new Vector3f(14, 0, 2),
        new Vector3f(20, 0, 6),
        new Vector3f(26, 0, 0),
        new Vector3f(20, 0, -6),
        new Vector3f(14, 0, 0),
        new Vector3f(20, 0, 6),
        new Vector3f(26, 0, 0),
        new Vector3f(20, 0, -6),
        new Vector3f(14, 0, 0),
        // loop 3
        new Vector3f(8, 0, 7.5f),
        new Vector3f(7, 0, 10.5f),
        new Vector3f(6, 0, 20),
        new Vector3f(0, 0, 26),
        new Vector3f(-6, 0, 20),
        new Vector3f(0, 0, 14),
        new Vector3f(6, 0, 20),
        new Vector3f(0, 0, 26),
        new Vector3f(-6, 0, 20),
        new Vector3f(0, 0, 14),
        // begin ellipse
        new Vector3f(16, 5, 20),
        new Vector3f(0, 0, 26),
        new Vector3f(-16, -10, 20),
        new Vector3f(0, 0, 14),
        new Vector3f(16, 20, 20),
        new Vector3f(0, 0, 26),
        new Vector3f(-10, -25, 10),
        new Vector3f(-10, 0, 0),
        // come at me!
        new Vector3f(-28.00242f, 48.005623f, -34.648228f),
        new Vector3f(0, 0 , -20),
    };

    private void createScene() {
        Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
        bell = new Geometry( "sound-emitter" , new Sphere(15,15,1));
        mat.setColor("Color", ColorRGBA.Blue);
        bell.setMaterial(mat);
        rootNode.attachChild(bell);

        ear1 = makeEar(rootNode, new Vector3f(0, 0 ,-20));
        ear2 = makeEar(rootNode, new Vector3f(0, 0 ,20));
        ear3 = makeEar(rootNode, new Vector3f(20, 0 ,0));

        MotionPath track = new MotionPath();

        for (Vector3f v : path){
            track.addWayPoint(v);
        }
        track.setCurveTension(0.80f);

        motionControl = new MotionTrack(bell,track);

        // for now, use reflection to change the timer...
        // motionControl.setTimer(new IsoTimer(60));
        try {
            Field timerField;
            timerField = AbstractCinematicEvent.class.getDeclaredField("timer");
            timerField.setAccessible(true);
            try {timerField.set(motionControl, new IsoTimer(60));}
            catch (IllegalArgumentException e) {e.printStackTrace();}
            catch (IllegalAccessException e) {e.printStackTrace();}
        }
        catch (SecurityException e) {e.printStackTrace();}
        catch (NoSuchFieldException e) {e.printStackTrace();}

        motionControl.setDirectionType(MotionTrack.Direction.PathAndRotation);
        motionControl.setRotation(new Quaternion().fromAngleNormalAxis(-FastMath.HALF_PI, Vector3f.UNIT_Y));
        motionControl.setInitialDuration(20f);
        motionControl.setSpeed(1f);

        track.enableDebugShape(assetManager, rootNode);
        positionCamera();
    }


    private void positionCamera(){
        this.cam.setLocation(new Vector3f(-28.00242f, 48.005623f, -34.648228f));
        this.cam.setRotation(new Quaternion(0.3359635f, 0.34280345f, -0.13281013f, 0.8671653f));
    }

    private void initAudio() {
        org.lwjgl.input.Mouse.setGrabbed(false);
        music = new AudioNode(assetManager, "Sound/Effects/Beep.ogg", false);

        rootNode.attachChild(music);
        audioRenderer.playSource(music);
        music.setPositional(true);
        music.setVolume(1f);
        music.setReverbEnabled(false);
        music.setDirectional(false);
        music.setMaxDistance(200.0f);
        music.setRefDistance(1f);
        //music.setRolloffFactor(1f);
        music.setLooping(false);
        audioRenderer.pauseSource(music);
    }

    public class Dancer implements SoundProcessor {
        Geometry entity;
        float scale = 2;
        public Dancer(Geometry entity){
            this.entity = entity;
        }

        /**
         * this method is irrelevant since there is no state to cleanup.
         */
        public void cleanup() {}


        /**
         * Respond to sound! This is the brain of an AI entity that
         * hears its surroundings and reacts to them.
         */
        public void process(ByteBuffer audioSamples, int numSamples, AudioFormat format) {
            audioSamples.clear();
            byte[] data = new byte[numSamples];
            float[] out = new float[numSamples];
            audioSamples.get(data);
            FloatSampleTools.byte2floatInterleaved(data, 0, out, 0,
                                                   numSamples/format.getFrameSize(), format);

            float max = Float.NEGATIVE_INFINITY;
            for (float f : out){if (f > max) max = f;}
            audioSamples.clear();

            if (max > 0.1){entity.getMaterial().setColor("Color", ColorRGBA.Green);}
            else {entity.getMaterial().setColor("Color", ColorRGBA.Gray);}
        }
    }

    private void prepareEar(Geometry ear, int n){
        if (this.audioRenderer instanceof MultiListener){
            MultiListener rf = (MultiListener)this.audioRenderer;

            Listener auxListener = new Listener();
            auxListener.setLocation(ear.getLocalTranslation());

            rf.addListener(auxListener);
            WaveFileWriter aux = null;

            try {aux = new WaveFileWriter(new File("/home/r/tmp/ear"+n+".wav"));}
            catch (FileNotFoundException e) {e.printStackTrace();}

            rf.registerSoundProcessor(auxListener,
                new CompositeSoundProcessor(new Dancer(ear), aux));
        }
    }


    public void simpleInitApp() {
        this.setTimer(new IsoTimer(60));
        initAudio();

        createScene();

        // give each ear its own output file (ear1.wav, ear2.wav, ear3.wav)
        prepareEar(ear1, 1);
        prepareEar(ear2, 2);
        prepareEar(ear3, 3);

        motionControl.play();
    }

    public void simpleUpdate(float tpf) {
        if (music.getStatus() != AudioNode.Status.Playing){
            music.play();
        }
        Vector3f loc = cam.getLocation();
        Quaternion rot = cam.getRotation();
        listener.setLocation(loc);
        listener.setRotation(rot);
        music.setLocalTranslation(bell.getLocalTranslation());
    }

}
</code>

{{http://www.youtube.com/v/oCEfK0yhDrY?.swf?400×333}}

===== More Information =====

This is the old page showing the first version of this idea:
http://aurellem.org/cortex/html/capture-video.html

All source code can be found here:

http://hg.bortreb.com/audio-send

http://hg.bortreb.com/jmeCapture

More information on the modifications to OpenAL to support multiple
listeners can be found here:

http://aurellem.org/audio-send/html/ear.html