====== Capture Audio/Video to a File ======

So you've made your cool new jMonkeyEngine3 game and you want to
create a demo video to show off your hard work. Or maybe you want to
make a cutscene for your game using the physics and characters in the
game itself. Screen capturing is the most straightforward way to do
this, but it can slow down your game and produce low-quality video and
audio as a result. A better way is to record video and audio directly
from the game while it is running.


===== Simple Way =====

If you just want to record video at 30fps with no sound, look no
further than jMonkeyEngine3's built-in ''VideoRecorderAppState''
class.

Add the following code to your ''simpleInitApp()'' method:


stateManager.attach(new VideoRecorderAppState()); //start recording


The game will run slowly, but the recording will be high-quality and
play back at normal speed. The video file is stored in your user home
directory; if you want to save it somewhere else, pass the target file
to the ''VideoRecorderAppState'' constructor. Recording starts when
the state is attached and ends when the application quits or the state
is detached.
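
For example, to save the recording to a specific file instead of the
home directory, you can attach the state like this. This is only a
minimal sketch; it assumes the constructor that takes a
''java.io.File'', and the output path is hypothetical:


stateManager.attach(new VideoRecorderAppState(new File("my-demo.avi"))); //record to a specific file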

That's all!

===== Advanced Way =====

If you want to record audio as well, record at different framerates,
or record from multiple viewpoints at once, then there is a complete
solution already available here:

http://www.aurellem.com/releases/jmeCapture-latest.zip
http://www.aurellem.com/releases/jmeCapture-latest.tar.bz2

Download the archive in your preferred format, extract it, add the
jars to your project, and you are ready to go.

The javadoc is here:
http://www.aurellem.com/jmeCapture/docs/

To capture video and audio you use the
''com.aurellem.capture.Capture'' class, which has two methods,
''captureAudio'' and ''captureVideo'', and the
''com.aurellem.capture.IsoTimer'' class, which sets the audio and
video framerate.

The steps are as simple as:


yourApp.setTimer(new IsoTimer(desiredFramesPerSecond));

This causes jMonkeyEngine to take as much time as it needs to fully
calculate every frame of the video and audio. You will see your game
speed up and slow down depending on how computationally demanding your
game is, but the final recorded audio and video will be perfectly
synchronized and will run at exactly the framerate you specified.


Capture.captureVideo(yourApp, targetVideoFile);
Capture.captureAudio(yourApp, targetAudioFile);

These will cause the app to record audio and video when it is run.
Audio and video will stop being recorded when the app stops. Your
audio will be recorded as a 44,100 Hz linear PCM wav file, while the
video will be recorded according to the following rules:

1.) (Preferred) If you supply an empty directory as the file, then
    the video will be saved as a sequence of .png files, one file per
    frame. The files start at 0000000.png and increment from there.
    You can then combine the frames into your preferred
    container/codec (see the sketch after these rules). If the
    directory is not empty, then writing video frames to it will
    fail, and nothing will be written.

2.) If the filename ends in ".avi" then the frames will be encoded as
    a RAW stream inside an AVI 1.0 container. The resulting file
    will be quite large and you will probably want to re-encode it to
    your preferred container/codec format. Be advised that some
    video players cannot process AVI with a RAW stream, and that AVI
    1.0 files generated by this method that exceed 2.0GB are invalid
    according to the AVI 1.0 spec (but many programs can still deal
    with them). Thanks to Werner Randelshofer for his excellent work,
    which made the AVI file writer option possible.

3.) Any non-directory file ending in anything other than ".avi" will
    be processed through Xuggle. Xuggle provides the option to use
    many codecs/containers, but you will have to install it on your
    system yourself in order to use this option. Please visit
    http://www.xuggle.com/ to learn how to do this.
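
For instance, to use the preferred PNG-sequence option from rule 1,
point ''captureVideo'' at an empty directory. This is only a sketch of
the rules above; the directory and file names are hypothetical:


File frameDir = new File("frames");
frameDir.mkdirs(); // create an empty directory; a non-empty one makes frame writing fail
Capture.captureVideo(yourApp, frameDir);               // writes 0000000.png, 0000001.png, ...
Capture.captureAudio(yourApp, new File("audio.wav"));  // 44,100 Hz linear PCM wav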

Note that you will not hear any sound if you choose to record sound to
a file.

==== Basic Example ====

Here is a complete example showing how to capture both audio and video
from one of jMonkeyEngine3's advanced demo applications.


import java.io.File;
import java.io.IOException;

import jme3test.water.TestPostWater;

import com.aurellem.capture.Capture;
import com.aurellem.capture.IsoTimer;
import com.jme3.app.SimpleApplication;


/**
 * Demonstrates how to use basic Audio/Video capture with a
 * jMonkeyEngine application. You can use these techniques to make
 * high quality cutscenes or demo videos, even on very slow laptops.
 *
 * @author Robert McIntyre
 */

public class Basic {

    public static void main(String[] ignore) throws IOException {
        File video = File.createTempFile("JME-water-video", ".avi");
        File audio = File.createTempFile("JME-water-audio", ".wav");

        SimpleApplication app = new TestPostWater();
        app.setTimer(new IsoTimer(60));
        app.setShowSettings(false);

        Capture.captureVideo(app, video);
        Capture.captureAudio(app, audio);

        app.start();

        System.out.println(video.getCanonicalPath());
        System.out.println(audio.getCanonicalPath());
    }
}


==== How it works ====

A standard JME3 application that extends ''SimpleApplication'' or
''Application'' tries as hard as it can to keep in sync with
//user-time//. If a ball is rolling at 1 game-mile per game-hour in the
game, and you wait for one user-hour as measured by the clock on your
wall, then the ball should have traveled exactly one game-mile. In
order to keep sync with the real world, the game throttles its physics
engine and graphics display. If the computations involved in running
the game are too intense, then the game will first skip frames, then
sacrifice physics accuracy. If there are particularly demanding
computations, then you may only get 1 fps, and the ball may tunnel
through the floor or obstacles due to inaccurate physics simulation,
but after the end of one user-hour, that ball will have traveled one
game-mile.

When we're recording video, we don't care if the game-time syncs with
user-time, but instead whether the time in the recorded video
(video-time) syncs with user-time. To continue the analogy, if we
recorded the ball rolling at 1 game-mile per game-hour and watched the
video later, we would want to see 30 fps video of the ball rolling at
1 video-mile per //user-hour//. It doesn't matter how much user-time it
took to simulate that hour of game-time to make the high-quality
recording.

The ''IsoTimer'' ignores real-time and always reports that the same
amount of time has passed every time it is called. That way, one can
put code to write each video/audio frame to a file without worrying
about that code itself slowing down the game to the point where the
recording would be useless.
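
If you are curious what such a timer looks like, here is a minimal
sketch of the idea. It is not the actual ''IsoTimer'' source; it
simply extends jMonkeyEngine3's abstract ''com.jme3.system.Timer'' and
reports a fixed time step on every call (the class name is made up):


import com.jme3.system.Timer;

public class FixedStepTimer extends Timer {

    private final float framerate;
    private long ticks = 0;

    public FixedStepTimer(float framerate) {this.framerate = framerate;}

    // exactly one frame "passes" each time the engine updates the timer
    public void update() {ticks++;}

    // time is measured in frames, at a resolution of `framerate` ticks per second
    public long getTime() {return ticks;}
    public long getResolution() {return (long) framerate;}

    public float getFrameRate() {return framerate;}
    public float getTimePerFrame() {return 1.0f / framerate;}

    public void reset() {ticks = 0;}
}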


==== Advanced Example ====

The package from aurellem.com was made for AI research and can do more
than just record a single stream of audio and video. You can use it
to:

1.) Create multiple independent listeners that each hear the world
    from their own perspective.

2.) Process the sound data in any way you wish (a small sketch
    follows this list).

3.) Do the same for visual data.
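
As a taste of what a custom sound processor looks like, here is a
hypothetical ''SoundProcessor'' that simply reports the peak amplitude
of every chunk of audio it receives. It uses the same interface and
helper calls as the ''Dancer'' class in the example below; only the
class name and the printing behaviour are invented:


import java.nio.ByteBuffer;

import javax.sound.sampled.AudioFormat;

import org.tritonus.share.sampled.FloatSampleTools;

import com.aurellem.capture.audio.SoundProcessor;

public class PeakMeter implements SoundProcessor {

    /** nothing to clean up */
    public void cleanup() {}

    /** find and print the loudest sample in this chunk of audio */
    public void process(ByteBuffer audioSamples, int numSamples, AudioFormat format) {
        byte[] data = new byte[numSamples];
        float[] out = new float[numSamples];
        audioSamples.clear();
        audioSamples.get(data);
        FloatSampleTools.byte2floatInterleaved(data, 0, out, 0,
            numSamples/format.getFrameSize(), format);

        float max = Float.NEGATIVE_INFINITY;
        for (float f : out){if (f > max) max = f;}
        System.out.println("peak amplitude: " + max);
    }
}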

Here is a more advanced example, which can also be found along with
other examples in the jmeCapture.jar file included in the
distribution.


package com.aurellem.capture.examples;

import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.lang.reflect.Field;
import java.nio.ByteBuffer;

import javax.sound.sampled.AudioFormat;

import org.tritonus.share.sampled.FloatSampleTools;

import com.aurellem.capture.AurellemSystemDelegate;
import com.aurellem.capture.Capture;
import com.aurellem.capture.IsoTimer;
import com.aurellem.capture.audio.CompositeSoundProcessor;
import com.aurellem.capture.audio.MultiListener;
import com.aurellem.capture.audio.SoundProcessor;
import com.aurellem.capture.audio.WaveFileWriter;
import com.jme3.app.SimpleApplication;
import com.jme3.audio.AudioNode;
import com.jme3.audio.Listener;
import com.jme3.cinematic.MotionPath;
import com.jme3.cinematic.events.AbstractCinematicEvent;
import com.jme3.cinematic.events.MotionTrack;
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.math.FastMath;
import com.jme3.math.Quaternion;
import com.jme3.math.Vector3f;
import com.jme3.scene.Geometry;
import com.jme3.scene.Node;
import com.jme3.scene.shape.Box;
import com.jme3.scene.shape.Sphere;
import com.jme3.system.AppSettings;
import com.jme3.system.JmeSystem;

/**
 *
 * Demonstrates advanced use of the audio capture and recording
 * features. Multiple perspectives of the same scene are
 * simultaneously rendered to different sound files.
 *
 * A key limitation of the way multiple listeners are implemented is
 * that only 3D positioning effects are realized for listeners other
 * than the main LWJGL listener. This means that audio effects such
 * as environment settings will *not* be heard on any auxiliary
 * listeners, though sound attenuation will work correctly.
 *
 * Multiple listeners as realized here might be used to make AI
 * entities that can each hear the world from their own perspective.
 *
 * @author Robert McIntyre
 */

public class Advanced extends SimpleApplication {

    /**
     * You will see three grey cubes, a blue sphere, and a path which
     * circles each cube. The blue sphere is generating a constant
     * monotone sound as it moves along the track. Each cube is
     * listening for sound; when a cube hears sound whose intensity is
     * greater than a certain threshold, it changes its color from
     * grey to green.
     *
     * Each cube is also saving whatever it hears to a file. The
     * scene from the perspective of the viewer is also saved to a
     * video file. When you listen to each of the sound files
     * alongside the video, the sound will get louder when the sphere
     * approaches the cube that generated that sound file. This
     * shows that each listener is hearing the world from its own
     * perspective.
     *
     */
    public static void main(String[] args) {
        Advanced app = new Advanced();
        AppSettings settings = new AppSettings(true);
        settings.setAudioRenderer(AurellemSystemDelegate.SEND);
        JmeSystem.setSystemDelegate(new AurellemSystemDelegate());
        app.setSettings(settings);
        app.setShowSettings(false);
        app.setPauseOnLostFocus(false);

        try {
            Capture.captureVideo(app, File.createTempFile("advanced", ".avi"));
            Capture.captureAudio(app, File.createTempFile("advanced", ".wav"));
        }
        catch (IOException e) {e.printStackTrace();}

        app.start();
    }


    private Geometry bell;
    private Geometry ear1;
    private Geometry ear2;
    private Geometry ear3;
    private AudioNode music;
    private MotionTrack motionControl;

    private Geometry makeEar(Node root, Vector3f position){
        Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
        Geometry ear = new Geometry("ear", new Box(1.0f, 1.0f, 1.0f));
        ear.setLocalTranslation(position);
        mat.setColor("Color", ColorRGBA.Green);
        ear.setMaterial(mat);
        root.attachChild(ear);
        return ear;
    }

    private Vector3f[] path = new Vector3f[]{
        // loop 1
        new Vector3f(0, 0, 0),
        new Vector3f(0, 0, -10),
        new Vector3f(-2, 0, -14),
        new Vector3f(-6, 0, -20),
        new Vector3f(0, 0, -26),
        new Vector3f(6, 0, -20),
        new Vector3f(0, 0, -14),
        new Vector3f(-6, 0, -20),
        new Vector3f(0, 0, -26),
        new Vector3f(6, 0, -20),
        // loop 2
        new Vector3f(5, 0, -5),
        new Vector3f(7, 0, 1.5f),
        new Vector3f(14, 0, 2),
        new Vector3f(20, 0, 6),
        new Vector3f(26, 0, 0),
        new Vector3f(20, 0, -6),
        new Vector3f(14, 0, 0),
        new Vector3f(20, 0, 6),
        new Vector3f(26, 0, 0),
        new Vector3f(20, 0, -6),
        new Vector3f(14, 0, 0),
        // loop 3
        new Vector3f(8, 0, 7.5f),
        new Vector3f(7, 0, 10.5f),
        new Vector3f(6, 0, 20),
        new Vector3f(0, 0, 26),
        new Vector3f(-6, 0, 20),
        new Vector3f(0, 0, 14),
        new Vector3f(6, 0, 20),
        new Vector3f(0, 0, 26),
        new Vector3f(-6, 0, 20),
        new Vector3f(0, 0, 14),
        // begin ellipse
        new Vector3f(16, 5, 20),
        new Vector3f(0, 0, 26),
        new Vector3f(-16, -10, 20),
        new Vector3f(0, 0, 14),
        new Vector3f(16, 20, 20),
        new Vector3f(0, 0, 26),
        new Vector3f(-10, -25, 10),
        new Vector3f(-10, 0, 0),
        // come at me!
        new Vector3f(-28.00242f, 48.005623f, -34.648228f),
        new Vector3f(0, 0, -20),
    };

    private void createScene() {
        Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
        bell = new Geometry("sound-emitter", new Sphere(15, 15, 1));
        mat.setColor("Color", ColorRGBA.Blue);
        bell.setMaterial(mat);
        rootNode.attachChild(bell);

        ear1 = makeEar(rootNode, new Vector3f(0, 0, -20));
        ear2 = makeEar(rootNode, new Vector3f(0, 0, 20));
        ear3 = makeEar(rootNode, new Vector3f(20, 0, 0));

        MotionPath track = new MotionPath();

        for (Vector3f v : path){
            track.addWayPoint(v);
        }
        track.setCurveTension(0.80f);

        motionControl = new MotionTrack(bell, track);

        // for now, use reflection to change the timer...
        // motionControl.setTimer(new IsoTimer(60));
        try {
            Field timerField;
            timerField = AbstractCinematicEvent.class.getDeclaredField("timer");
            timerField.setAccessible(true);
            try {timerField.set(motionControl, new IsoTimer(60));}
            catch (IllegalArgumentException e) {e.printStackTrace();}
            catch (IllegalAccessException e) {e.printStackTrace();}
        }
        catch (SecurityException e) {e.printStackTrace();}
        catch (NoSuchFieldException e) {e.printStackTrace();}

        motionControl.setDirectionType(MotionTrack.Direction.PathAndRotation);
        motionControl.setRotation(new Quaternion().fromAngleNormalAxis(-FastMath.HALF_PI, Vector3f.UNIT_Y));
        motionControl.setInitialDuration(20f);
        motionControl.setSpeed(1f);

        track.enableDebugShape(assetManager, rootNode);
        positionCamera();
    }


    private void positionCamera(){
        this.cam.setLocation(new Vector3f(-28.00242f, 48.005623f, -34.648228f));
        this.cam.setRotation(new Quaternion(0.3359635f, 0.34280345f, -0.13281013f, 0.8671653f));
    }

    private void initAudio() {
        org.lwjgl.input.Mouse.setGrabbed(false);
        music = new AudioNode(assetManager, "Sound/Effects/Beep.ogg", false);

        rootNode.attachChild(music);
        audioRenderer.playSource(music);
        music.setPositional(true);
        music.setVolume(1f);
        music.setReverbEnabled(false);
        music.setDirectional(false);
        music.setMaxDistance(200.0f);
        music.setRefDistance(1f);
        //music.setRolloffFactor(1f);
        music.setLooping(false);
        audioRenderer.pauseSource(music);
    }

    public class Dancer implements SoundProcessor {
        Geometry entity;
        float scale = 2;
        public Dancer(Geometry entity){
            this.entity = entity;
        }

        /**
         * this method is irrelevant since there is no state to cleanup.
         */
        public void cleanup() {}


        /**
         * Respond to sound! This is the brain of an AI entity that
         * hears its surroundings and reacts to them.
         */
        public void process(ByteBuffer audioSamples, int numSamples, AudioFormat format) {
            audioSamples.clear();
            byte[] data = new byte[numSamples];
            float[] out = new float[numSamples];
            audioSamples.get(data);
            FloatSampleTools.byte2floatInterleaved(data, 0, out, 0,
                numSamples/format.getFrameSize(), format);

            float max = Float.NEGATIVE_INFINITY;
            for (float f : out){if (f > max) max = f;}
            audioSamples.clear();

            if (max > 0.1){entity.getMaterial().setColor("Color", ColorRGBA.Green);}
            else {entity.getMaterial().setColor("Color", ColorRGBA.Gray);}
        }
    }

    private void prepareEar(Geometry ear, int n){
        if (this.audioRenderer instanceof MultiListener){
            MultiListener rf = (MultiListener)this.audioRenderer;

            Listener auxListener = new Listener();
            auxListener.setLocation(ear.getLocalTranslation());

            rf.addListener(auxListener);
            WaveFileWriter aux = null;

            try {aux = new WaveFileWriter(new File("/home/r/tmp/ear"+n+".wav"));}
            catch (FileNotFoundException e) {e.printStackTrace();}

            rf.registerSoundProcessor(auxListener,
                new CompositeSoundProcessor(new Dancer(ear), aux));
        }
    }


    public void simpleInitApp() {
        this.setTimer(new IsoTimer(60));
        initAudio();

        createScene();

        // give each ear its own output file (ear1.wav, ear2.wav, ear3.wav)
        prepareEar(ear1, 1);
        prepareEar(ear2, 2);
        prepareEar(ear3, 3);

        motionControl.play();
    }

    public void simpleUpdate(float tpf) {
        if (music.getStatus() != AudioNode.Status.Playing){
            music.play();
        }
        Vector3f loc = cam.getLocation();
        Quaternion rot = cam.getRotation();
        listener.setLocation(loc);
        listener.setRotation(rot);
        music.setLocalTranslation(bell.getLocalTranslation());
    }

}



===== More Information =====

This is the old page showing the first version of this idea:
http://aurellem.org/cortex/html/capture-video.html

All source code can be found here:

http://hg.bortreb.com/audio-send
http://hg.bortreb.com/jmeCapture

More information on the modifications to OpenAL that support multiple
listeners can be found here:

http://aurellem.org/audio-send/html/ear.html