changeset 213:319963720179

fleshing out vision
author Robert McIntyre <rlm@mit.edu>
date Thu, 09 Feb 2012 08:11:10 -0700
parents 8e9825c38941
children 01d3e9855ef9
files org/vision.org
diffstat 1 files changed, 99 insertions(+), 57 deletions(-) [+]
     1.1 --- a/org/vision.org	Thu Feb 09 07:39:21 2012 -0700
     1.2 +++ b/org/vision.org	Thu Feb 09 08:11:10 2012 -0700
     1.3 @@ -26,64 +26,33 @@
     1.4  #+caption: jMonkeyEngine supports multiple views to enable split-screen games, like GoldenEye
     1.5  [[../images/goldeneye-4-player.png]]
     1.6  
     1.7 +* Brief Description of jMonkeyEngine's Rendering Pipeline
     1.8  
     1.9 +jMonkeyEngine allows you to create a =ViewPort=, which represents a
    1.10 +view of the simulated world. You can create as many of these as you
    1.11 +want. Every frame, the =RenderManager= iterates through each
     1.12 +=ViewPort=, rendering the scene on the GPU. For each =ViewPort= there
    1.13 +is a =FrameBuffer= which represents the rendered image in the GPU.
    1.14  
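As a rough sketch (the helper name =add-view-port= and its arguments are mine, not part of cortex.vision), creating an extra view of an existing scene in jMonkeyEngine looks like this:

```clojure
(ns example.viewport
  (:import (com.jme3.renderer Camera RenderManager)
           com.jme3.scene.Node))

(defn add-view-port
  "Hypothetical helper: create a new ViewPort on render-manager that
  watches scene through camera."
  [#^RenderManager render-manager #^Node scene #^Camera camera]
  (doto (.createMainView render-manager "extra-view" camera)
    (.setClearFlags true true true)  ; clear color, depth, and stencil
    (.attachScene scene)))
```

Every frame, the =RenderManager= will now render =scene= from =camera= into this =ViewPort='s =FrameBuffer= alongside all the others.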
    1.15 -Make the continuation in scene-processor take FrameBuffer,
    1.16 -byte-buffer, BufferedImage already sized to the correct
    1.17 -dimensions. the continuation will decide wether to "mix" them
    1.18 -into the BufferedImage, lazily ignore them, or mix them halfway
    1.19 -and call c/graphics card routines.
    1.20 +Each =ViewPort= can have any number of attached =SceneProcessor=
    1.21 +objects, which are called every time a new frame is rendered. A
     1.22 +=SceneProcessor= receives a =FrameBuffer= and can do whatever it wants
     1.23 +to the data.  Often this consists of invoking GPU-specific operations
    1.24 +on the rendered image.  The =SceneProcessor= can also copy the GPU
    1.25 +image data to RAM and process it with the CPU.
    1.26  
    1.27 -(vision creature) will take an optional :skip argument which will
    1.28 -inform the continuations in scene processor to skip the given
    1.29 -number of cycles 0 means that no cycles will be skipped.
    1.30 +* The Vision Pipeline
    1.31  
    1.32 -(vision creature) will return [init-functions sensor-functions].
    1.33 -The init-functions are each single-arg functions that take the
    1.34 -world and register the cameras and must each be called before the
    1.35 -corresponding sensor-functions.  Each init-function returns the
    1.36 -viewport for that eye which can be manipulated, saved, etc. Each
    1.37 -sensor-function is a thunk and will return data in the same
    1.38 -format as the tactile-sensor functions the structure is
    1.39 -[topology, sensor-data]. Internally, these sensor-functions
    1.40 -maintain a reference to sensor-data which is periodically updated
    1.41 -by the continuation function established by its init-function.
    1.42 -They can be queried every cycle, but their information may not
    1.43 -necessairly be different every cycle.
    1.44 -
    1.45 -Each eye in the creature in blender will work the same way as
    1.46 -joints -- a zero dimensional object with no geometry whose local
    1.47 -coordinate system determines the orientation of the resulting
    1.48 -eye. All eyes will have a parent named "eyes" just as all joints
    1.49 -have a parent named "joints". The resulting camera will be a
    1.50 -ChaseCamera or a CameraNode bound to the geo that is closest to
    1.51 -the eye marker. The eye marker will contain the metadata for the
    1.52 -eye, and will be moved by it's bound geometry. The dimensions of
    1.53 -the eye's camera are equal to the dimensions of the eye's "UV"
    1.54 -map.
    1.55 -
    1.56 -#+name: eyes
    1.57 -#+begin_src clojure 
    1.58 -(ns cortex.vision
    1.59 -  "Simulate the sense of vision in jMonkeyEngine3. Enables multiple
    1.60 -  eyes from different positions to observe the same world, and pass
    1.61 -  the observed data to any arbitray function. Automatically reads
    1.62 -  eye-nodes from specially prepared blender files and instanttiates
    1.63 -  them in the world as actual eyes."
    1.64 -  {:author "Robert McIntyre"}
    1.65 -  (:use (cortex world sense util))
    1.66 -  (:use clojure.contrib.def)
    1.67 -  (:import com.jme3.post.SceneProcessor)
    1.68 -  (:import (com.jme3.util BufferUtils Screenshots))
    1.69 -  (:import java.nio.ByteBuffer)
    1.70 -  (:import java.awt.image.BufferedImage)
    1.71 -  (:import (com.jme3.renderer ViewPort Camera))
    1.72 -  (:import com.jme3.math.ColorRGBA)
    1.73 -  (:import com.jme3.renderer.Renderer)
    1.74 -  (:import com.jme3.app.Application)
    1.75 -  (:import com.jme3.texture.FrameBuffer)
    1.76 -  (:import (com.jme3.scene Node Spatial)))
    1.77 -
     1.78 +Each eye in the simulated creature needs its own =ViewPort= so that
     1.79 +it can see the world from its own perspective. To this =ViewPort=, I
     1.80 +add a =SceneProcessor= that feeds the visual data to any arbitrary
     1.81 +continuation function for further processing.  That continuation
     1.82 +function may perform both CPU and GPU operations on the data. To make
     1.83 +this easy for the continuation function, the =SceneProcessor=
     1.84 +maintains appropriately sized buffers in RAM to hold the data.  It does
     1.85 +not do any copying from the GPU to the CPU itself.
    1.86 +#+name: pipeline-1
    1.87 +#+begin_src clojure
    1.88  (defn vision-pipeline
    1.89    "Create a SceneProcessor object which wraps a vision processing
    1.90    continuation function. The continuation is a function that takes 
    1.91 @@ -115,7 +84,19 @@
    1.92       (.clear @byte-buffer)
    1.93       (continuation @renderer fb @byte-buffer @image))
    1.94      (cleanup []))))
    1.95 -    
    1.96 +#+end_src
    1.97 +
    1.98 +The continuation function given to =(vision-pipeline)= above will be
    1.99 +given a =Renderer= and three containers for image data. The
    1.100 +=FrameBuffer= references the GPU image data, but it cannot be used
    1.101 +directly on the CPU.  The =ByteBuffer= and =BufferedImage= are
    1.102 +initially "empty" but are sized to hold the data in the
    1.103 +=FrameBuffer=. I call transferring the GPU image data to the CPU
   1.104 +structures "mixing" the image data. I have provided three functions to
   1.105 +do this mixing.
   1.106 +
   1.107 +#+name: pipeline-2
   1.108 +#+begin_src clojure
   1.109  (defn frameBuffer->byteBuffer!
   1.110    "Transfer the data in the graphics card (Renderer, FrameBuffer) to
   1.111     the CPU (ByteBuffer)."  
   1.112 @@ -134,7 +115,48 @@
   1.113    [#^Renderer r #^FrameBuffer fb #^ByteBuffer bb #^BufferedImage bi]
   1.114    (byteBuffer->bufferedImage!
   1.115     (frameBuffer->byteBuffer! r fb bb) bi))
   1.116 +#+end_src
   1.117  
   1.118 +Note that it is possible to write vision processing algorithms
   1.119 +entirely in terms of =BufferedImage= inputs. Just compose that
   1.120 +=BufferedImage= algorithm with =(BufferedImage!)=. However, a vision
   1.121 +processing algorithm that is entirely hosted on the GPU does not have
    1.122 +to pay for this convenience.
   1.123 +
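For instance, a =BufferedImage=-only analysis can be composed with =(BufferedImage!)= to make a complete continuation. Here =average-blue= is a hypothetical analysis function, not part of cortex.vision:

```clojure
(defn average-blue
  "Hypothetical CPU-side analysis: mean value of the blue channel
  over the whole image."
  [#^BufferedImage image]
  (let [w (.getWidth image) h (.getHeight image)]
    (/ (reduce + (for [x (range w) y (range h)]
                   (bit-and 0xFF (.getRGB image x y))))
       (* w h))))

(defn average-blue-continuation
  "Continuation for (vision-pipeline): mix the GPU data into the
  BufferedImage, then run the CPU-side analysis on it."
  [r fb bb image]
  (println "average blue:" (average-blue (BufferedImage! r fb bb image))))
```

This pays the full GPU-to-CPU copy every frame, which is exactly the cost a GPU-hosted algorithm avoids.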
   1.124 +
   1.125 +* Physical Eyes
   1.126 +
    1.127 +The vision pipeline described above only deals with image data, not
    1.128 +with the simulated eyes that produce it.
   1.128 +Each eye in the creature in blender will work the same way as
   1.129 +joints -- a zero dimensional object with no geometry whose local
   1.130 +coordinate system determines the orientation of the resulting
   1.131 +eye. All eyes will have a parent named "eyes" just as all joints
   1.132 +have a parent named "joints". The resulting camera will be a
   1.133 +ChaseCamera or a CameraNode bound to the geo that is closest to
   1.134 +the eye marker. The eye marker will contain the metadata for the
    1.135 +eye, and will be moved by its bound geometry. The dimensions of
   1.136 +the eye's camera are equal to the dimensions of the eye's "UV"
   1.137 +map.
   1.138 +
   1.139 +(vision creature) will take an optional :skip argument which will
   1.140 +inform the continuations in scene processor to skip the given
    1.141 +number of cycles; 0 means that no cycles will be skipped.
   1.142 +
   1.143 +(vision creature) will return [init-functions sensor-functions].
   1.144 +The init-functions are each single-arg functions that take the
   1.145 +world and register the cameras and must each be called before the
   1.146 +corresponding sensor-functions.  Each init-function returns the
   1.147 +viewport for that eye which can be manipulated, saved, etc. Each
   1.148 +sensor-function is a thunk and will return data in the same
    1.149 +format as the tactile-sensor functions: the structure is
   1.150 +[topology, sensor-data]. Internally, these sensor-functions
   1.151 +maintain a reference to sensor-data which is periodically updated
   1.152 +by the continuation function established by its init-function.
   1.153 +They can be queried every cycle, but their information may not
    1.154 +necessarily be different every cycle.
   1.155 +
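Under the interface sketched above, using the vision sense would look roughly like this (=creature= and =world= are assumed to already exist; this is a sketch of the planned API, not tested code):

```clojure
(let [[init-fns sensor-fns] (vision creature)]
  ;; register every eye's camera with the world first
  (dorun (map (fn [init!] (init! world)) init-fns))
  ;; then each sensor-function can be polled as a thunk
  (doseq [eye-sensor sensor-fns]
    (let [[topology sensor-data] (eye-sensor)]
      (println (count sensor-data) "visual sensor values"))))
```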
   1.156 +
   1.157 +#+begin_src clojure
   1.158  (defn add-camera!
   1.159    "Add a camera to the world, calling continuation on every frame
   1.160    produced." 
   1.161 @@ -326,8 +348,28 @@
   1.162         (.rotate candy (* tpf 0.2) 0 0)))))
   1.163  #+end_src
   1.164  
   1.165 -#+results: test-vision
   1.166 -: #'cortex.test.vision/test-two-eyes
   1.167 +#+name: vision-header
   1.168 +#+begin_src clojure 
   1.169 +(ns cortex.vision
   1.170 +  "Simulate the sense of vision in jMonkeyEngine3. Enables multiple
   1.171 +  eyes from different positions to observe the same world, and pass
    1.172 +  the observed data to any arbitrary function. Automatically reads
    1.173 +  eye-nodes from specially prepared blender files and instantiates
   1.174 +  them in the world as actual eyes."
   1.175 +  {:author "Robert McIntyre"}
   1.176 +  (:use (cortex world sense util))
   1.177 +  (:use clojure.contrib.def)
   1.178 +  (:import com.jme3.post.SceneProcessor)
   1.179 +  (:import (com.jme3.util BufferUtils Screenshots))
   1.180 +  (:import java.nio.ByteBuffer)
   1.181 +  (:import java.awt.image.BufferedImage)
   1.182 +  (:import (com.jme3.renderer ViewPort Camera))
   1.183 +  (:import com.jme3.math.ColorRGBA)
   1.184 +  (:import com.jme3.renderer.Renderer)
   1.185 +  (:import com.jme3.app.Application)
   1.186 +  (:import com.jme3.texture.FrameBuffer)
   1.187 +  (:import (com.jme3.scene Node Spatial)))
   1.188 +#+end_src
   1.189  
   1.190  The example code will create two videos of the same rotating object
   1.191  from different angles. It can be used both for stereoscopic vision