comparison org/vision.org @ 213:319963720179
fleshing out vision

| author   | Robert McIntyre <rlm@mit.edu>   |
| date     | Thu, 09 Feb 2012 08:11:10 -0700 |
| parents  | 8e9825c38941                    |
| children | 01d3e9855ef9                    |

comparing 212:8e9825c38941 with 213:319963720179

and then projecting it back onto a surface in the 3D world.

#+caption: jMonkeyEngine supports multiple views to enable split-screen games, like GoldenEye
[[../images/goldeneye-4-player.png]]

* Brief Description of jMonkeyEngine's Rendering Pipeline

jMonkeyEngine allows you to create a =ViewPort=, which represents a
view of the simulated world. You can create as many of these as you
want. Every frame, the =RenderManager= iterates through each
=ViewPort=, rendering the scene on the GPU. For each =ViewPort= there
is a =FrameBuffer= which represents the rendered image on the GPU.
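
For illustration, here is a minimal sketch of creating an additional
=ViewPort= (=add-view!= is my own example, not part of cortex.vision,
and assumes a running =SimpleApplication= named =app=):

#+begin_src clojure
;; Create a second ViewPort that watches the same scene through its
;; own independent camera.
(defn add-view!
  [#^com.jme3.app.SimpleApplication app]
  (let [cam  (.clone (.getCamera app))
        view (.createMainView (.getRenderManager app) "extra-view" cam)]
    (.setClearFlags view true true true)   ; clear color, depth, stencil
    (.attachScene view (.getRootNode app)) ; render the same scene graph
    view))
#+end_src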

Each =ViewPort= can have any number of attached =SceneProcessor=
objects, which are called every time a new frame is rendered. A
=SceneProcessor= receives a =FrameBuffer= and can do whatever it wants
with the data. Often this consists of invoking GPU-specific operations
on the rendered image. The =SceneProcessor= can also copy the GPU
image data to RAM and process it with the CPU.
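
Attaching a processor to a view is a single call. In this sketch,
=view= is any =ViewPort=, the processor comes from
=(vision-pipeline)=, defined in the next section, and
=my-continuation= is a placeholder:

#+begin_src clojure
;; Run my-continuation on every frame rendered into this view.
(.addProcessor view (vision-pipeline my-continuation))
#+end_src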

* The Vision Pipeline

Each eye in the simulated creature needs its own =ViewPort= so that
it can see the world from its own perspective. To this =ViewPort=, I
add a =SceneProcessor= that feeds the visual data to any arbitrary
continuation function for further processing. That continuation
function may perform both CPU and GPU operations on the data. To make
this easy for the continuation function, the =SceneProcessor=
maintains appropriately sized buffers in RAM to hold the data. It does
not do any copying from the GPU to the CPU itself.

#+name: pipeline-1
#+begin_src clojure
(defn vision-pipeline
  "Create a SceneProcessor object which wraps a vision processing
  continuation function. The continuation is a function that takes
  [#^Renderer r #^FrameBuffer fb #^ByteBuffer b #^BufferedImage bi],
  each of which has already been appropriately sized."
  ;; ... (the body of the processor is unchanged in this revision and
  ;; is elided by the comparison view, except for its final methods) ...
      (postFrame
       [#^FrameBuffer fb]
       (.clear @byte-buffer)
       (continuation @renderer fb @byte-buffer @image))
      (cleanup []))))
#+end_src

The continuation function given to =(vision-pipeline)= above will be
given a =Renderer= and three containers for image data. The
=FrameBuffer= references the GPU image data, but it cannot be used
directly on the CPU. The =ByteBuffer= and =BufferedImage= are
initially "empty" but are sized to hold the data in the
=FrameBuffer=. I call transferring the GPU image data to the CPU
structures "mixing" the image data. I have provided three functions to
do this mixing.

#+name: pipeline-2
#+begin_src clojure
(defn frameBuffer->byteBuffer!
  "Transfer the data in the graphics card (Renderer, FrameBuffer) to
  the CPU (ByteBuffer)."
  [#^Renderer r #^FrameBuffer fb #^ByteBuffer bb]
  (.readFrameBuffer r fb bb) bb)

;; ... (byteBuffer->bufferedImage! is unchanged in this revision and
;; elided by the comparison view) ...

(defn BufferedImage!
  "Continuation which will grab the buffered image from the materials
  provided by (vision-pipeline)."
  [#^Renderer r #^FrameBuffer fb #^ByteBuffer bb #^BufferedImage bi]
  (byteBuffer->bufferedImage!
   (frameBuffer->byteBuffer! r fb bb) bi))
#+end_src

Note that it is possible to write vision processing algorithms
entirely in terms of =BufferedImage= inputs. Just compose that
=BufferedImage= algorithm with =(BufferedImage!)=. However, a vision
processing algorithm that is entirely hosted on the GPU does not have
to pay for this convenience.
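
As a concrete sketch of this composition, here is a hypothetical
CPU-side algorithm (=mean-brightness= is my own example and not part
of cortex.vision):

#+begin_src clojure
;; Average the red channel over every pixel of a BufferedImage.
(defn mean-brightness
  [#^BufferedImage bi]
  (let [w (.getWidth bi) h (.getHeight bi)]
    (/ (double
        (reduce +
                (for [x (range w) y (range h)]
                  (bit-and 0xFF (bit-shift-right (.getRGB bi x y) 16)))))
       (* w h))))

;; Composed with (BufferedImage!) as a vision-pipeline continuation:
(vision-pipeline
 (fn [r fb bb bi]
   (println (mean-brightness (BufferedImage! r fb bb bi)))))
#+end_src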

* Physical Eyes

The vision pipeline described above only deals with the flow of
rendered image data; the simulated creature also needs physical eyes,
placed and oriented within its body.

Each eye in the creature in blender will work the same way as
joints -- a zero-dimensional object with no geometry whose local
coordinate system determines the orientation of the resulting
eye. All eyes will have a parent named "eyes" just as all joints
have a parent named "joints". The resulting camera will be a
ChaseCamera or a CameraNode bound to the geo that is closest to
the eye marker. The eye marker will contain the metadata for the
eye, and will be moved by its bound geometry. The dimensions of
the eye's camera are equal to the dimensions of the eye's "UV"
map.
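
As a sketch of how the eye markers might be located (=eye-nodes= is my
own illustration, assuming =creature= is the jMonkeyEngine =Node=
loaded from the blender file):

#+begin_src clojure
;; Collect the zero-dimensional eye markers under the "eyes" parent
;; node. (.getChild searches the scene graph by name.)
(defn eye-nodes
  [#^Node creature]
  (if-let [eyes (.getChild creature "eyes")]
    (seq (.getChildren eyes))
    []))
#+end_src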

=(vision creature)= will take an optional =:skip= argument, which will
inform the continuations in the scene processors to skip the given
number of cycles; 0 means that no cycles will be skipped.

=(vision creature)= will return =[init-functions sensor-functions]=.
The init-functions are each single-arg functions that take the
world and register the cameras; each must be called before the
corresponding sensor-functions. Each init-function returns the
viewport for that eye, which can be manipulated, saved, etc. Each
sensor-function is a thunk which returns data in the same format
as the tactile-sensor functions: the structure is
[topology, sensor-data]. Internally, these sensor-functions
maintain a reference to sensor-data which is periodically updated
by the continuation function established by its init-function.
They can be queried every cycle, but their information may not
necessarily be different every cycle.
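
A sketch of how this planned interface would be used (=creature= and
=world= are placeholders; none of this is implemented yet):

#+begin_src clojure
(let [[init-fns sensor-fns] (vision creature :skip 0)]
  ;; register each eye's camera with the running world
  (dorun (map (fn [init!] (init! world)) init-fns))
  ;; then, on each simulation cycle, poll every eye
  (doseq [eye sensor-fns]
    (let [[topology sensor-data] (eye)]
      (println (count sensor-data) "pixels of visual data"))))
#+end_src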

#+begin_src clojure
(defn add-camera!
  "Add a camera to the world, calling continuation on every frame
  produced."
  [#^Application world camera continuation]
  (let [width (.getWidth camera)
;; ... (the rest of add-camera!, its helpers, and the cortex.test.vision
;; example are unchanged in this revision and elided by the comparison
;; view; the example ends as follows) ...
       (add-camera! world (.getCamera world) no-op)))
     (fn [world tpf]
       (.rotate candy (* tpf 0.2) 0 0)))))
#+end_src

#+name: vision-header
#+begin_src clojure
(ns cortex.vision
  "Simulate the sense of vision in jMonkeyEngine3. Enables multiple
  eyes from different positions to observe the same world, and pass
  the observed data to any arbitrary function. Automatically reads
  eye-nodes from specially prepared blender files and instantiates
  them in the world as actual eyes."
  {:author "Robert McIntyre"}
  (:use (cortex world sense util))
  (:use clojure.contrib.def)
  (:import com.jme3.post.SceneProcessor)
  (:import (com.jme3.util BufferUtils Screenshots))
  (:import java.nio.ByteBuffer)
  (:import java.awt.image.BufferedImage)
  (:import (com.jme3.renderer ViewPort Camera))
  (:import com.jme3.math.ColorRGBA)
  (:import com.jme3.renderer.Renderer)
  (:import com.jme3.app.Application)
  (:import com.jme3.texture.FrameBuffer)
  (:import (com.jme3.scene Node Spatial)))
#+end_src

The example code will create two videos of the same rotating object
from different angles. It can be used both for stereoscopic vision
simulation and for simulating multiple creatures, each with their own
sense of vision.