#+title: Simulated Sense of Sight
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Simulated sight for AI research using jMonkeyEngine3 and Clojure
#+keywords: computer vision, jMonkeyEngine3, clojure
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+babel: :mkdirp yes :noweb yes :exports both

* JMonkeyEngine natively supports multiple views of the same world.

Vision is one of the most important senses for humans, so I need to
build a simulated sense of vision for my AI. I will do this with
simulated eyes. Each eye can be independently moved and should see its
own version of the world depending on where it is.

Making these simulated eyes a reality is simple because jMonkeyEngine
already contains extensive support for multiple views of the same 3D
simulated world. jMonkeyEngine has this support because it is
necessary for creating games with split-screen views. Multiple views
are also used to create efficient pseudo-reflections by rendering the
scene from a certain perspective and then projecting it back onto a
surface in the 3D world.

#+caption: jMonkeyEngine supports multiple views to enable split-screen games, like GoldenEye, which was one of the first games to use split-screen views.
[[../images/goldeneye-4-player.png]]

** =ViewPorts=, =SceneProcessors=, and the =RenderManager=.
# =ViewPorts= are cameras; =RenderManager= takes snapshots each frame.
#* A Brief Description of jMonkeyEngine's Rendering Pipeline

jMonkeyEngine allows you to create a =ViewPort=, which represents a
view of the simulated world. You can create as many of these as you
want. Every frame, the =RenderManager= iterates through each
=ViewPort=, rendering the scene on the GPU. For each =ViewPort= there
is a =FrameBuffer= which represents the rendered image in the GPU.

#+caption: =ViewPorts= are cameras in the world. During each frame, the =RenderManager= records a snapshot of what each view is currently seeing; these snapshots are =FrameBuffer= objects.
#+ATTR_HTML: width="400"
[[../images/diagram_rendermanager2.png]]

Each =ViewPort= can have any number of attached =SceneProcessor=
objects, which are called every time a new frame is rendered. A
=SceneProcessor= receives its =ViewPort's= =FrameBuffer= and can do
whatever it wants to the data. Often this consists of invoking
GPU-specific operations on the rendered image. The =SceneProcessor=
can also copy the GPU image data to RAM and process it with the CPU.

** From Views to Vision
# Appropriating Views for Vision.

Each eye in the simulated creature needs its own =ViewPort= so that
it can see the world from its own perspective. To this =ViewPort=, I
add a =SceneProcessor= that feeds the visual data to any arbitrary
continuation function for further processing. That continuation
function may perform both CPU and GPU operations on the data. To make
this easy for the continuation function, the =SceneProcessor=
maintains appropriately sized buffers in RAM to hold the data. It does
not do any copying from the GPU to the CPU itself because that is a
slow operation.

#+name: pipeline-1
#+begin_src clojure
(defn vision-pipeline
  "Create a SceneProcessor object which wraps a vision processing
  continuation function. The continuation is a function that takes
  [#^Renderer r #^FrameBuffer fb #^ByteBuffer b #^BufferedImage bi],
  each of which has already been appropriately sized."
  [continuation]
  (let [byte-buffer (atom nil)
        renderer (atom nil)
        image (atom nil)]
    (proxy [SceneProcessor] []
      (initialize
        [renderManager viewPort]
        (let [cam (.getCamera viewPort)
              width (.getWidth cam)
              height (.getHeight cam)]
          (reset! renderer (.getRenderer renderManager))
          (reset! byte-buffer
                  (BufferUtils/createByteBuffer
                   (* width height 4)))
          (reset! image (BufferedImage.
                         width height
                         BufferedImage/TYPE_4BYTE_ABGR))))
      (isInitialized [] (not (nil? @byte-buffer)))
      (reshape [_ _ _])
      (preFrame [_])
      (postQueue [_])
      (postFrame
        [#^FrameBuffer fb]
        (.clear @byte-buffer)
        (continuation @renderer fb @byte-buffer @image))
      (cleanup []))))
#+end_src

The continuation function given to =vision-pipeline= above will be
given a =Renderer= and three containers for image data. The
=FrameBuffer= references the GPU image data, but the pixel data
cannot be used directly on the CPU. The =ByteBuffer= and
=BufferedImage= are initially "empty" but are sized to hold the data
in the =FrameBuffer=. I call transferring the GPU image data to the
CPU structures "mixing" the image data. I have provided three
functions to do this mixing.

#+name: pipeline-2
#+begin_src clojure
(defn frameBuffer->byteBuffer!
  "Transfer the data in the graphics card (Renderer, FrameBuffer) to
  the CPU (ByteBuffer)."
  [#^Renderer r #^FrameBuffer fb #^ByteBuffer bb]
  (.readFrameBuffer r fb bb) bb)

(defn byteBuffer->bufferedImage!
  "Convert the C-style BGRA image data in the ByteBuffer bb to the AWT
  style ABGR image data and place it in BufferedImage bi."
  [#^ByteBuffer bb #^BufferedImage bi]
  (Screenshots/convertScreenShot bb bi) bi)

(defn BufferedImage!
  "Continuation which will grab the buffered image from the materials
  provided by (vision-pipeline)."
  [#^Renderer r #^FrameBuffer fb #^ByteBuffer bb #^BufferedImage bi]
  (byteBuffer->bufferedImage!
   (frameBuffer->byteBuffer! r fb bb) bi))
#+end_src

Note that it is possible to write vision processing algorithms
entirely in terms of =BufferedImage= inputs. Just compose that
=BufferedImage= algorithm with =BufferedImage!=. However, a vision
processing algorithm that is entirely hosted on the GPU does not have
to pay for this convenience.

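For example, a continuation that reports only the average brightness
of each frame can be written purely against =BufferedImage= and then
composed with =BufferedImage!=. This is a hypothetical sketch;
=mean-brightness= is not part of cortex:

#+begin_src clojure
(defn mean-brightness
  "Return the average brightness (0.0 -- 1.0) of a BufferedImage."
  [#^BufferedImage bi]
  (let [w (.getWidth bi) h (.getHeight bi)]
    (/ (reduce
        + (for [x (range w) y (range h)]
            (let [rgb (.getRGB bi x y)]
              ;; sum the 8-bit R, G, and B fields of this pixel
              (+ (bit-and 0xFF (bit-shift-right rgb 16))
                 (bit-and 0xFF (bit-shift-right rgb 8))
                 (bit-and 0xFF rgb)))))
       (* 255.0 3 w h))))

;; composed into a continuation suitable for vision-pipeline:
;; (vision-pipeline (comp println mean-brightness BufferedImage!))
#+end_src
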
* Optical sensor arrays are described with images and referenced with metadata
The vision pipeline described above handles the flow of rendered
images. Now, we need simulated eyes to serve as the source of these
images.

Eyes are described in blender in the same way as joints. They are
zero-dimensional empty objects with no geometry whose local coordinate
system determines the orientation of the resulting eye. All eyes are
children of a parent node named "eyes" just as all joints have a
parent named "joints". An eye binds to the nearest physical object
with =bind-sense=.

#+name: add-eye
#+begin_src clojure
(in-ns 'cortex.vision)

(defn add-eye!
  "Create a Camera centered on the current position of 'eye which
  follows the closest physical node in 'creature. The camera will
  point in the X direction and use the Z vector as up as determined
  by the rotation of these vectors in blender coordinate space. Use
  XZY rotation for the node in blender."
  [#^Node creature #^Spatial eye]
  (let [target (closest-node creature eye)
        [cam-width cam-height]
        ;;[640 480] ;; graphics card on laptop doesn't support
        ;; arbitrary dimensions.
        (eye-dimensions eye)
        cam (Camera. cam-width cam-height)
        rot (.getWorldRotation eye)]
    (.setLocation cam (.getWorldTranslation eye))
    (.lookAtDirection
     cam                          ; this part is not a mistake and
     (.mult rot Vector3f/UNIT_X)  ; is consistent with using Z in
     (.mult rot Vector3f/UNIT_Y)) ; blender as the UP vector.
    (.setFrustumPerspective
     cam (float 45)
     (float (/ (.getWidth cam) (.getHeight cam)))
     (float 1)
     (float 1000))
    (bind-sense target cam) cam))
#+end_src

#+results: add-eye
: #'cortex.vision/add-eye!

Here, the camera is created based on metadata on the eye-node and
attached to the nearest physical object with =bind-sense=.

** The Retina

An eye is a surface (the retina) which contains many discrete sensors
to detect light. These sensors can have different light-sensing
properties. In humans, each discrete sensor is sensitive to red,
blue, green, or gray. These different types of sensors can have
different spatial distributions along the retina. In humans, there is
a fovea in the center of the retina which has a very high density of
color sensors, and a blind spot which has no sensors at all. Sensor
density decreases in proportion to distance from the fovea.

I want to be able to model any retinal configuration, so my eye-nodes
in blender contain metadata pointing to images that describe the
precise position of the individual sensors using white pixels. The
metadata also describes the precise sensitivity to light that the
sensors described in the image have. An eye can contain any number of
these images. For example, the metadata for an eye might look like
this:

#+begin_src clojure
{0xFF0000 "Models/test-creature/retina-small.png"}
#+end_src

#+caption: The retinal profile image "Models/test-creature/retina-small.png". White pixels are photo-sensitive elements. The distribution of white pixels is denser in the middle and falls off at the edges, a pattern inspired by the human retina.
[[../assets/Models/test-creature/retina-small.png]]

Together, the number 0xFF0000 and the image above describe the
placement of red-sensitive sensory elements.

Metadata that very crudely approximates a human eye might look
something like this:

#+begin_src clojure
(let [retinal-profile "Models/test-creature/retina-small.png"]
  {0xFF0000 retinal-profile
   0x00FF00 retinal-profile
   0x0000FF retinal-profile
   0xFFFFFF retinal-profile})
#+end_src

The numbers that serve as keys in the map determine a sensor's
relative sensitivity to the channels red, green, and blue. These
sensitivity values are packed into an integer in the order =|_|R|G|B|=
in 8-bit fields. The RGB values of a pixel in the image are added
together with these sensitivities as linear weights. Therefore,
0xFF0000 means sensitive to red only while 0xFFFFFF means sensitive to
all colors equally (gray).

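To make the weighting concrete, here are a few values as computed by
=pixel-sense=, which is defined below in the vision kernel:

#+begin_src clojure
(pixel-sense 0xFF0000 0xFF0000) ;; => 1.0, red sensor on a pure red pixel
(pixel-sense 0xFF0000 0x0000FF) ;; => 0.0, red sensor on a pure blue pixel
(pixel-sense 0xFFFFFF 0x808080) ;; => ~0.502, gray sensor on 50% gray
#+end_src
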
For convenience I've defined a few symbols for the more common
sensitivity values.

#+name: sensitivity
#+begin_src clojure
(def sensitivity-presets
  "Retinal sensitivity presets for sensors that extract one channel
  (:red :blue :green) or average all channels (:all)"
  {:all   0xFFFFFF
   :red   0xFF0000
   :blue  0x0000FF
   :green 0x00FF00})
#+end_src

** Metadata Processing

=retina-sensor-profile= extracts a map from the eye-node in the same
format as the example maps above. =eye-dimensions= finds the
dimensions of the smallest image required to contain all the retinal
sensor maps.

#+name: retina
#+begin_src clojure
(defn retina-sensor-profile
  "Return a map of pixel sensitivity numbers to BufferedImages
  describing the distribution of light-sensitive components of this
  eye. :red, :green, :blue, and :all are already defined as extracting
  the red, green, blue, and average components respectively."
  [#^Spatial eye]
  (if-let [eye-map (meta-data eye "eye")]
    (map-vals
     load-image
     (eval (read-string eye-map)))))

(defn eye-dimensions
  "Returns [width, height] determined by the metadata of the eye."
  [#^Spatial eye]
  (let [dimensions
        (map #(vector (.getWidth %) (.getHeight %))
             (vals (retina-sensor-profile eye)))]
    [(apply max (map first dimensions))
     (apply max (map second dimensions))]))
#+end_src

* Importing and parsing descriptions of eyes.
First off, get the children of the "eyes" empty node to find all the
eyes the creature has.

#+name: eye-node
#+begin_src clojure
(def
  ^{:doc "Return the children of the creature's \"eyes\" node."
    :arglists '([creature])}
  eyes
  (sense-nodes "eyes"))
#+end_src

Then, add the camera created by =add-eye!= to the simulation by
creating a new viewport.

#+name: add-camera
#+begin_src clojure
(in-ns 'cortex.vision)
(defn add-camera!
  "Add a camera to the world, calling continuation on every frame
  produced."
  [#^Application world camera continuation]
  (let [width (.getWidth camera)
        height (.getHeight camera)
        render-manager (.getRenderManager world)
        viewport (.createMainView render-manager "eye-view" camera)]
    (doto viewport
      (.setClearFlags true true true)
      (.setBackgroundColor ColorRGBA/Black)
      (.addProcessor (vision-pipeline continuation))
      (.attachScene (.getRootNode world)))))
#+end_src

#+results: add-camera
: #'cortex.vision/add-camera!

The eye's continuation function should register the viewport with the
simulation the first time it is called, then use the CPU to extract
the appropriate pixels from the rendered image and weight them by
each sensor's sensitivity. I have the option to do this processing in
native code for a slight gain in speed. I could also do it on the GPU
for a massive gain in speed. =vision-kernel= generates a list of such
continuation functions, one for each channel of the eye.

#+name: kernel
#+begin_src clojure
(in-ns 'cortex.vision)

(defrecord attached-viewport [vision-fn viewport-fn]
  clojure.lang.IFn
  (invoke [this world] (vision-fn world))
  (applyTo [this args] (apply vision-fn args)))

(defn pixel-sense
  "Weight a pixel's 8-bit RGB fields by the 8-bit RGB fields of
  sensitivity, returning a float between 0.0 and 1.0."
  [sensitivity pixel]
  (let [s-r (bit-shift-right (bit-and 0xFF0000 sensitivity) 16)
        s-g (bit-shift-right (bit-and 0x00FF00 sensitivity) 8)
        s-b (bit-and 0x0000FF sensitivity)

        p-r (bit-shift-right (bit-and 0xFF0000 pixel) 16)
        p-g (bit-shift-right (bit-and 0x00FF00 pixel) 8)
        p-b (bit-and 0x0000FF pixel)

        total-sensitivity (* 255 (+ s-r s-g s-b))]
    (float (/ (+ (* s-r p-r)
                 (* s-g p-g)
                 (* s-b p-b))
              total-sensitivity))))

(defn vision-kernel
  "Returns a list of functions, each of which will return a color
  channel's worth of visual information when called inside a running
  simulation."
  [#^Node creature #^Spatial eye & {skip :skip :or {skip 0}}]
  (let [retinal-map (retina-sensor-profile eye)
        camera (add-eye! creature eye)
        vision-image
        (atom
         (BufferedImage. (.getWidth camera)
                         (.getHeight camera)
                         BufferedImage/TYPE_BYTE_BINARY))
        register-eye!
        (runonce
         (fn [world]
           (add-camera!
            world camera
            (let [counter (atom 0)]
              (fn [r fb bb bi]
                (if (zero? (rem (swap! counter inc) (inc skip)))
                  (reset! vision-image
                          (BufferedImage! r fb bb bi))))))))]
    (vec
     (map
      (fn [[key image]]
        (let [whites (white-coordinates image)
              topology (vec (collapse whites))
              ;; look up the preset, or use the raw packed number
              sensitivity (sensitivity-presets key key)]
          (attached-viewport.
           (fn [world]
             (register-eye! world)
             (vector
              topology
              (vec
               (for [[x y] whites]
                 (pixel-sense
                  sensitivity
                  (.getRGB @vision-image x y))))))
           register-eye!)))
      retinal-map))))

(defn gen-fix-display
  "Create a function to call to restore a simulation's display when it
  is disrupted by a Viewport."
  []
  (runonce
   (fn [world]
     (add-camera! world (.getCamera world) no-op))))
#+end_src

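The optional =:skip= argument to =vision-kernel= throttles how often
the cached retina image is refreshed: the counter in the continuation
updates =vision-image= only on every =(inc skip)=-th rendered frame.
For example:

#+begin_src clojure
;; sketch: assuming 'creature and 'eye are nodes loaded from a
;; blender file, refresh this eye's image on every third frame only.
(vision-kernel creature eye :skip 2)
#+end_src
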
Note that since each of the functions generated by =vision-kernel=
shares the same =register-eye!= function, the eye will be registered
only once, the first time any of the functions from the list returned
by =vision-kernel= is called. Each of the functions returned by
=vision-kernel= also allows access to the =ViewPort= through which
it receives images.
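
Since each element of that list is an =attached-viewport= record, its
fields can also be read directly; =an-attached-viewport= below is a
placeholder for any element of the list:

#+begin_src clojure
(:vision-fn   an-attached-viewport) ;; the sense function itself
(:viewport-fn an-attached-viewport) ;; the shared register-eye! function
#+end_src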

The in-game display can be disrupted by all the ViewPorts that the
functions generated by =vision-kernel= add. This doesn't affect the
simulation or the simulated senses, but can be annoying.
=gen-fix-display= restores the in-simulation display.

** The =vision!= function creates sensory probes.

All the hard work has been done; all that remains is to apply
=vision-kernel= to each eye in the creature and gather the results
into one list of functions.

#+name: main
#+begin_src clojure
(defn vision!
  "Returns a list of functions, each of which returns visual sensory
  data when called inside a running simulation."
  [#^Node creature & {skip :skip :or {skip 0}}]
  (reduce
   concat
   (for [eye (eyes creature)]
     (vision-kernel creature eye :skip skip))))
#+end_src

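As a usage sketch (not part of cortex), sampling every channel of
every eye once inside a running simulation looks like this; each
element called on the world yields a =[topology sensor-values]= pair:

#+begin_src clojure
(defn sample-vision
  "Call each function generated by vision! on the world, collecting
  [topology sensor-values] pairs for every channel of every eye."
  [vision-fns world]
  (map #(% world) vision-fns))
#+end_src
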
** Displaying visual data for debugging.
# Visualization of Vision. Maybe less alliteration would be better.
It's vital to have a visual representation for each sense. Here I use
=view-sense= to construct a function that will create a display for
visual data.

#+name: display
#+begin_src clojure
(in-ns 'cortex.vision)

(defn view-vision
  "Creates a function which accepts a list of visual sensor-data and
  displays each element of the list to the screen."
  []
  (view-sense
   (fn
     [[coords sensor-data]]
     (let [image (points->image coords)]
       (dorun
        (for [i (range (count coords))]
          (.setRGB image ((coords i) 0) ((coords i) 1)
                   (gray (int (* 255 (sensor-data i)))))))
       image))))
#+end_src

* Demonstrations
** Demonstrating the vision pipeline.

This is a basic test for the vision system. It only tests the
=vision-pipeline= and does not deal with loading eyes from a blender
file. The code creates two videos of the same rotating cube from
different angles.

#+name: test-1
#+begin_src clojure
(in-ns 'cortex.test.vision)

(defn test-pipeline
  "Testing vision:
  Tests the vision system by creating two views of the same rotating
  object from different angles and displaying both of those views in
  JFrames.

  You should see a rotating cube, and two windows,
  each displaying a different view of the cube."
  ([] (test-pipeline false))
  ([record?]
   (let [candy
         (box 1 1 1 :physical? false :color ColorRGBA/Blue)]
     (world
      (doto (Node.)
        (.attachChild candy))
      {}
      (fn [world]
        (let [cam (.clone (.getCamera world))
              width (.getWidth cam)
              height (.getHeight cam)]
          (add-camera! world cam
                       (comp
                        (view-image
                         (if record?
                           (File. "/home/r/proj/cortex/render/vision/1")))
                        BufferedImage!))
          (add-camera! world
                       (doto (.clone cam)
                         (.setLocation (Vector3f. -10 0 0))
                         (.lookAt Vector3f/ZERO Vector3f/UNIT_Y))
                       (comp
                        (view-image
                         (if record?
                           (File. "/home/r/proj/cortex/render/vision/2")))
                        BufferedImage!))
          (let [timer (IsoTimer. 60)]
            (.setTimer world timer)
            (display-dilated-time world timer))
          ;; This is here to restore the main view
          ;; after the other views have completed processing
          (add-camera! world (.getCamera world) no-op)))
      (fn [world tpf]
        (.rotate candy (* tpf 0.2) 0 0))))))
#+end_src

#+results: test-1
: #'cortex.test.vision/test-pipeline

#+begin_html
<div class="figure">
<video controls="controls" width="755">
  <source src="../video/spinning-cube.ogg" type="video/ogg"
          preload="none" poster="../images/aurellem-1280x480.png" />
</video>
<br> <a href="http://youtu.be/r5Bn2aG7MO0"> YouTube </a>
<p>A rotating cube viewed from two different perspectives.</p>
</div>
#+end_html

Multiple eyes created in this way can be used for stereoscopic vision
simulation in a single creature or for simulating multiple creatures,
each with its own sense of vision.
** Demonstrating eye import and parsing.

To the worm from the last post, I add a new node that describes its
eyes.

#+attr_html: width=755
#+caption: The worm with newly added empty nodes describing a single eye.
[[../images/worm-with-eye.png]]

The node highlighted in yellow is the root level "eyes" node. It has
a single child, highlighted in orange, which describes a single
eye. This is the "eye" node. It is placed so that the worm will have
an eye located in the center of the flat portion of its lower
hemispherical section.

The two nodes which are not highlighted describe the single joint of
the worm.

The metadata of the eye-node is:

#+begin_src clojure :results verbatim :exports both
(cortex.sense/meta-data
 (.getChild (.getChild (cortex.test.body/worm) "eyes") "eye") "eye")
#+end_src

#+results:
: "(let [retina \"Models/test-creature/retina-small.png\"]
:    {:all retina :red retina :green retina :blue retina})"

This is the approximation to the human eye described earlier.

#+name: test-2
#+begin_src clojure
(in-ns 'cortex.test.vision)

(defn change-color [obj color]
  ;;(println-repl obj)
  (if obj
    (.setColor (.getMaterial obj) "Color" color)))

(defn colored-cannon-ball [color]
  (comp #(change-color % color)
        (fire-cannon-ball)))

(defn gen-worm
  "Create a creature acceptable for testing as a replacement for the
  worm."
  []
  (nodify
   "worm"
   [(nodify
     "eyes"
     [(doto
          (Node. "eye1")
        (.setLocalTranslation (Vector3f. 0 -1.1 0))
        (.setUserData
         "eye"
         "(let [retina \"Models/test-creature/retina-small.png\"]
            {:all retina :red retina
             :green retina :blue retina})"))])
    (box
     0.2 0.2 0.2
     :name "worm-segment"
     :position (Vector3f. 0 0 0)
     :color ColorRGBA/Orange)]))

(defn test-worm-vision
  "Testing vision:
  You should see the worm suspended in mid-air, looking down at a
  table. There are four small displays, one each for red, green,
  blue, and gray channels. You can fire balls of various colors, and
  the four channels should react accordingly.

  Keys:
    r : fire red ball
    b : fire blue ball
    g : fire green ball
    <space> : fire white ball"
  ([] (test-worm-vision false))
  ([record?]
   (let [the-worm (doto (worm) (body!))
         vision (vision! the-worm)
         vision-display (view-vision)
         fix-display (gen-fix-display)
         me (sphere 0.5 :color ColorRGBA/Blue :physical? false)
         x-axis
         (box 1 0.01 0.01 :physical? false :color ColorRGBA/Red
              :position (Vector3f. 0 -5 0))
         y-axis
         (box 0.01 1 0.01 :physical? false :color ColorRGBA/Green
              :position (Vector3f. 0 -5 0))
         z-axis
         (box 0.01 0.01 1 :physical? false :color ColorRGBA/Blue
              :position (Vector3f. 0 -5 0))]

     (world
      (nodify [(floor) the-worm x-axis y-axis z-axis me])
      (merge standard-debug-controls
             {"key-r" (colored-cannon-ball ColorRGBA/Red)
              "key-b" (colored-cannon-ball ColorRGBA/Blue)
              "key-g" (colored-cannon-ball ColorRGBA/Green)})

      (fn [world]
        (light-up-everything world)
        (speed-up world)
        (let [timer (IsoTimer. 60)]
          (.setTimer world timer)
          (display-dilated-time world timer))
        ;; record the main view if requested
        (if record?
          (Capture/captureVideo
           world
           (File.
            "/home/r/proj/cortex/render/worm-vision/main-view")))
        ;; add a view from the worm's perspective
        (add-camera!
         world
         (add-eye! the-worm (first (eyes the-worm)))
         (comp
          (view-image
           (if record?
             (File.
              "/home/r/proj/cortex/render/worm-vision/worm-view")))
          BufferedImage!))

        (set-gravity world Vector3f/ZERO)
        (add-camera! world (.getCamera world) no-op))

      (fn [world _]
        (.setLocalTranslation me (.getLocation (.getCamera world)))
        (vision-display
         (map #(% world) vision)
         (if record?
           (File. "/home/r/proj/cortex/render/worm-vision")))
        (fix-display world))))))
#+end_src

#+results: test-2
: #'cortex.test.vision/test-worm-vision

The world consists of the worm and a flat gray floor. I can shoot red,
green, blue, and white cannonballs at the worm. The worm is initially
looking down at the floor, and there is no gravity. My perspective
(the Main View), the worm's perspective (Worm View) and the 4 sensor
channels that comprise the worm's eye are all saved frame-by-frame to
disk.

* Demonstration of Vision
#+begin_html
<div class="figure">
<video controls="controls" width="755">
  <source src="../video/worm-vision.ogg" type="video/ogg"
          preload="none" poster="../images/aurellem-1280x480.png" />
</video>
<br> <a href="http://youtu.be/J3H3iB_2NPQ"> YouTube </a>
<p>Simulated Vision in a Virtual Environment</p>
</div>
#+end_html

** Generate the Worm Video from Frames
#+name: magick2
#+begin_src clojure
(ns cortex.video.magick2
  (:import java.io.File)
  (:use clojure.java.shell))

(defn images [path]
  (sort (rest (file-seq (File. path)))))

(def base "/home/r/proj/cortex/render/worm-vision/")

(defn pics [file]
  (images (str base file)))

(defn combine-images []
  (let [main-view (pics "main-view")
        worm-view (pics "worm-view")
        blue (pics "0")
        green (pics "1")
        red (pics "2")
        gray (pics "3")
        blender (let [b-pics (pics "blender")]
                  (concat b-pics (repeat 9001 (last b-pics))))
        background (repeat 9001 (File. (str base "background.png")))
        targets (map
                 #(File. (str base "out/" (format "%07d.png" %)))
                 (range 0 (count main-view)))]
    (dorun
     (pmap
      (comp
       (fn [[background main-view worm-view red green blue gray blender target]]
         (println target)
         (sh "convert"
             background
             main-view "-geometry" "+18+17" "-composite"
             worm-view "-geometry" "+677+17" "-composite"
             green "-geometry" "+685+430" "-composite"
             red "-geometry" "+788+430" "-composite"
             blue "-geometry" "+894+430" "-composite"
             gray "-geometry" "+1000+430" "-composite"
             blender "-geometry" "+0+0" "-composite"
             target))
       (fn [& args] (map #(.getCanonicalPath %) args)))
      background main-view worm-view red green blue gray blender targets))))
#+end_src

#+begin_src sh :results silent
cd /home/r/proj/cortex/render/worm-vision
ffmpeg -r 25 -b 9001k -i out/%07d.png -vcodec libtheora worm-vision.ogg
#+end_src

* Onward!
- As a neat bonus, the idea behind simulated vision also enables one
  to [[../../cortex/html/capture-video.html][capture live video feeds from jMonkeyEngine]].
- Now that we have vision, it's time to tackle [[./hearing.org][hearing]].
#+appendix

* Headers

#+name: vision-header
#+begin_src clojure
(ns cortex.vision
  "Simulate the sense of vision in jMonkeyEngine3. Enables multiple
  eyes from different positions to observe the same world, and pass
  the observed data to any arbitrary function. Automatically reads
  eye-nodes from specially prepared blender files and instantiates
  them in the world as actual eyes."
  {:author "Robert McIntyre"}
  (:use (cortex world sense util))
  (:import com.jme3.post.SceneProcessor)
  (:import (com.jme3.util BufferUtils Screenshots))
  (:import java.nio.ByteBuffer)
  (:import java.awt.image.BufferedImage)
  (:import (com.jme3.renderer ViewPort Camera))
  (:import (com.jme3.math ColorRGBA Vector3f Matrix3f))
  (:import com.jme3.renderer.Renderer)
  (:import com.jme3.app.Application)
  (:import com.jme3.texture.FrameBuffer)
  (:import (com.jme3.scene Node Spatial)))
#+end_src

#+name: test-header
#+begin_src clojure
(ns cortex.test.vision
  (:use (cortex world sense util body vision))
  (:use cortex.test.body)
  (:import java.awt.image.BufferedImage)
  (:import javax.swing.JPanel)
  (:import javax.swing.SwingUtilities)
  (:import java.awt.Dimension)
  (:import javax.swing.JFrame)
  (:import com.jme3.math.ColorRGBA)
  (:import com.jme3.scene.Node)
  (:import com.jme3.math.Vector3f)
  (:import java.io.File)
  (:import (com.aurellem.capture Capture RatchetTimer IsoTimer)))
#+end_src

#+results: test-header
: com.aurellem.capture.IsoTimer

* Source Listing
- [[../src/cortex/vision.clj][cortex.vision]]
- [[../src/cortex/test/vision.clj][cortex.test.vision]]
- [[../src/cortex/video/magick2.clj][cortex.video.magick2]]
- [[../assets/Models/subtitles/worm-vision-subtitles.blend][worm-vision-subtitles.blend]]
#+html: <ul> <li> <a href="../org/sense.org">This org file</a> </li> </ul>
- [[http://hg.bortreb.com][source-repository]]

* Next
I find some [[./hearing.org][ears]] for the creature while exploring the guts of
jMonkeyEngine's sound system.

* COMMENT Generate Source
#+begin_src clojure :tangle ../src/cortex/vision.clj
<<vision-header>>
<<pipeline-1>>
<<pipeline-2>>
<<retina>>
<<add-eye>>
<<sensitivity>>
<<eye-node>>
<<add-camera>>
<<kernel>>
<<main>>
<<display>>
#+end_src

#+begin_src clojure :tangle ../src/cortex/test/vision.clj
<<test-header>>
<<test-1>>
<<test-2>>
#+end_src

#+begin_src clojure :tangle ../src/cortex/video/magick2.clj
<<magick2>>
#+end_src