comparison org/vision.org @ 219:5f14fd7b1288

minor corrections from reviewing with dad
author Robert McIntyre <rlm@mit.edu>
date Sat, 11 Feb 2012 00:51:54 -0700
parents ac46ee4e574a
children 3c9724c8d86b
@@ -33,14 +33,14 @@
 =ViewPort=, rendering the scene in the GPU. For each =ViewPort= there
 is a =FrameBuffer= which represents the rendered image in the GPU.
 
 Each =ViewPort= can have any number of attached =SceneProcessor=
 objects, which are called every time a new frame is rendered. A
-=SceneProcessor= recieves a =FrameBuffer= and can do whatever it wants
-to the data. Often this consists of invoking GPU specific operations
-on the rendered image. The =SceneProcessor= can also copy the GPU
-image data to RAM and process it with the CPU.
+=SceneProcessor= receives its =ViewPort='s =FrameBuffer= and can do
+whatever it wants to the data. Often this consists of invoking
+GPU-specific operations on the rendered image. The =SceneProcessor=
+can also copy the GPU image data to RAM and process it with the CPU.
 
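To make the =SceneProcessor= contract concrete, here is a minimal
sketch (editorial, not part of this changeset) of a processor that
hands each finished frame's =FrameBuffer= to a callback. It assumes
=com.jme3.post.SceneProcessor= is imported as in the =ns= declaration
at the end of this file; =frame-grabber= and =continuation= are
hypothetical names:

#+begin_src clojure
(defn frame-grabber
  "Sketch of a SceneProcessor: after every rendered frame, pass the
   ViewPort's FrameBuffer to (continuation). Attach to a view port
   with (.addProcessor view-port (frame-grabber some-fn))."
  [continuation]
  (proxy [SceneProcessor] []
    (initialize [render-manager view-port]) ; no setup required
    (reshape [view-port width height])      ; ignore window resizes
    (isInitialized [] true)
    (preFrame [tpf])
    (postQueue [render-queue])
    (postFrame [frame-buffer]
      ;; rendering is complete; the image lives in the GPU-side buffer
      (continuation frame-buffer))
    (cleanup [])))
#+end_src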
 * The Vision Pipeline
 
 Each eye in the simulated creature needs its own =ViewPort= so that
 it can see the world from its own perspective. To this =ViewPort=, I
@@ -89,11 +89,11 @@
 
 The continuation function given to =(vision-pipeline)= above will be
 given a =Renderer= and three containers for image data. The
 =FrameBuffer= references the GPU image data, but the pixel data
 cannot be used directly on the CPU. The =ByteBuffer= and =BufferedImage=
-are initially "empty" but are sized to hold to data in the
+are initially "empty" but are sized to hold the data in the
 =FrameBuffer=. I call transferring the GPU image data to the CPU
 structures "mixing" the image data. I have provided three functions to
 do this mixing.
 
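As a rough sketch of what one mixing step amounts to (editorial;
=mix-image-data= is a hypothetical name, and the three real functions
follow in =pipeline-2= below; assumes the imports from the =ns=
declaration at the end of this file plus =com.jme3.texture.FrameBuffer=):

#+begin_src clojure
(defn mix-image-data
  "Sketch: copy the GPU pixels behind frame-buffer into byte-buffer,
   then decode them into buffered-image for use on the CPU."
  [#^Renderer renderer #^FrameBuffer frame-buffer
   #^ByteBuffer byte-buffer #^BufferedImage buffered-image]
  (.clear byte-buffer)
  ;; GPU -> RAM
  (.readFrameBuffer renderer frame-buffer byte-buffer)
  ;; raw bytes -> java.awt.image.BufferedImage
  (Screenshots/convertScreenShot byte-buffer buffered-image)
  buffered-image)
#+end_src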
 #+name: pipeline-2
@@ -193,11 +193,11 @@
 properties. In humans, each discrete sensor is sensitive to red,
 blue, green, or gray. These different types of sensors can have
 different spatial distributions along the retina. In humans, there is
 a fovea in the center of the retina which has a very high density of
 color sensors, and a blind spot which has no sensors at all. Sensor
-density decreases in proportion to distance from the retina.
+density decreases in proportion to distance from the fovea.
 
 I want to be able to model any retinal configuration, so my eye-nodes
 in blender contain metadata pointing to images that describe the
 precise position of the individual sensors using white pixels. The
 metadata also describes the precise sensitivity to light that the
@@ -243,18 +243,18 @@
 {:all 0xFFFFFF
 :red 0xFF0000
 :blue 0x0000FF
 :green 0x00FF00}
 "Retinal sensitivity presets for sensors that extract one channel
-(:red :blue :green) or average all channels (:gray)")
+(:red :blue :green) or average all channels (:all)")
 #+end_src
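For instance, masking a packed 24-bit RGB pixel with one of these
presets keeps only the channels that sensor type responds to. A
hypothetical illustration (=sensor-response= is not part of the
original code):

#+begin_src clojure
(defn sensor-response
  "The 0-255 intensity a sensor with the given sensitivity mask
   reports for a packed 24-bit RGB pixel."
  [mask pixel]
  (let [masked (bit-and mask pixel)
        r (bit-shift-right (bit-and 0xFF0000 masked) 16)
        g (bit-shift-right (bit-and 0x00FF00 masked) 8)
        b (bit-and 0x0000FF masked)]
    (if (= mask 0xFFFFFF)
      (quot (+ r g b) 3) ; :all averages all three channels
      (+ r g b))))       ; one-channel masks pass that channel through

;; (sensor-response 0xFF0000 0x8040C0) => 128, the red byte 0x80
#+end_src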
 
 ** Metadata Processing
 
 =(retina-sensor-profile)= extracts a map from the eye-node in the same
 format as the example maps above. =(eye-dimensions)= finds the
-dimansions of the smallest image required to contain all the retinal
+dimensions of the smallest image required to contain all the retinal
 sensor maps.
 
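A hypothetical REPL interaction showing the shape of what these two
functions return (the image paths, dimensions, and =eye-node= are all
made up for illustration):

#+begin_src clojure
;; (retina-sensor-profile eye-node)
;; => {:all "Models/test-creature/retina-gray.png"
;;     :red "Models/test-creature/retina-red.png"}

;; (eye-dimensions eye-node)
;; => [512 512] ; smallest image covering every sensor map
#+end_src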
 #+name: retina
 #+begin_src clojure
 (defn retina-sensor-profile
@@ -459,11 +459,11 @@
 
 #+name: test-1
 #+begin_src clojure
 (in-ns 'cortex.test.vision)
 
-(defn test-two-eyes
+(defn test-pipeline
   "Testing vision:
   Tests the vision system by creating two views of the same rotating
   object from different angles and displaying both of those views in
   JFrames.
 
@@ -691,11 +691,11 @@
   them in the world as actual eyes."
   {:author "Robert McIntyre"}
   (:use (cortex world sense util))
   (:use clojure.contrib.def)
   (:import com.jme3.post.SceneProcessor)
   (:import (com.jme3.util BufferUtils Screenshots))
   (:import java.nio.ByteBuffer)
   (:import java.awt.image.BufferedImage)
   (:import (com.jme3.renderer ViewPort Camera))
   (:import (com.jme3.math ColorRGBA Vector3f Matrix3f))
   (:import com.jme3.renderer.Renderer)