changeset 219:5f14fd7b1288

minor corrections from reviewing with dad
author Robert McIntyre <rlm@mit.edu>
date Sat, 11 Feb 2012 00:51:54 -0700
parents ac46ee4e574a
children c5f6d880558b
files org/vision.org
diffstat 1 files changed, 10 insertions(+), 10 deletions(-)
     1.1 --- a/org/vision.org	Fri Feb 10 12:06:41 2012 -0700
     1.2 +++ b/org/vision.org	Sat Feb 11 00:51:54 2012 -0700
     1.3 @@ -35,10 +35,10 @@
     1.4  
     1.5  Each =ViewPort= can have any number of attached =SceneProcessor=
     1.6  objects, which are called every time a new frame is rendered. A
     1.7 -=SceneProcessor= recieves a =FrameBuffer= and can do whatever it wants
     1.8 -to the data.  Often this consists of invoking GPU specific operations
     1.9 -on the rendered image.  The =SceneProcessor= can also copy the GPU
    1.10 -image data to RAM and process it with the CPU.
    1.11 +=SceneProcessor= receives its =ViewPort's= =FrameBuffer= and can do
    1.12 +whatever it wants to the data.  Often this consists of invoking
    1.13 +GPU-specific operations on the rendered image.  The =SceneProcessor=
    1.14 +can also copy the GPU image data to RAM and process it with the CPU.
    1.15  
    1.16  * The Vision Pipeline
    1.17  
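
A minimal sketch of how such a processor is attached, assuming the
jME3 imports from the =cortex.vision= namespace shown at the end of
this changeset; =frame-grabber= and its =continuation= callback are
illustrative names, not functions defined in vision.org:

#+begin_src clojure
(defn frame-grabber
  "Build a SceneProcessor that passes each rendered FrameBuffer to
   continuation. Sketch only; error handling and sizing omitted."
  [continuation]
  (let [initialized? (atom false)]
    (proxy [SceneProcessor] []
      (initialize [render-manager view-port] (reset! initialized? true))
      (isInitialized [] @initialized?)
      (reshape [view-port width height])
      (preFrame [tpf])
      (postQueue [render-queue])
      ;; postFrame fires once per rendered frame; out is the
      ;; ViewPort's FrameBuffer, still resident on the GPU.
      (postFrame [out] (continuation out))
      (cleanup []))))

;; attach it so it runs every frame, e.g.:
;; (.addProcessor view-port (frame-grabber my-fn))
#+end_src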
    1.18 @@ -91,7 +91,7 @@
    1.19  given a =Renderer= and three containers for image data. The
    1.20  =FrameBuffer= references the GPU image data, but the pixel data can
    1.21  not be used directly on the CPU.  The =ByteBuffer= and =BufferedImage=
    1.22 -are initially "empty" but are sized to hold to data in the
    1.23 +are initially "empty" but are sized to hold the data in the
    1.24  =FrameBuffer=. I call transferring the GPU image data to the CPU
    1.25  structures "mixing" the image data. I have provided three functions to
    1.26  do this mixing.
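
As a sketch of what one such mixing step can look like, assuming the
buffers were pre-sized as described; =mix-frame!= is an illustrative
name, not one of the three functions from vision.org:

#+begin_src clojure
(defn mix-frame!
  "Copy the GPU image in frame-buffer into byte-buffer, then unpack
   it into buffered-image for CPU-side processing. Sketch only."
  [renderer frame-buffer byte-buffer buffered-image]
  (.clear byte-buffer)
  ;; Renderer.readFrameBuffer pulls the pixels off the GPU ...
  (.readFrameBuffer renderer frame-buffer byte-buffer)
  ;; ... and Screenshots/convertScreenShot rewrites them into the
  ;; BufferedImage's raster for ordinary CPU access.
  (Screenshots/convertScreenShot byte-buffer buffered-image)
  buffered-image)
#+end_src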
    1.27 @@ -195,7 +195,7 @@
    1.28  different spatial distributions along the retina. In humans, there is
    1.29  a fovea in the center of the retina which has a very high density of
    1.30  color sensors, and a blind spot which has no sensors at all. Sensor
    1.31 -density decreases in proportion to distance from the retina.
    1.32 +density decreases in proportion to distance from the fovea.
    1.33  
    1.34  I want to be able to model any retinal configuration, so my eye-nodes
    1.35  in blender contain metadata pointing to images that describe the
    1.36 @@ -245,14 +245,14 @@
    1.37     :blue   0x0000FF
    1.38     :green  0x00FF00}
    1.39    "Retinal sensitivity presets for sensors that extract one channel
    1.40 -   (:red :blue :green) or average all channels (:gray)")
    1.41 +   (:red :blue :green) or average all channels (:all)")
    1.42  #+end_src
    1.43  
    1.44  ** Metadata Processing
    1.45  
    1.46  =(retina-sensor-profile)= extracts a map from the eye-node in the same
    1.47  format as the example maps above.  =(eye-dimensions)= finds the
    1.48 -dimansions of the smallest image required to contain all the retinal
    1.49 +dimensions of the smallest image required to contain all the retinal
    1.50  sensor maps.
    1.51  
    1.52  #+name: retina
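
For concreteness, the map that =(retina-sensor-profile)= produces has
the shape sketched below; the image paths are hypothetical stand-ins,
and =eye-dimensions-sketch= only illustrates the idea behind
=(eye-dimensions)=, assuming the sensor maps are already loaded as
=BufferedImage= objects:

#+begin_src clojure
;; hypothetical eye-node metadata: sensitivity preset -> sensor map
{:all   "Models/eyes/retina-dense.png"
 :red   "Models/eyes/cones-red.png"
 :green "Models/eyes/cones-green.png"
 :blue  "Models/eyes/cones-blue.png"}

;; the smallest image containing every sensor map is just the
;; maximum width and height over all of the maps
(defn eye-dimensions-sketch [sensor-maps]
  [(apply max (map #(.getWidth  %) sensor-maps))
   (apply max (map #(.getHeight %) sensor-maps))])
#+end_src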
    1.53 @@ -461,7 +461,7 @@
    1.54  #+begin_src clojure
    1.55  (in-ns 'cortex.test.vision)
    1.56  
    1.57 -(defn test-two-eyes
    1.58 +(defn test-pipeline
    1.59    "Testing vision:
    1.60     Tests the vision system by creating two views of the same rotating
    1.61     object from different angles and displaying both of those views in
    1.62 @@ -693,7 +693,7 @@
    1.63    (:use (cortex world sense util))
    1.64    (:use clojure.contrib.def)
    1.65    (:import com.jme3.post.SceneProcessor)
    1.66 -  (:import (com.jme3.util BufferUtils Screenshots))
    1.67 +  (:import (com.jme3.util BufferUtils Screenshots))
    1.68    (:import java.nio.ByteBuffer)
    1.69    (:import java.awt.image.BufferedImage)
    1.70    (:import (com.jme3.renderer ViewPort Camera))