# HG changeset patch
# User Robert McIntyre
# Date 1328946714 25200
# Node ID 5f14fd7b12885ffab6fb93d39bf0c520418fe6a0
# Parent  ac46ee4e574a309afe78ac5dccd6a86ff6c8b89c
minor corrections from reviewing with dad

diff -r ac46ee4e574a -r 5f14fd7b1288 org/vision.org
--- a/org/vision.org	Fri Feb 10 12:06:41 2012 -0700
+++ b/org/vision.org	Sat Feb 11 00:51:54 2012 -0700
@@ -35,10 +35,10 @@
 Each =ViewPort= can have any number of attached =SceneProcessor=
 objects, which are called every time a new frame is rendered. A
-=SceneProcessor= recieves a =FrameBuffer= and can do whatever it wants
-to the data. Often this consists of invoking GPU specific operations
-on the rendered image. The =SceneProcessor= can also copy the GPU
-image data to RAM and process it with the CPU.
+=SceneProcessor= receives its =ViewPort's= =FrameBuffer= and can do
+whatever it wants to the data. Often this consists of invoking GPU
+specific operations on the rendered image. The =SceneProcessor= can
+also copy the GPU image data to RAM and process it with the CPU.
 
 * The Vision Pipeline
 
@@ -91,7 +91,7 @@
 given a =Renderer= and three containers for image data. The
 =FrameBuffer= references the GPU image data, but the pixel data can
 not be used directly on the CPU. The =ByteBuffer= and =BufferedImage=
-are initially "empty" but are sized to hold to data in the
+are initially "empty" but are sized to hold the data in the
 =FrameBuffer=. I call transfering the GPU image data to the CPU
 structures "mixing" the image data. I have provided three functions
 to do this mixing.
@@ -195,7 +195,7 @@
 different spatial distributions along the retina. In humans, there is
 a fovea in the center of the retina which has a very high density of
 color sensors, and a blind spot which has no sensors at all. Sensor
-density decreases in proportion to distance from the retina.
+density decreases in proportion to distance from the fovea.
 
 I want to be able to model any retinal configuration, so my eye-nodes
 in blender contain metadata pointing to images that describe the
@@ -245,14 +245,14 @@
    :blue  0x0000FF
    :green 0x00FF00}
   "Retinal sensitivity presets for sensors that extract one channel
-   (:red :blue :green) or average all channels (:gray)")
+   (:red :blue :green) or average all channels (:all)")
 #+end_src
 
 ** Metadata Processing
 
 =(retina-sensor-profile)= extracts a map from the eye-node in the
 same format as the example maps above. =(eye-dimensions)= finds the
-dimansions of the smallest image required to contain all the retinal
+dimensions of the smallest image required to contain all the retinal
 sensor maps.
 
 #+name: retina
@@ -461,7 +461,7 @@
 #+begin_src clojure
 (in-ns 'cortex.test.vision)
 
-(defn test-two-eyes
+(defn test-pipeline
   "Testing vision:
    Tests the vision system by creating two views of the same rotating
    object from different angles and displaying both of those views in
@@ -693,7 +693,7 @@
   (:use (cortex world sense util))
   (:use clojure.contrib.def)
   (:import com.jme3.post.SceneProcessor)
-  (:import (com.jme3.util Buffe rUtils Screenshots))
+  (:import (com.jme3.util BufferUtils Screenshots))
   (:import java.nio.ByteBuffer)
   (:import java.awt.image.BufferedImage)
   (:import (com.jme3.renderer ViewPort Camera))
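
The retinal-sensitivity presets corrected in the hunk above (`:red 0xFF0000`, `:green 0x00FF00`, `:blue 0x0000FF`, and `:all` averaging every channel) are bit masks over a packed RGB pixel. As a rough standalone sketch of how such a mask selects one channel from a pixel — the class and method names here are hypothetical illustrations, not code from `vision.org`:

```java
public class ChannelMask {
    // Stand-ins for the presets in vision.org's sensitivity map:
    // :red 0xFF0000, :green 0x00FF00, :blue 0x0000FF.
    static final int RED   = 0xFF0000;
    static final int GREEN = 0x00FF00;
    static final int BLUE  = 0x0000FF;

    /** Extract the channel selected by mask from a packed RGB pixel,
     *  shifted down to a 0-255 intensity value. */
    static int channel(int pixel, int mask) {
        // Keep only the masked bits, then shift by the mask's trailing
        // zero count so the channel lands in the low byte.
        return (pixel & mask) >>> Integer.numberOfTrailingZeros(mask);
    }

    /** The :all preset averages the three channel intensities. */
    static int average(int pixel) {
        return (channel(pixel, RED)
              + channel(pixel, GREEN)
              + channel(pixel, BLUE)) / 3;
    }

    public static void main(String[] args) {
        int pixel = 0x336699;  // R=0x33, G=0x66, B=0x99
        System.out.println(channel(pixel, RED));    // 51
        System.out.println(channel(pixel, GREEN));  // 102
        System.out.println(channel(pixel, BLUE));   // 153
        System.out.println(average(pixel));         // 102
    }
}
```

This is why the docstring change from `(:gray)` to `(:all)` matters: the keyword names the average-of-all-channels preset, which has no single mask of its own.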