changeset 270:aa3641042958

minor formatting changes.
author Robert McIntyre <rlm@mit.edu>
date Tue, 14 Feb 2012 05:30:55 -0700
parents 6446e964810f
children 5833b4ce877a
files org/vision.org
diffstat 1 files changed, 0 insertions(+), 25 deletions(-)
--- a/org/vision.org	Tue Feb 14 05:00:02 2012 -0700
+++ b/org/vision.org	Tue Feb 14 05:30:55 2012 -0700
@@ -13,7 +13,6 @@
 # /value/ at a particular point =(f x)=. Mathematicians have this
 # problem with their notation; we don't need it in ours.
 
-#* Vision
 * JMonkeyEngine natively supports multiple views of the same world.
  
 Vision is one of the most important senses for humans, so I need to
@@ -138,26 +137,6 @@
 processing algorithm that is entirely hosted on the GPU does not have
 to pay for this convienence.
 
-* COMMENT asdasd 
-
-(vision creature) will take an optional :skip argument which will
-inform the continuations in scene processor to skip the given
-number of cycles 0 means that no cycles will be skipped.
-
-(vision creature) will return [init-functions sensor-functions].
-The init-functions are each single-arg functions that take the
-world and register the cameras and must each be called before the
-corresponding sensor-functions.  Each init-function returns the
-viewport for that eye which can be manipulated, saved, etc. Each
-sensor-function is a thunk and will return data in the same
-format as the tactile-sensor functions the structure is
-[topology, sensor-data]. Internally, these sensor-functions
-maintain a reference to sensor-data which is periodically updated
-by the continuation function established by its init-function.
-They can be queried every cycle, but their information may not
-necessairly be different every cycle.
-
-# * Optical sensor arrays are described as images and stored as metadata.
 * Optical sensor arrays are described with images and referenced with metadata
 The vision pipeline described above handles the flow of rendered
 images. Now, we need simulated eyes to serve as the source of these
@@ -198,8 +177,6 @@
 
 Here, the camera is created based on metadata on the eye-node and
 attached to the nearest physical object with =(bind-sense)=
-
-
 ** The Retina
 
 An eye is a surface (the retina) which contains many discrete sensors
@@ -745,8 +722,6 @@
   (:import java.io.File)
   (:import (com.aurellem.capture Capture RatchetTimer)))
 #+end_src
-
-
 * Source Listing
   - [[../src/cortex/vision.clj][cortex.vision]]
   - [[../src/cortex/test/vision.clj][cortex.test.vision]]
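
The COMMENT block deleted in the second hunk documents the contract of =(vision creature)=: it accepts an optional =:skip= argument and returns =[init-functions sensor-functions]=. A minimal, hypothetical sketch of that contract in plain Clojure — =vision-stub= and its fake topology are illustrations only, not the actual =cortex.vision= implementation, and no jMonkeyEngine world is involved:

```clojure
;; Hypothetical stub illustrating the shape described in the deleted
;; docstring: (vision creature) -> [init-functions sensor-functions].
;; None of these names come from cortex.vision itself.
(defn vision-stub
  "Stand-in for (vision creature) with one fake eye. The :skip
   argument (cycles the scene-processor continuations would skip;
   0 means skip none) is accepted but unused in this stub."
  [& {:keys [skip] :or {skip 0}}]
  (let [sensor-data (atom nil)
        init-fns    [(fn [world]
                       ;; would register this eye's camera with the world;
                       ;; returns the eye's viewport for later manipulation
                       (reset! sensor-data [0 0 0])
                       {:viewport :fake-viewport})]
        sensor-fns  [(fn []
                       ;; thunk: same format as the tactile-sensor
                       ;; functions, i.e. [topology sensor-data]
                       [[[0 0] [0 1] [1 0]] @sensor-data])]]
    [init-fns sensor-fns]))

;; Each init-function must be called (with the world) before its
;; corresponding sensor-function, as the docstring requires.
(let [[inits sensors] (vision-stub :skip 0)]
  ((first inits) :fake-world)
  (println ((first sensors))))  ; → [[[0 0] [0 1] [1 0]] [0 0 0]]
```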