diff org/vision.org @ 276:54ec231dec4c

I changed Capture Video, then merged with Robert.
author Dylan Holmes <ocsenave@gmail.com>
date Wed, 15 Feb 2012 01:16:54 -0600
parents dbecd276b51a
children 23aadf376e9d
line diff
     1.1 --- a/org/vision.org	Wed Feb 15 01:15:15 2012 -0600
     1.2 +++ b/org/vision.org	Wed Feb 15 01:16:54 2012 -0600
     1.3 @@ -7,12 +7,6 @@
     1.4  #+INCLUDE: ../../aurellem/org/level-0.org
     1.5  #+babel: :mkdirp yes :noweb yes :exports both
     1.6  
     1.7 -# SUGGEST: Call functions by their name, without
     1.8 -# parentheses. e.g. =add-eye!=, not =(add-eye!)=. The reason for this
     1.9 -# is that it is potentially easy to confuse the /function/ =f= with its
    1.10 -# /value/ at a particular point =(f x)=. Mathematicians have this
    1.11 -# problem with their notation; we don't need it in ours.
    1.12 -
    1.13  * JMonkeyEngine natively supports multiple views of the same world.
    1.14   
    1.15  Vision is one of the most important senses for humans, so I need to
    1.16 @@ -100,7 +94,7 @@
    1.17      (cleanup []))))
    1.18  #+end_src
    1.19  
    1.20 -The continuation function given to =(vision-pipeline)= above will be
    1.21 +The continuation function given to =vision-pipeline= above will be
    1.22  given a =Renderer= and three containers for image data. The
    1.23  =FrameBuffer= references the GPU image data, but the pixel data can
    1.24  not be used directly on the CPU.  The =ByteBuffer= and =BufferedImage=
    1.25 @@ -133,7 +127,7 @@
    1.26  
    1.27  Note that it is possible to write vision processing algorithms
    1.28  entirely in terms of =BufferedImage= inputs. Just compose that
    1.29 -=BufferedImage= algorithm with =(BufferedImage!)=. However, a vision
    1.30 +=BufferedImage= algorithm with =BufferedImage!=. However, a vision
    1.31  processing algorithm that is entirely hosted on the GPU does not have
     1.32  to pay for this convenience.
    1.33  
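As a concrete illustration of that composition, here is a minimal
sketch. It assumes that =BufferedImage!= turns the containers handed
to a =vision-pipeline= continuation into a plain =BufferedImage=
(the argument order shown is an assumption), and the bright-pixel
analysis is purely illustrative.

#+begin_src clojure
;; Illustrative only: an analysis written entirely against
;; BufferedImage, composed with BufferedImage! so that it can serve
;; as a vision-pipeline continuation.  The argument order assumed for
;; BufferedImage! follows the containers listed above.
(defn bright-red-count
  "Count pixels whose red channel exceeds 200."
  [#^java.awt.image.BufferedImage image]
  (count
   (for [x (range (.getWidth image))
         y (range (.getHeight image))
         :when (< 200 (bit-and 0xFF
                               (bit-shift-right (.getRGB image x y) 16)))]
     [x y])))

(defn bright-red-continuation
  "A continuation built by composing a BufferedImage algorithm
   with BufferedImage!."
  [renderer frame-buffer byte-buffer buffered-image]
  (bright-red-count
   (BufferedImage! renderer frame-buffer byte-buffer buffered-image)))
#+end_src
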
    1.34 @@ -147,7 +141,7 @@
    1.35  system determines the orientation of the resulting eye. All eyes are
     1.36  children of a parent node named "eyes" just as all joints have a
    1.37  parent named "joints". An eye binds to the nearest physical object
    1.38 -with =(bind-sense=).
    1.39 +with =bind-sense=.
    1.40  
    1.41  #+name: add-eye
    1.42  #+begin_src clojure
    1.43 @@ -176,7 +170,7 @@
    1.44  #+end_src
    1.45  
    1.46  Here, the camera is created based on metadata on the eye-node and
    1.47 -attached to the nearest physical object with =(bind-sense)=
     1.48 +attached to the nearest physical object with =bind-sense=.
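
A short usage sketch, assuming =add-eye!= takes the creature and an
eye node and returns the new =Camera=, and that =eyes= returns the
creature's eye nodes as described below:

#+begin_src clojure
;; Hypothetical usage sketch: build one bound camera per eye node of
;; a creature.  The signatures of add-eye! and eyes are assumed from
;; the surrounding prose.
(defn all-eye-cameras
  "Create a bound camera for every eye node in the creature."
  [creature]
  (map (partial add-eye! creature) (eyes creature)))
#+end_src
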
    1.49  ** The Retina
    1.50  
    1.51  An eye is a surface (the retina) which contains many discrete sensors
    1.52 @@ -241,8 +235,8 @@
    1.53  
    1.54  ** Metadata Processing
    1.55  
    1.56 -=(retina-sensor-profile)= extracts a map from the eye-node in the same
    1.57 -format as the example maps above.  =(eye-dimensions)= finds the
    1.58 +=retina-sensor-profile= extracts a map from the eye-node in the same
    1.59 +format as the example maps above.  =eye-dimensions= finds the
    1.60  dimensions of the smallest image required to contain all the retinal
    1.61  sensor maps.
    1.62  
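The idea behind =eye-dimensions= can be sketched as a simple maximum
over the sensor maps. In the sketch, =load-sensor-image= is a
hypothetical helper standing in for however the retinal sensor images
are actually loaded.

#+begin_src clojure
;; A minimal sketch of the idea behind eye-dimensions: the smallest
;; image that can contain every retinal sensor map has the maximum
;; width and height found among them.  load-sensor-image is a
;; hypothetical helper, not part of the actual code.
(defn eye-dimensions-sketch [eye]
  (let [images (map load-sensor-image
                    (vals (retina-sensor-profile eye)))]
    [(apply max (map #(.getWidth %) images))
     (apply max (map #(.getHeight %) images))]))
#+end_src
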
    1.63 @@ -281,7 +275,7 @@
    1.64    "Return the children of the creature's \"eyes\" node.")
    1.65  #+end_src
    1.66  
    1.67 -Then, add the camera created by =(add-eye!)= to the simulation by
    1.68 +Then, add the camera created by =add-eye!= to the simulation by
    1.69  creating a new viewport.
    1.70  
    1.71  #+name: add-camera
    1.72 @@ -307,7 +301,7 @@
    1.73  appropriate pixels from the rendered image and weight them by each
    1.74  sensor's sensitivity. I have the option to do this processing in
     1.75  native code for a slight gain in speed. I could also do it on the GPU
    1.76 -for a massive gain in speed. =(vision-kernel)= generates a list of
    1.77 +for a massive gain in speed. =vision-kernel= generates a list of
    1.78  such continuation functions, one for each channel of the eye.
    1.79  
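The per-sensor arithmetic described above reduces to a weighted sum,
sketched here for a single sensor and a single channel; the
coordinate list and sensitivity value are assumed inputs, and only
the blue channel is read for brevity.

#+begin_src clojure
;; Illustrative reduction of the step described above: sum the pixels
;; covered by one sensor, weighting each by that sensor's
;; sensitivity.  Only the blue channel is read, for brevity.
(defn sensor-activation-sketch
  [#^java.awt.image.BufferedImage image sensor-coords sensitivity]
  (reduce +
          (map (fn [[x y]]
                 (* sensitivity (bit-and 0xFF (.getRGB image x y))))
               sensor-coords)))
#+end_src
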
    1.80  #+name: kernel
    1.81 @@ -384,22 +378,22 @@
    1.82       (add-camera! world (.getCamera world) no-op))))
    1.83  #+end_src
    1.84  
    1.85 -Note that since each of the functions generated by =(vision-kernel)=
    1.86 -shares the same =(register-eye!)= function, the eye will be registered
    1.87 +Note that since each of the functions generated by =vision-kernel=
    1.88 +shares the same =register-eye!= function, the eye will be registered
     1.89  only once, the first time any of the functions from the list returned
    1.90 -by =(vision-kernel)= is called.  Each of the functions returned by
    1.91 -=(vision-kernel)= also allows access to the =Viewport= through which
    1.92 +by =vision-kernel= is called.  Each of the functions returned by
    1.93 +=vision-kernel= also allows access to the =Viewport= through which
     1.94  it receives images.
    1.95  
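The register-only-once behaviour can be captured with a single shared
flag, as in this sketch; it illustrates the pattern rather than the
actual =vision-kernel= internals.

#+begin_src clojure
;; Pattern sketch: every returned function checks the same shared
;; flag, so register-eye! runs at most once, on the first call to any
;; of them.  The per-channel body is a placeholder.
(defn shared-registration-sketch
  [register-eye! channels]
  (let [registered? (atom false)]
    (vec
     (for [channel channels]
       (fn [world]
         (when-not @registered?
           (register-eye! world)
           (reset! registered? true))
         [channel world])))))
#+end_src
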
    1.96  The in-game display can be disrupted by all the viewports that the
    1.97 -functions greated by =(vision-kernel)= add. This doesn't affect the
     1.98 +functions created by =vision-kernel= add. This doesn't affect the
    1.99  simulation or the simulated senses, but can be annoying.
   1.100 -=(gen-fix-display)= restores the in-simulation display.
   1.101 +=gen-fix-display= restores the in-simulation display.
   1.102  
   1.103  ** The =vision!= function creates sensory probes.
   1.104  
   1.105  All the hard work has been done; all that remains is to apply
   1.106 -=(vision-kernel)= to each eye in the creature and gather the results
   1.107 +=vision-kernel= to each eye in the creature and gather the results
   1.108  into one list of functions.
   1.109  
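That gathering step amounts to a =concat= over the per-eye kernels,
as in this sketch; the argument order of =vision-kernel= is assumed
from the prose.

#+begin_src clojure
;; Sketch of the gathering step: apply vision-kernel to every eye and
;; flatten the per-eye lists into one list of sensory functions.
(defn vision-sketch [creature]
  (reduce concat
          (for [eye (eyes creature)]
            (vision-kernel creature eye))))
#+end_src
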
   1.110  #+name: main
   1.111 @@ -417,7 +411,7 @@
   1.112  ** Displaying visual data for debugging.
   1.113  # Visualization of Vision. Maybe less alliteration would be better.
   1.114  It's vital to have a visual representation for each sense. Here I use
   1.115 -=(view-sense)= to construct a function that will create a display for
   1.116 +=view-sense= to construct a function that will create a display for
   1.117  visual data.
   1.118  
   1.119  #+name: display
   1.120 @@ -677,8 +671,6 @@
   1.121    - As a neat bonus, this idea behind simulated vision also enables one
   1.122      to [[../../cortex/html/capture-video.html][capture live video feeds from jMonkeyEngine]].
   1.123    - Now that we have vision, it's time to tackle [[./hearing.org][hearing]].
   1.124 -
   1.125 -
   1.126  #+appendix
   1.127  
   1.128  * Headers
   1.129 @@ -731,6 +723,9 @@
   1.130    - [[http://hg.bortreb.com ][source-repository]]
   1.131   
   1.132  
   1.133 +* Next 
   1.134 +I find some [[./hearing.org][ears]] for the creature while exploring the guts of
   1.135 +jMonkeyEngine's sound system.
   1.136  
   1.137  * COMMENT Generate Source
   1.138  #+begin_src clojure :tangle ../src/cortex/vision.clj