diff org/vision.org @ 306:7e7f8d6d9ec5

massive spellchecking
author Robert McIntyre <rlm@mit.edu>
date Sat, 18 Feb 2012 10:59:41 -0700
parents 23aadf376e9d
children 5d448182c807
     1.1 --- a/org/vision.org	Sat Feb 18 10:28:14 2012 -0700
     1.2 +++ b/org/vision.org	Sat Feb 18 10:59:41 2012 -0700
     1.3 @@ -11,11 +11,11 @@
     1.4   
     1.5  Vision is one of the most important senses for humans, so I need to
     1.6  build a simulated sense of vision for my AI. I will do this with
     1.7 -simulated eyes. Each eye can be independely moved and should see its
     1.8 +simulated eyes. Each eye can be independently moved and should see its
     1.9  own version of the world depending on where it is.
    1.10  
    1.11 -Making these simulated eyes a reality is simple bacause jMonkeyEngine
    1.12 -already conatains extensive support for multiple views of the same 3D
    1.13 +Making these simulated eyes a reality is simple because jMonkeyEngine
    1.14 +already contains extensive support for multiple views of the same 3D
     1.15  simulated world. The reason jMonkeyEngine has this support is that
     1.16  it is necessary to create games with split-screen
    1.17  views. Multiple views are also used to create efficient
    1.18 @@ -26,7 +26,7 @@
    1.19  [[../images/goldeneye-4-player.png]]
    1.20  
    1.21  ** =ViewPorts=, =SceneProcessors=, and the =RenderManager=. 
    1.22 -# =Viewports= are cameras; =RenderManger= takes snapshots each frame. 
     1.23 +# =ViewPorts= are cameras; =RenderManager= takes snapshots each frame. 
    1.24  #* A Brief Description of jMonkeyEngine's Rendering Pipeline
    1.25  
    1.26  jMonkeyEngine allows you to create a =ViewPort=, which represents a
    1.27 @@ -35,13 +35,13 @@
    1.28  =ViewPort=, rendering the scene in the GPU. For each =ViewPort= there
    1.29  is a =FrameBuffer= which represents the rendered image in the GPU.
    1.30  
    1.31 -#+caption: =ViewPorts= are cameras in the world. During each frame, the =Rendermanager= records a snapshot of what each view is currently seeing; these snapshots are =Framebuffer= objects.
    1.32 +#+caption: =ViewPorts= are cameras in the world. During each frame, the =RenderManager= records a snapshot of what each view is currently seeing; these snapshots are =FrameBuffer= objects.
    1.33  #+ATTR_HTML: width="400"
    1.34  [[../images/diagram_rendermanager2.png]]
    1.35  
    1.36  Each =ViewPort= can have any number of attached =SceneProcessor=
    1.37  objects, which are called every time a new frame is rendered. A
    1.38 -=SceneProcessor= recieves its =ViewPort's= =FrameBuffer= and can do
    1.39 +=SceneProcessor= receives its =ViewPort's= =FrameBuffer= and can do
    1.40  whatever it wants to the data.  Often this consists of invoking GPU
    1.41  specific operations on the rendered image.  The =SceneProcessor= can
    1.42  also copy the GPU image data to RAM and process it with the CPU.
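
For concreteness, here is a minimal sketch of how an extra =ViewPort= and
=SceneProcessor= fit together using the standard jMonkeyEngine interop
calls. The =add-eye-view= name and argument order are illustrative only
and are not part of this changeset.

#+begin_src clojure
;; Illustrative sketch, assuming the standard jME3 ViewPort API.
(defn add-eye-view
  "Create a ViewPort for `camera`, attach the scene under `root-node`,
   and register `processor` so it is called on every rendered frame."
  [#^com.jme3.renderer.RenderManager render-manager root-node camera processor]
  (let [view (.createMainView render-manager "eye-view" camera)]
    (.setClearFlags view true true true)  ; clear color, depth, and stencil
    (.attachScene view root-node)
    (.addProcessor view processor)        ; the processor sees each new frame
    view))
#+end_src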
    1.43 @@ -51,11 +51,11 @@
    1.44  
    1.45  Each eye in the simulated creature needs its own =ViewPort= so that
    1.46  it can see the world from its own perspective. To this =ViewPort=, I
    1.47 -add a =SceneProcessor= that feeds the visual data to any arbitray
    1.48 +add a =SceneProcessor= that feeds the visual data to any arbitrary
    1.49  continuation function for further processing.  That continuation
    1.50  function may perform both CPU and GPU operations on the data. To make
    1.51  this easy for the continuation function, the =SceneProcessor=
    1.52 -maintains appropriatly sized buffers in RAM to hold the data.  It does
    1.53 +maintains appropriately sized buffers in RAM to hold the data.  It does
    1.54  not do any copying from the GPU to the CPU itself because it is a slow
    1.55  operation.
    1.56  
    1.57 @@ -65,7 +65,7 @@
    1.58    "Create a SceneProcessor object which wraps a vision processing
    1.59    continuation function. The continuation is a function that takes 
    1.60    [#^Renderer r #^FrameBuffer fb #^ByteBuffer b #^BufferedImage bi],
    1.61 -  each of which has already been appropiately sized."
    1.62 +  each of which has already been appropriately sized."
    1.63    [continuation]
    1.64    (let [byte-buffer (atom nil)
    1.65  	renderer (atom nil)
    1.66 @@ -99,7 +99,7 @@
    1.67  =FrameBuffer= references the GPU image data, but the pixel data can
    1.68  not be used directly on the CPU.  The =ByteBuffer= and =BufferedImage=
    1.69  are initially "empty" but are sized to hold the data in the
    1.70 -=FrameBuffer=. I call transfering the GPU image data to the CPU
    1.71 +=FrameBuffer=. I call transferring the GPU image data to the CPU
    1.72  structures "mixing" the image data. I have provided three functions to
    1.73  do this mixing.
    1.74  
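
As a rough sketch of what this mixing involves (the function names below
are illustrative, not the three functions provided in this file), the GPU
image is first read into the =ByteBuffer= and then unpacked into the
=BufferedImage=, assuming the standard jME3 =Renderer= and =Screenshots=
APIs:

#+begin_src clojure
(import '(com.jme3.util Screenshots))

(defn read-pixels!
  "Copy the pixels of FrameBuffer `fb` from the GPU into the pre-sized
   ByteBuffer `bb` (the slow GPU to RAM transfer)."
  [#^com.jme3.renderer.Renderer r fb #^java.nio.ByteBuffer bb]
  (.clear bb)
  (.readFrameBuffer r fb bb)
  bb)

(defn bytes->image!
  "Unpack the raw bytes in `bb` into the pre-sized BufferedImage `bi`."
  [#^java.nio.ByteBuffer bb #^java.awt.image.BufferedImage bi]
  (Screenshots/convertScreenShot bb bi)
  bi)
#+end_src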
    1.75 @@ -129,7 +129,7 @@
    1.76  entirely in terms of =BufferedImage= inputs. Just compose that
    1.77  =BufferedImage= algorithm with =BufferedImage!=. However, a vision
    1.78  processing algorithm that is entirely hosted on the GPU does not have
    1.79 -to pay for this convienence.
    1.80 +to pay for this convenience.
    1.81  
    1.82  * Optical sensor arrays are described with images and referenced with metadata
    1.83  The vision pipeline described above handles the flow of rendered
    1.84 @@ -139,7 +139,7 @@
     1.85  An eye is described in blender in the same way as a joint: a
     1.86  zero-dimensional empty object with no geometry whose local coordinate
     1.87  system determines the orientation of the resulting eye. All eyes are
    1.88 -childern of a parent node named "eyes" just as all joints have a
    1.89 +children of a parent node named "eyes" just as all joints have a
    1.90  parent named "joints". An eye binds to the nearest physical object
    1.91  with =bind-sense=.
    1.92  
    1.93 @@ -184,8 +184,8 @@
    1.94  
    1.95  I want to be able to model any retinal configuration, so my eye-nodes
    1.96  in blender contain metadata pointing to images that describe the
    1.97 -percise position of the individual sensors using white pixels. The
    1.98 -meta-data also describes the percise sensitivity to light that the
    1.99 +precise position of the individual sensors using white pixels. The
   1.100 +meta-data also describes the precise sensitivity to light that the
   1.101  sensors described in the image have.  An eye can contain any number of
   1.102  these images. For example, the metadata for an eye might look like
   1.103  this:
   1.104 @@ -215,11 +215,11 @@
   1.105  relative sensitivity to the channels red, green, and blue. These
   1.106  sensitivity values are packed into an integer in the order =|_|R|G|B|=
   1.107  in 8-bit fields. The RGB values of a pixel in the image are added
   1.108 -together with these sensitivities as linear weights. Therfore,
   1.109 +together with these sensitivities as linear weights. Therefore,
   1.110  0xFF0000 means sensitive to red only while 0xFFFFFF means sensitive to
   1.111  all colors equally (gray).
   1.112  
   1.113 -For convienence I've defined a few symbols for the more common
   1.114 +For convenience I've defined a few symbols for the more common
   1.115  sensitivity values.
   1.116  
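
For intuition, here is a small sketch (illustrative only, not the
definitions in the =sensitivity= block that follows) of how such a packed
=|_|R|G|B|= value acts as linear weights over a pixel's RGB channels:

#+begin_src clojure
(defn sensor-activation
  "Weighted sum of a pixel's RGB values, using the three 8-bit fields of
   `sensitivity` as weights (0xFF0000 = red only, 0xFFFFFF = gray)."
  [sensitivity pixel]
  (let [channels (fn [x] [(bit-and 0xFF (bit-shift-right x 16))
                          (bit-and 0xFF (bit-shift-right x 8))
                          (bit-and 0xFF x)])]
    (reduce + (map * (channels sensitivity) (channels pixel)))))

;; (sensor-activation 0xFF0000 0x804020) ;=> 32640  (255 * 0x80, red only)
#+end_src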
   1.117  #+name: sensitivity
   1.118 @@ -383,10 +383,10 @@
   1.119  only once the first time any of the functions from the list returned
   1.120  by =vision-kernel= is called.  Each of the functions returned by
    1.121  =vision-kernel= also allows access to the =ViewPort= through which
   1.122 -it recieves images.
   1.123 +it receives images.
   1.124  
   1.125 -The in-game display can be disrupted by all the viewports that the
   1.126 -functions greated by =vision-kernel= add. This doesn't affect the
   1.127 +The in-game display can be disrupted by all the ViewPorts that the
   1.128 +functions generated by =vision-kernel= add. This doesn't affect the
   1.129  simulation or the simulated senses, but can be annoying.
   1.130  =gen-fix-display= restores the in-simulation display.
   1.131  
   1.132 @@ -572,7 +572,7 @@
   1.133                  (light-up-everything world)
   1.134                  (speed-up world)
   1.135                  (.setTimer world timer)
   1.136 -                (display-dialated-time world timer)
   1.137 +                (display-dilated-time world timer)
   1.138                  ;; add a view from the worm's perspective
   1.139                  (if record?
   1.140                    (Capture/captureVideo
   1.141 @@ -685,7 +685,7 @@
   1.142  (ns cortex.vision
   1.143    "Simulate the sense of vision in jMonkeyEngine3. Enables multiple
   1.144    eyes from different positions to observe the same world, and pass
   1.145 -  the observed data to any arbitray function. Automatically reads
   1.146 +  the observed data to any arbitrary function. Automatically reads
   1.147    eye-nodes from specially prepared blender files and instantiates
   1.148    them in the world as actual eyes."
   1.149    {:author "Robert McIntyre"}