# HG changeset patch
# User Dylan Holmes
# Date 1329191608 21600
# Node ID e57d8c52f12f195f9c690ad16b5212e5b0a57b49
# Parent f8227f6d4ac6fd10c89c4c0b3dd7273b38d57946
More tweaks to vision.

diff -r f8227f6d4ac6 -r e57d8c52f12f org/vision.org
--- a/org/vision.org Mon Feb 13 07:29:29 2012 -0600
+++ b/org/vision.org Mon Feb 13 21:53:28 2012 -0600
@@ -7,6 +7,12 @@
 #+INCLUDE: ../../aurellem/org/level-0.org
 #+babel: :mkdirp yes :noweb yes :exports both

+# SUGGEST: Call functions by their name, without
+# parentheses. e.g. =add-eye!=, not =(add-eye!)=. The reason for this
+# is that it is potentially easy to confuse the /function/ =f= with its
+# /value/ at a particular point =(f x)=. Mathematicians have this
+# problem with their notation; we don't need it in ours.
+
 #* Vision

 * JMonkeyEngine natively supports multiple views of the same world.
@@ -37,7 +43,7 @@
 is a =FrameBuffer= which represents the rendered image in the GPU.

 #+caption: =ViewPorts= are cameras in the world. During each frame, the =Rendermanager= records a snapshot of what each view is currently seeing.
-#+attr_html:width="400"
+#+ATTR_HTML: width="400"
 [[../images/diagram_rendermanager.png]]

 Each =ViewPort= can have any number of attached =SceneProcessor=
@@ -151,8 +157,8 @@
 They can be queried every cycle, but their information may not
 necessairly be different every cycle.
-* Optical sensor arrays are described as images and stored as metadata.
-
+# * Optical sensor arrays are described as images and stored as metadata.
+* Optical sensor arrays are described with images and referenced with metadata
 The vision pipeline described above handles the flow of rendered
 images. Now, we need simulated eyes to serve as the source of these
 images.

@@ -286,7 +292,7 @@
       (apply max (map second dimensions))]))
 #+end_src

-* Putting it all together: Importing and parsing descriptions of eyes.
+* Importing and parsing descriptions of eyes.
 First off, get the children of the "eyes" empty node to find all the
 eyes the creature has.
 #+name: eye-node
@@ -413,7 +419,7 @@
 simulation or the simulated senses, but can be annoying.
 =(gen-fix-display)= restores the in-simulation display.

-** Vision!
+** The =vision!= function creates sensory probes.

 All the hard work has been done; all that remains is to apply
 =(vision-kernel)= to each eye in the creature and gather the results
@@ -431,8 +437,8 @@
        (vision-kernel creature eye))))
 #+end_src

-** Visualization of Vision
-
+** Displaying visual data for debugging.
+# Visualization of Vision. Maybe less alliteration would be better.
 It's vital to have a visual representation for each sense. Here I use
 =(view-sense)= to construct a function that will create a display for
 visual data.
@@ -690,6 +696,14 @@
 ffmpeg -r 25 -b 9001k -i out/%07d.png -vcodec libtheora worm-vision.ogg
 #+end_src

+* Onward!
+ - As a neat bonus, this idea behind simulated vision also enables one
+   to [[../../cortex/html/capture-video.html][capture live video feeds from jMonkeyEngine]].
+ - Now that we have vision, it's time to tackle [[./hearing.org][hearing]].
+
+
+#+appendix
+
 * Headers

 #+name: vision-header
@@ -732,10 +746,6 @@
   (:import (com.aurellem.capture Capture RatchetTimer)))
 #+end_src

-* Onward!
- - As a neat bonus, this idea behind simulated vision also enables one
-   to [[../../cortex/html/capture-video.html][capture live video feeds from jMonkeyEngine]].
- - Now that we have vision, it's time to tackle [[./hearing.org][hearing]].

 * Source Listing
 - [[../src/cortex/vision.clj][cortex.vision]]