# HG changeset patch
# User Robert McIntyre
# Date 1329220802 25200
# Node ID 6446e964810fa620d22908c8aa448a25f1512714
# Parent  d487348c461cf7287e8dfc24240852ef111e3d8b
# Parent  bee5145ce4630d3c9883775b4977b30077b00360
merged changes.

diff -r d487348c461c -r 6446e964810f images/diagram_rendermanager.png
Binary file images/diagram_rendermanager.png has changed
diff -r d487348c461c -r 6446e964810f org/vision.org
--- a/org/vision.org	Tue Feb 14 04:52:23 2012 -0700
+++ b/org/vision.org	Tue Feb 14 05:00:02 2012 -0700
@@ -7,8 +7,15 @@
 #+INCLUDE: ../../aurellem/org/level-0.org
 #+babel: :mkdirp yes :noweb yes :exports both
 
-* Vision
-
+# SUGGEST: Call functions by their name, without
+# parentheses. e.g. =add-eye!=, not =(add-eye!)=. The reason for this
+# is that it is potentially easy to confuse the /function/ =f= with its
+# /value/ at a particular point =(f x)=. Mathematicians have this
+# problem with their notation; we don't need it in ours.
+
+#* Vision
+* JMonkeyEngine natively supports multiple views of the same world.
+
 Vision is one of the most important senses for humans, so I need to
 build a simulated sense of vision for my AI. I will do this with
 simulated eyes. Each eye can be independently moved and should see its
@@ -25,7 +32,9 @@
 #+caption: jMonkeyEngine supports multiple views to enable split-screen games, like GoldenEye, one of the first games to use them.
 [[../images/goldeneye-4-player.png]]
 
-* Brief Description of jMonkeyEngine's Rendering Pipeline
+** =ViewPorts=, =SceneProcessors=, and the =RenderManager=.
+# =ViewPorts= are cameras; the =RenderManager= takes snapshots each frame.
+#* A Brief Description of jMonkeyEngine's Rendering Pipeline
 
 jMonkeyEngine allows you to create a =ViewPort=, which represents a
 view of the simulated world. You can create as many of these as you
@@ -33,6 +42,10 @@
 =ViewPort=, rendering the scene in the GPU. For each =ViewPort= there
 is a =FrameBuffer= which represents the rendered image in the GPU.
 
+#+caption: =ViewPorts= are cameras in the world. During each frame, the =RenderManager= records a snapshot of what each view is currently seeing.
+#+ATTR_HTML: width="400"
+[[../images/diagram_rendermanager.png]]
+
 Each =ViewPort= can have any number of attached =SceneProcessor=
 objects, which are called every time a new frame is rendered. A
 =SceneProcessor= receives its =ViewPort's= =FrameBuffer= and can do
@@ -40,9 +53,10 @@
 specific operations on the rendered image. The =SceneProcessor= can
 also copy the GPU image data to RAM and process it with the CPU.
 
-* The Vision Pipeline
+** From Views to Vision
+# Appropriating Views for Vision.
 
-Each eye in the simulated creature needs it's own =ViewPort= so that
+Each eye in the simulated creature needs its own =ViewPort= so that
 it can see the world from its own perspective. To this =ViewPort=, I
 add a =SceneProcessor= that feeds the visual data to any arbitrary
 continuation function for further processing. That continuation
@@ -143,8 +157,8 @@
 They can be queried every cycle, but their information may not
 necessarily be different every cycle.
 
-* Physical Eyes
-
+# * Optical sensor arrays are described as images and stored as metadata.
+* Optical sensor arrays are described with images and referenced with metadata.
 The vision pipeline described above handles the flow of rendered
 images. Now, we need simulated eyes to serve as the source of these
 images.
@@ -278,7 +292,7 @@
 (apply max (map second dimensions))]))
 #+end_src
 
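+# NOTE: the following is an illustrative sketch, not part of
+# =cortex.vision=. It shows the core idea of describing a sensor
+# array with an image: collect the [x y] coordinate of every white
+# pixel in a layout image, one coordinate per simulated
+# photoreceptor. The name =white-coordinates-sketch= and the use of
+# =ImageIO= here are assumptions for the example only.
+
+#+begin_src clojure
+(import '(javax.imageio ImageIO)
+        '(java.io File))
+
+(defn white-coordinates-sketch
+  "Return the [x y] coordinates of every white pixel in the image at
+   =path=."
+  [path]
+  (let [image (ImageIO/read (File. path))]
+    (for [x (range (.getWidth image))
+          y (range (.getHeight image))
+          ;; getRGB packs a pixel as 0xAARRGGBB; opaque white is -1.
+          :when (= -1 (.getRGB image x y))]
+      [x y])))
+#+end_src
+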
-* Eye Creation
+* Importing and parsing descriptions of eyes.
 First off, get the children of the "eyes" empty node to find all the
 eyes the creature has.
 
 #+name: eye-node
@@ -405,7 +419,7 @@
 simulation or the simulated senses, but can be annoying.
 =(gen-fix-display)= restores the in-simulation display.
 
-** Vision!
+** The =vision!= function creates sensory probes.
 
 All the hard work has been done; all that remains is to apply
 =(vision-kernel)= to each eye in the creature and gather the results
@@ -423,8 +437,8 @@
 (vision-kernel creature eye))))
 #+end_src
 
-** Visualization of Vision
-
+** Displaying visual data for debugging.
+# Visualization of Vision. Maybe less alliteration would be better.
 It's vital to have a visual representation for each sense. Here I use
 =(view-sense)= to construct a function that will create a display for
 visual data.
@@ -448,11 +462,11 @@
 image))))
 #+end_src
 
-* Tests
-** Basic Test
+* Demonstrations
+** Demonstrating the vision pipeline.
 
 This is a basic test for the vision system. It only tests the
-vision-pipeline and does not deal with loadig eyes from a blender
+vision-pipeline and does not deal with loading eyes from a blender
 file. The code creates two videos of the same rotating cube from
 different angles.
 
@@ -512,8 +526,7 @@
 Creating multiple eyes like this can be used for stereoscopic vision
 simulation in a single creature or for simulating multiple creatures,
 each with their own sense of vision.
-
-** Adding Vision to the Worm
+** Demonstrating eye import and parsing.
 
 To the worm from the last post, I add a new node that describes its
 eyes.
@@ -683,6 +696,14 @@
 ffmpeg -r 25 -b 9001k -i out/%07d.png -vcodec libtheora worm-vision.ogg
 #+end_src
 
+* Onward!
+ - As a neat bonus, the idea behind simulated vision also enables one
+   to [[../../cortex/html/capture-video.html][capture live video feeds from jMonkeyEngine]].
+ - Now that we have vision, it's time to tackle [[./hearing.org][hearing]].
+
+
+#+appendix
+
 * Headers
 
 #+name: vision-header
@@ -725,10 +746,6 @@
 (:import (com.aurellem.capture Capture RatchetTimer)))
 #+end_src
 
-* Onward!
- - As a neat bonus, this idea behind simulated vision also enables one
-   to [[../../cortex/html/capture-video.html][capture live video feeds from jMonkeyEngine]].
- - Now that we have vision, it's time to tackle [[./hearing.org][hearing]].
 
 * Source Listing
 - [[../src/cortex/vision.clj][cortex.vision]]
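+# NOTE: for reference, a minimal sketch of the continuation-feeding
+# =SceneProcessor= described in /From Views to Vision/ above. This is
+# not the =cortex.vision= implementation (see the source listing);
+# =frame-grabber= is a hypothetical name, and the method set assumes
+# the jMonkeyEngine3 =SceneProcessor= interface.
+
+#+begin_src clojure
+(import '(com.jme3.post SceneProcessor)
+        '(com.jme3.util BufferUtils))
+
+(defn frame-grabber
+  "Return a SceneProcessor that copies each rendered frame from the
+   GPU into a ByteBuffer and hands it to =continuation=."
+  [continuation]
+  (let [renderer (atom nil)
+        buffer   (atom nil)]
+    (reify SceneProcessor
+      (initialize [_ render-manager view-port]
+        (let [camera (.getCamera view-port)]
+          (reset! renderer (.getRenderer render-manager))
+          ;; four bytes per pixel (RGBA)
+          (reset! buffer (BufferUtils/createByteBuffer
+                          (* 4 (.getWidth camera) (.getHeight camera))))))
+      (isInitialized [_] (boolean @renderer))
+      (reshape [_ view-port width height])
+      (preFrame [_ tpf])
+      (postQueue [_ render-queue])
+      (postFrame [_ frame-buffer]
+        (.clear @buffer)
+        ;; copy the finished image out of the GPU...
+        (.readFrameBuffer @renderer frame-buffer @buffer)
+        ;; ...and let the continuation process it on the CPU.
+        (continuation @buffer))
+      (cleanup [_]))))
+#+end_src
+
+# Attaching such a processor to an eye's view is then just
+# =(.addProcessor view-port (frame-grabber my-function))=.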