cortex: comparison of org/vision.org @ 270:aa3641042958

minor formatting changes.

author:   Robert McIntyre <rlm@mit.edu>
date:     Tue, 14 Feb 2012 05:30:55 -0700
parents:  e57d8c52f12f
children: 12e6231eae8e c39b8b29a79e

comparing 268:6446e964810f with 270:aa3641042958; lines removed in 270 are
marked with a leading "-", and "..." marks a gap between compared hunks.

...

# parentheses. e.g. =add-eye!=, not =(add-eye!)=. The reason for this
# is that it is potentially easy to confuse the /function/ =f= with its
# /value/ at a particular point =(f x)=. Mathematicians have this
# problem with their notation; we don't need it in ours.

- #* Vision
* JMonkeyEngine natively supports multiple views of the same world.

Vision is one of the most important senses for humans, so I need to
build a simulated sense of vision for my AI. I will do this with
simulated eyes. Each eye can be independently moved and should see its
...

entirely in terms of =BufferedImage= inputs. Just compose that
=BufferedImage= algorithm with =(BufferedImage!)=. However, a vision
processing algorithm that is entirely hosted on the GPU does not have
to pay for this convenience.

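As a rough sketch of that composition (the names =wrap-image-algorithm=,
=image-fn=, =image-thunk=, and =edge-detect= below are illustrative
placeholders, not part of cortex.vision):

#+begin_src clojure
(defn wrap-image-algorithm
  "Compose a function written against BufferedImage with a thunk that
   produces one, yielding a new thunk that returns the processed image.
   Sketch only: image-thunk stands in for whatever the vision pipeline
   uses to deliver each rendered BufferedImage."
  [image-fn image-thunk]
  (fn [] (image-fn (image-thunk))))

;; usage sketch, with a hypothetical =edge-detect= image algorithm:
;; (def see-edges (wrap-image-algorithm edge-detect camera-image-thunk))
#+end_src
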
- * COMMENT asdasd
-
- (vision creature) will take an optional :skip argument which will
- inform the continuations in the scene processor to skip the given
- number of cycles; 0 means that no cycles will be skipped.
-
- (vision creature) will return [init-functions sensor-functions].
- The init-functions are each single-arg functions that take the
- world and register the cameras and must each be called before the
- corresponding sensor-functions. Each init-function returns the
- viewport for that eye which can be manipulated, saved, etc. Each
- sensor-function is a thunk and will return data in the same
- format as the tactile-sensor functions: the structure is
- [topology, sensor-data]. Internally, these sensor-functions
- maintain a reference to sensor-data which is periodically updated
- by the continuation function established by its init-function.
- They can be queried every cycle, but their information may not
- necessarily be different every cycle.
-
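To make that (removed) description concrete, here is a hedged usage
sketch; it assumes only the interface described above, and the name
=start-vision!= plus the =worm= and =world= bindings are illustrative,
not taken from cortex:

#+begin_src clojure
(defn start-vision!
  "Run each eye's init-function against the world, keeping the ViewPort
   each one returns, and hand back the sensor thunks for later polling.
   Sketch of the interface described above, not the cortex implementation."
  [creature world]
  (let [[init-fns sensor-fns] (vision creature)
        viewports (mapv #(% world) init-fns)]
    {:viewports viewports
     :sensors   sensor-fns}))

;; Poll once per simulation cycle; each thunk returns
;; [topology sensor-data], the same shape as the touch sensors:
;; (map #(%) (:sensors (start-vision! worm world)))
#+end_src
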
- # * Optical sensor arrays are described as images and stored as metadata.
* Optical sensor arrays are described with images and referenced with metadata
The vision pipeline described above handles the flow of rendered
images. Now, we need simulated eyes to serve as the source of these
images.

...

    (bind-sense target cam) cam))
#+end_src

Here, the camera is created based on metadata on the eye-node and
attached to the nearest physical object with =bind-sense=.
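
A hedged sketch of that step (=closest-physical-node= is an assumed
helper name and the fixed resolution is a stand-in for the metadata
lookup; only the two-argument =bind-sense= call matches the fragment
above):

#+begin_src clojure
(import 'com.jme3.renderer.Camera)

(defn sketch-add-eye
  "Attach a camera to the physical object nearest the given eye node.
   Resolution is hard-coded here; cortex reads it from the eye node's
   metadata."
  [creature eye-node]
  (let [cam    (Camera. 640 480)                           ; stand-in for metadata lookup
        target (closest-physical-node creature eye-node)]  ; assumed helper
    (bind-sense target cam)
    cam))
#+end_src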

** The Retina

An eye is a surface (the retina) which contains many discrete sensors
to detect light. These sensors can have different light-sensing
properties. In humans, each discrete sensor is sensitive to red,

...

  (:import com.jme3.scene.Node)
  (:import com.jme3.math.Vector3f)
  (:import java.io.File)
  (:import (com.aurellem.capture Capture RatchetTimer)))
#+end_src

* Source Listing
 - [[../src/cortex/vision.clj][cortex.vision]]
 - [[../src/cortex/test/vision.clj][cortex.test.vision]]
 - [[../src/cortex/video/magick2.clj][cortex.video.magick2]]
 - [[../assets/Models/subtitles/worm-vision-subtitles.blend][worm-vision-subtitles.blend]]