#+title: Helper Functions / Motivations
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: sensory utilities
#+keywords: simulation, jMonkeyEngine3, clojure, simulated senses
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org

* Blender Utilities
In Blender, any object can be assigned an arbitrary number of
key-value pairs called "Custom Properties". These properties are
accessible in jMonkeyEngine when Blender files are imported with the
=BlenderLoader=. =meta-data= extracts these properties.

#+name: blender-1
#+begin_src clojure
(defn meta-data
  "Get the meta-data for a node created with blender."
  [blender-node key]
  (if-let [data (.getUserData blender-node "properties")]
    (.findValue data key) nil))
#+end_src
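
For example, reading a custom property named =eye= off of a node
loaded from a Blender file might look like this (a REPL sketch; the
node name and property are hypothetical, and this block is not part
of the tangled source):

#+begin_src clojure
(comment
  ;; creature is assumed to be a Node loaded via the BlenderLoader.
  (let [eye-node (.getChild creature "eyes")]
    (meta-data eye-node "eye"))
  ;; => the property's value, or nil if the node carries no
  ;;    "properties" user-data or lacks that key.
  )
#+end_src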

Blender uses a different coordinate system than jMonkeyEngine, so it
is useful to be able to convert between the two. These conversions
only come into play when the meta-data of a node refers to a vector
in the blender coordinate system.

#+name: blender-2
#+begin_src clojure
(defn jme-to-blender
  "Convert from JME coordinates to Blender coordinates"
  [#^Vector3f in]
  (Vector3f. (.getX in) (- (.getZ in)) (.getY in)))

(defn blender-to-jme
  "Convert from Blender coordinates to JME coordinates"
  [#^Vector3f in]
  (Vector3f. (.getX in) (.getZ in) (- (.getY in))))
#+end_src
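
The two conversions are inverses of one another, which permits a quick
sanity check at the REPL (a sketch; not part of the tangled source):

#+begin_src clojure
(comment
  ;; converting to Blender coordinates and back recovers the original
  ;; vector exactly.
  (let [v (Vector3f. 1 2 3)]
    (.equals v (blender-to-jme (jme-to-blender v))))
  ;; => true
  )
#+end_src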

* Sense Topology

Human beings are three-dimensional objects, and the nerves that
transmit data from our various sense organs to our brain are
essentially one-dimensional. This leaves up to two dimensions in which
our sensory information may flow. For example, imagine your skin: it
is a two-dimensional surface around a three-dimensional object (your
body). It has discrete touch sensors embedded at various points, and
the density of these sensors corresponds to the sensitivity of that
region of skin. Each touch sensor connects to a nerve, all of which
eventually are bundled together as they travel up the spinal cord to
the brain. Intersect the spinal nerves with a guillotining plane and
you will see all of the sensory data of the skin revealed in a roughly
circular two-dimensional image which is the cross section of the
spinal cord. Points that are close together in this image represent
touch sensors that are /probably/ close together on the skin, although
there is of course some cutting and rearrangement that has to be done
to transfer the complicated surface of the skin onto a two-dimensional
image.

Most human senses consist of many discrete sensors of various
properties distributed along a surface at various densities. For
skin, it is Pacinian corpuscles, Meissner's corpuscles, Merkel's
disks, and Ruffini's endings, which detect pressure and vibration of
various intensities. For ears, it is the stereocilia distributed
along the basilar membrane inside the cochlea; each one is sensitive
to a slightly different frequency of sound. For eyes, it is rods
and cones distributed along the surface of the retina. In each case,
we can describe the sense with a surface and a distribution of sensors
along that surface.

** UV-maps

Blender and jMonkeyEngine already have support for exactly this sort
of data structure because it is used to "skin" models for games. It is
called [[http://wiki.blender.org/index.php/Doc:2.6/Manual/Textures/Mapping/UV][UV-mapping]]. The three-dimensional surface of a model is cut
and smooshed until it fits on a two-dimensional image. You paint
whatever you want on that image, and when the three-dimensional shape
is rendered in a game the smooshing and cutting is reversed and the
image appears on the three-dimensional object.

To make a sense, interpret the UV-image as describing the distribution
of that sense's sensors. To get different types of sensors, you can
either use a different color for each type of sensor, or use multiple
UV-maps, each labeled with that sensor type. I generally use a white
pixel to mean the presence of a sensor and a black pixel to mean the
absence of a sensor, and use one UV-map for each sensor-type within a
given sense. The paths to the images are not stored as the actual
UV-map of the blender object but are instead referenced in the
meta-data of the node.

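As a concrete sketch of this convention (the geometry and image path
here are hypothetical, and the block is not tangled):

#+begin_src clojure
(comment
  ;; the finger geometry's meta-data names the touch UV-image, which
  ;; lives under the assets directory rather than in the UV-map itself.
  (meta-data finger-geometry "touch")
  ;; => "Models/fingers/touch-map.png"
  )
#+end_src
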
#+CAPTION: The UV-map for an elongated icosphere. The white dots each represent a touch sensor. They are dense in the regions that describe the tip of the finger, and less dense along the dorsal side of the finger opposite the tip.
#+ATTR_HTML: width="300"
[[../images/finger-UV.png]]

#+CAPTION: Ventral side of the UV-mapped finger. Notice the density of touch sensors at the tip.
#+ATTR_HTML: width="300"
[[../images/finger-1.png]]

#+CAPTION: Side view of the UV-mapped finger.
#+ATTR_HTML: width="300"
[[../images/finger-2.png]]

#+CAPTION: Head-on view of the finger. In both the head-on and side views you can see the divide where the touch-sensors transition from high density to low density.
#+ATTR_HTML: width="300"
[[../images/finger-3.png]]

The following code loads images and gets the locations of the white
pixels so that they can be used to create senses. =load-image= finds
images using jMonkeyEngine's asset-manager, so the image path is
expected to be relative to the =assets= directory. Thanks to Dylan
for the beautiful version of =filter-pixels=.

#+name: topology-1
#+begin_src clojure
(defn load-image
  "Load an image as a BufferedImage using the asset-manager system."
  [asset-relative-path]
  (ImageToAwt/convert
   (.getImage (.loadTexture (asset-manager) asset-relative-path))
   false false 0))

(def white 0xFFFFFF)

(defn white? [rgb]
  (= (bit-and white rgb) white))

(defn filter-pixels
  "List the coordinates of all pixels matching pred, within the bounds
   provided. If bounds are not specified then the entire image is
   searched.
   bounds -> [x0 y0 width height]"
  {:author "Dylan Holmes"}
  ([pred #^BufferedImage image]
     (filter-pixels pred image [0 0 (.getWidth image) (.getHeight image)]))
  ([pred #^BufferedImage image [x0 y0 width height]]
     ((fn accumulate [x y matches]
        (cond
         (>= y (+ height y0)) matches
         ;; row finished; restart the next row at x0, not 0, so that
         ;; the search stays within the given bounds.
         (>= x (+ width x0)) (recur x0 (inc y) matches)
         (pred (.getRGB image x y))
         (recur (inc x) y (conj matches [x y]))
         :else (recur (inc x) y matches)))
      x0 y0 [])))

(defn white-coordinates
  "Coordinates of all the white pixels in a subset of the image."
  ([#^BufferedImage image bounds]
     (filter-pixels white? image bounds))
  ([#^BufferedImage image]
     (filter-pixels white? image)))
#+end_src

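A typical use is to pull sensor coordinates straight out of a UV-image
(a REPL sketch; the image path is hypothetical and the block is not
tangled):

#+begin_src clojure
(comment
  ;; coordinates of every touch sensor in a (hypothetical) UV-image,
  ;; loaded relative to the assets directory.
  (white-coordinates (load-image "Models/fingers/touch-map.png"))
  ;; => [[37 12] [38 12] ...]
  )
#+end_src
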
** Topology

Information from the senses is transmitted to the brain via bundles of
axons, whether it be the optic nerve or the spinal cord. While these
bundles more or less preserve the overall topology of a sense's
two-dimensional surface, they do not preserve the precise Euclidean
distances between every sensor. =collapse= is here to smoosh the
sensors described by a UV-map into a contiguous region that still
preserves the topology of the original sense.

#+name: topology-2
#+begin_src clojure
(in-ns 'cortex.sense)

(defn average [coll]
  (/ (reduce + coll) (count coll)))

(defn- collapse-1d
  "One dimensional helper for collapse."
  [center line]
  (let [length (count line)
        num-above (count (filter (partial < center) line))
        num-below (- length num-above)]
    (range (- center num-below)
           (+ center num-above))))

(defn collapse
  "Take a sequence of pairs of integers and collapse them into a
   contiguous bitmap with no \"holes\" or negative entries, as close to
   the origin [0 0] as the shape permits. The order of the points is
   preserved.

   eg.
   (collapse [[-5 5]  [5 5]    -->  [[0 1] [1 1]
              [-5 -5] [5 -5]]) -->   [0 0] [1 0]]

   (collapse [[-5 5]  [-5 -5]  -->  [[0 1] [0 0]
              [ 5 -5] [ 5 5]]) -->   [1 0] [1 1]]"
  [points]
  (if (empty? points) []
      (let
          [num-points (count points)
           ;; the center of the points: [mean-x mean-y]
           center (vector
                   (int (average (map first points)))
                   (int (average (map second points))))
           flattened
           (reduce
            concat
            (map
             (fn [column]
               (map vector
                    (map first column)
                    (collapse-1d (second center)
                                 (map second column))))
             (partition-by first (sort-by first points))))
           squeezed
           (reduce
            concat
            (map
             (fn [row]
               (map vector
                    (collapse-1d (first center)
                                 (map first row))
                    (map second row)))
             (partition-by second (sort-by second flattened))))
           relocated
           (let [min-x (apply min (map first squeezed))
                 min-y (apply min (map second squeezed))]
             (map (fn [[x y]]
                    [(- x min-x)
                     (- y min-y)])
                  squeezed))
           point-correspondence
           (zipmap (sort points) (sort relocated))

           original-order
           (vec (map point-correspondence points))]
        original-order)))
#+end_src
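
The docstring examples can be checked directly at the REPL (a sketch
only; this block is not part of the tangled source):

#+begin_src clojure
(comment
  (collapse [[-5 5] [5 5] [-5 -5] [5 -5]])
  ;; => [[0 1] [1 1] [0 0] [1 0]]
  ;; the four corners are squeezed into the unit square at the origin,
  ;; with the original order of the points preserved.
  )
#+end_src
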
* Viewing Sense Data

It's vital to /see/ the sense data to make sure that everything is
behaving as it should. =view-sense= and its helper =view-image=
are here so that each sense can define its own way of turning
sense-data into pictures, while the actual rendering of those pictures
stays in one central place. =points->image= helps senses generate a
base image onto which they can overlay actual sense data.

#+name: view-senses
#+begin_src clojure
(in-ns 'cortex.sense)

(defn view-image
  "Initializes a JPanel on which you may draw a BufferedImage.
   Returns a function that accepts a BufferedImage and draws it to the
   JPanel. If given a directory it will save the images as png files
   starting at 0000000.png and incrementing from there."
  ([#^File save]
     (let [idx (atom -1)
           image
           (atom
            (BufferedImage. 1 1 BufferedImage/TYPE_4BYTE_ABGR))
           panel
           (proxy [JPanel] []
             (paint
               [graphics]
               (proxy-super paintComponent graphics)
               (.drawImage graphics @image 0 0 nil)))
           frame (JFrame. "Display Image")]
       (SwingUtilities/invokeLater
        (fn []
          (doto frame
            (-> (.getContentPane) (.add panel))
            (.pack)
            (.setLocationRelativeTo nil)
            (.setResizable true)
            (.setVisible true))))
       (fn [#^BufferedImage i]
         (reset! image i)
         (.setSize frame (+ 8 (.getWidth i)) (+ 28 (.getHeight i)))
         (.repaint panel 0 0 (.getWidth i) (.getHeight i))
         (if save
           (ImageIO/write
            i "png"
            (File. save (format "%07d.png" (swap! idx inc))))))))
  ([] (view-image nil)))

(defn view-sense
  "Take a kernel that produces a BufferedImage from some sense data
   and return a function which takes a list of sense data, uses the
   kernel to convert to images, and displays those images, each in
   its own JFrame."
  [sense-display-kernel]
  (let [windows (atom [])]
    (fn this
      ([data]
         (this data nil))
      ([data save-to]
         (if (> (count data) (count @windows))
           (reset!
            windows
            (doall
             (map
              (fn [idx]
                (if save-to
                  (let [dir (File. save-to (str idx))]
                    (.mkdir dir)
                    (view-image dir))
                  (view-image))) (range (count data))))))
         (dorun
          (map
           (fn [display datum]
             (display (sense-display-kernel datum)))
           @windows data))))))

(defn points->image
  "Take a collection of points and visualize it as a BufferedImage."
  [points]
  (if (empty? points)
    (BufferedImage. 1 1 BufferedImage/TYPE_BYTE_BINARY)
    (let [xs (vec (map first points))
          ys (vec (map second points))
          x0 (apply min xs)
          y0 (apply min ys)
          width (- (apply max xs) x0)
          height (- (apply max ys) y0)
          image (BufferedImage. (inc width) (inc height)
                                BufferedImage/TYPE_INT_RGB)]
      ;; paint the background red, then mark each point in white.
      (dorun
       (for [x (range (.getWidth image))
             y (range (.getHeight image))]
         (.setRGB image x y 0xFF0000)))
      (dorun
       (for [index (range (count points))]
         (.setRGB image (- (xs index) x0) (- (ys index) y0) -1)))
      image)))

(defn gray
  "Create a gray RGB pixel with R, G, and B set to num. num must be
   between 0 and 255."
  [num]
  (+ num
     (bit-shift-left num 8)
     (bit-shift-left num 16)))
#+end_src

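Putting the pieces together, a sense's sensor layout can be inspected
end-to-end. This is a REPL sketch only; the image path is hypothetical
and the block is not tangled:

#+begin_src clojure
(comment
  ;; load a (hypothetical) touch UV-image, collapse its white pixels
  ;; into a contiguous bitmap, and display the result in a JFrame.
  (let [sensors (white-coordinates
                 (load-image "Models/fingers/touch-map.png"))
        display (view-image)]
    (display (points->image (collapse sensors))))
  ;; (gray 128) => 0x808080, handy for shading sensor activation
  ;; levels on such a base image.
  )
#+end_src
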
* Building a Sense from Nodes
My method for defining senses in Blender is the following:

Senses like vision and hearing are localized to a single point
and follow a particular object around. For these:

- Create a single top-level empty node whose name is the name of the sense.
- Add empty nodes which each contain meta-data relevant
  to the sense, including a UV-map describing the number/distribution
  of sensors if applicable.
- Make each empty-node the child of the top-level
  node. =sense-nodes= below generates functions to find these children.

For touch, store the path to the UV-map which describes touch-sensors in the
meta-data of the object to which that map applies.

Each sense provides code that analyzes the Node structure of the
creature and creates sense-functions. Each sense also modifies the
Node structure if necessary.

Empty nodes created in Blender have no appearance or physical presence
in jMonkeyEngine, but they do appear in the scene graph. Empty nodes
that represent a sense which "follows" another geometry (like eyes and
ears) follow the closest physical object. =closest-node= finds this
closest object given the Creature and a particular empty node.

#+name: node-1
#+begin_src clojure
(defn sense-nodes
  "For some senses there is a special empty blender node whose
   children are considered markers for an instance of that sense. This
   function generates functions to find those children, given the name
   of the special parent node."
  [parent-name]
  (fn [#^Node creature]
    (if-let [sense-node (.getChild creature parent-name)]
      (seq (.getChildren sense-node))
      (do ;;(println-repl "could not find" parent-name "node")
          []))))

(defn closest-node
  "Return the physical node in creature which is closest to the given
   node."
  [#^Node creature #^Node empty]
  (loop [radius (float 0.01)]
    (let [results (CollisionResults.)]
      (.collideWith
       creature
       (BoundingBox. (.getWorldTranslation empty)
                     radius radius radius)
       results)
      (if-let [target (first results)]
        (.getGeometry target)
        (recur (float (* 2 radius)))))))

(defn world-to-local
  "Convert the world coordinates into coordinates relative to the
   object (i.e. local coordinates), taking into account the rotation
   of object."
  [#^Spatial object world-coordinate]
  (.worldToLocal object world-coordinate nil))

(defn local-to-world
  "Convert the local coordinates into world relative coordinates"
  [#^Spatial object local-coordinate]
  (.localToWorld object local-coordinate nil))
#+end_src

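For instance, an ear sense might gather its markers and pair each one
with the geometry it follows (a sketch; =creature= is assumed to be an
already-loaded creature =Node=, and the block is not tangled):

#+begin_src clojure
(comment
  ;; find all children of a (hypothetical) top-level "ears" node, then
  ;; pair each marker with the closest physical geometry.
  (let [ears (sense-nodes "ears")]
    (map (fn [marker] [marker (closest-node creature marker)])
         (ears creature))))
#+end_src
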
** Sense Binding

=bind-sense= binds either a Camera or a Listener object to any
object so that it will follow that object no matter how it
moves. It is used to create both eyes and ears.

#+name: node-2
#+begin_src clojure
(defn bind-sense
  "Bind the sense to the Spatial such that it will maintain its
   current position relative to the Spatial no matter how the spatial
   moves. 'sense can be either a Camera or Listener object."
  [#^Spatial obj sense]
  (let [sense-offset (.subtract (.getLocation sense)
                                (.getWorldTranslation obj))
        initial-sense-rotation (Quaternion. (.getRotation sense))
        base-anti-rotation (.inverse (.getWorldRotation obj))]
    (.addControl
     obj
     (proxy [AbstractControl] []
       (controlUpdate [tpf]
         (let [total-rotation
               (.mult base-anti-rotation (.getWorldRotation obj))]
           (.setLocation
            sense
            (.add
             (.mult total-rotation sense-offset)
             (.getWorldTranslation obj)))
           (.setRotation
            sense
            (.mult total-rotation initial-sense-rotation))))
       (controlRender [_ _])))))
#+end_src

Here is some example code which shows how a camera bound to a blue box
with =bind-sense= moves as the box is buffeted by white cannonballs.

#+name: test
#+begin_src clojure
(defn test-bind-sense
  "Show a camera that stays in the same relative position to a blue
   cube."
  ([] (test-bind-sense false))
  ([record?]
     (let [eye-pos (Vector3f. 0 30 0)
           rock (box 1 1 1 :color ColorRGBA/Blue
                     :position (Vector3f. 0 10 0)
                     :mass 30)
           table (box 3 1 10 :color ColorRGBA/Gray :mass 0
                      :position (Vector3f. 0 -3 0))]
       (world
        (nodify [rock table])
        standard-debug-controls
        (fn init [world]
          (let [cam (doto (.clone (.getCamera world))
                      (.setLocation eye-pos)
                      (.lookAt Vector3f/ZERO
                               Vector3f/UNIT_X))]
            (bind-sense rock cam)
            (.setTimer world (RatchetTimer. 60))
            (if record?
              (Capture/captureVideo
               world (File. "/home/r/proj/cortex/render/bind-sense0")))
            (add-camera!
             world cam
             (comp (view-image
                    (if record?
                      (File. "/home/r/proj/cortex/render/bind-sense1")))
                   BufferedImage!))
            (add-camera! world (.getCamera world) no-op)))
        no-op))))
#+end_src

#+begin_html
<video controls="controls" width="755">
  <source src="../video/bind-sense.ogg" type="video/ogg"
          preload="none" poster="../images/aurellem-1280x480.png" />
</video>
<br> <a href="http://youtu.be/DvoN2wWQ_6o"> YouTube </a>
#+end_html

With this, eyes are easy --- you just bind the camera closer to the
desired object, and set it to look outward instead of inward as it
does in the video.

(NB: the video was created with the following commands.)

*** Combine Frames with ImageMagick
#+name: magick
#+begin_src clojure :results silent
(ns cortex.video.magick
  (:import java.io.File)
  (:use clojure.java.shell))

(defn combine-images []
  (let
      [idx (atom -1)
       left (rest
             (sort
              (file-seq (File. "/home/r/proj/cortex/render/bind-sense0/"))))
       right (rest
              (sort
               (file-seq
                (File. "/home/r/proj/cortex/render/bind-sense1/"))))
       sub (rest
            (sort
             (file-seq
              (File. "/home/r/proj/cortex/render/bind-senseB/"))))
       sub* (concat sub (repeat 1000 (last sub)))]
    (dorun
     (map
      (fn [im-1 im-2 sub]
        (sh "convert" (.getCanonicalPath im-1)
            (.getCanonicalPath im-2) "+append"
            (.getCanonicalPath sub) "-append"
            (.getCanonicalPath
             (File. "/home/r/proj/cortex/render/bind-sense/"
                    (format "%07d.png" (swap! idx inc))))))
      left right sub*))))
#+end_src

*** Encode Frames with ffmpeg

#+begin_src sh :results silent
cd /home/r/proj/cortex/render/
ffmpeg -r 30 -i bind-sense/%07d.png -b:v 9000k -vcodec libtheora bind-sense.ogg
#+end_src

* Headers
#+name: sense-header
#+begin_src clojure
(ns cortex.sense
  "Here are functions useful in the construction of two or more
   sensors/effectors."
  {:author "Robert McIntyre"}
  (:use (cortex world util))
  (:import ij.process.ImageProcessor)
  (:import jme3tools.converters.ImageToAwt)
  (:import java.awt.image.BufferedImage)
  (:import com.jme3.collision.CollisionResults)
  (:import com.jme3.bounding.BoundingBox)
  (:import (com.jme3.scene Node Spatial))
  (:import com.jme3.scene.control.AbstractControl)
  (:import (com.jme3.math Quaternion Vector3f))
  (:import javax.imageio.ImageIO)
  (:import java.io.File)
  (:import (javax.swing JPanel JFrame SwingUtilities)))
#+end_src

#+name: test-header
#+begin_src clojure
(ns cortex.test.sense
  (:use (cortex world util sense vision))
  (:import
   java.io.File
   (com.jme3.math Vector3f ColorRGBA)
   (com.aurellem.capture RatchetTimer Capture)))
#+end_src

* Source Listing
- [[../src/cortex/sense.clj][cortex.sense]]
- [[../src/cortex/test/sense.clj][cortex.test.sense]]
- [[../assets/Models/subtitles/subtitles.blend][subtitles.blend]]
- [[../assets/Models/subtitles/Lake_CraterLake03_sm.hdr][subtitles reflection map]]
#+html: <ul> <li> <a href="../org/sense.org">This org file</a> </li> </ul>
- [[http://hg.bortreb.com][source-repository]]

* Next
Now that some of the preliminaries are out of the way, in the [[./body.org][next
post]] I'll create a simulated body.


* COMMENT generate source
#+begin_src clojure :tangle ../src/cortex/sense.clj
<<sense-header>>
<<blender-1>>
<<blender-2>>
<<topology-1>>
<<topology-2>>
<<node-1>>
<<node-2>>
<<view-senses>>
#+end_src

#+begin_src clojure :tangle ../src/cortex/test/sense.clj
<<test-header>>
<<test>>
#+end_src

#+begin_src clojure :tangle ../src/cortex/video/magick.clj
<<magick>>
#+end_src