#+title: Helper Functions / Motivations
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: sensory utilities
#+keywords: simulation, jMonkeyEngine3, clojure, simulated senses
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org

* Blender Utilities
In blender, any object can be assigned an arbitrary number of
key-value pairs which are called "Custom Properties". These are
accessible in jMonkeyEngine when blender files are imported with the
=BlenderLoader=. =meta-data= extracts these properties.

#+name: blender-1
#+begin_src clojure
(in-ns 'cortex.sense)
(defn meta-data
  "Get the meta-data for a node created with blender."
  [blender-node key]
  (if-let [data (.getUserData blender-node "properties")]
    ;; this part is to accommodate weird blender properties
    ;; as well as sensible clojure maps.
    (.findValue data key)
    (.getUserData blender-node key)))

#+end_src

#+results: blender-1
: #'cortex.sense/meta-data

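For example, if a blender node carries a custom property that names a
UV-map image, =meta-data= retrieves it (a hedged sketch; the node and
property name here are hypothetical):

#+begin_src clojure
;; usage sketch -- `finger` is a hypothetical Node imported from a
;; blender file whose custom properties include a "touch" entry.
(comment
  (meta-data finger "touch")
  ;; => the value stored in blender, e.g. an asset-relative image path
  )
#+end_src
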
Blender uses a different coordinate system than jMonkeyEngine, so it
is useful to be able to convert between the two. These conversions
only come into play when the meta-data of a node refers to a vector
in the blender coordinate system.

#+name: blender-2
#+begin_src clojure
(defn jme-to-blender
  "Convert from JME coordinates to Blender coordinates"
  [#^Vector3f in]
  (Vector3f. (.getX in) (- (.getZ in)) (.getY in)))

(defn blender-to-jme
  "Convert from Blender coordinates to JME coordinates"
  [#^Vector3f in]
  (Vector3f. (.getX in) (.getZ in) (- (.getY in))))
#+end_src

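As a quick sanity check (not part of the tangled source), the two
conversions should be inverses of each other:

#+begin_src clojure
;; REPL sketch: converting to blender coordinates and back
;; should return the original vector.
(comment
  (blender-to-jme (jme-to-blender (Vector3f. 1 2 3)))
  ;; => a Vector3f equal to (1.0, 2.0, 3.0)
  )
#+end_src
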
* Sense Topology

Human beings are three-dimensional objects, and the nerves that
transmit data from our various sense organs to our brain are
essentially one-dimensional. This leaves up to two dimensions in which
our sensory information may flow. For example, imagine your skin: it
is a two-dimensional surface around a three-dimensional object (your
body). It has discrete touch sensors embedded at various points, and
the density of these sensors corresponds to the sensitivity of that
region of skin. Each touch sensor connects to a nerve, all of which
eventually are bundled together as they travel up the spinal cord to
the brain. Intersect the spinal nerves with a guillotining plane and
you will see all of the sensory data of the skin revealed in a roughly
circular two-dimensional image which is the cross section of the
spinal cord. Points on this image that are close together in this
circle represent touch sensors that are /probably/ close together on
the skin, although there is of course some cutting and rearrangement
that has to be done to transfer the complicated surface of the skin
onto a two-dimensional image.

Most human senses consist of many discrete sensors of various
properties distributed along a surface at various densities. For
skin, it is Pacinian corpuscles, Meissner's corpuscles, Merkel's
disks, and Ruffini's endings, which detect pressure and vibration of
various intensities. For ears, it is the stereocilia distributed
along the basilar membrane inside the cochlea; each one is sensitive
to a slightly different frequency of sound. For eyes, it is rods
and cones distributed along the surface of the retina. In each case,
we can describe the sense with a surface and a distribution of sensors
along that surface.

** UV-maps

Blender and jMonkeyEngine already have support for exactly this sort
of data structure because it is used to "skin" models for games. It is
called [[http://wiki.blender.org/index.php/Doc:2.6/Manual/Textures/Mapping/UV][UV-mapping]]. The three-dimensional surface of a model is cut
and smooshed until it fits on a two-dimensional image. You paint
whatever you want on that image, and when the three-dimensional shape
is rendered in a game the smooshing and cutting is reversed and the
image appears on the three-dimensional object.

To make a sense, interpret the UV-image as describing the distribution
of that sense's sensors. To get different types of sensors, you can
either use a different color for each type of sensor, or use multiple
UV-maps, each labeled with that sensor type. I generally use a white
pixel to mean the presence of a sensor and a black pixel to mean the
absence of a sensor, and use one UV-map for each sensor-type within a
given sense. The paths to the images are not stored as the actual
UV-map of the blender object but are instead referenced in the
meta-data of the node.

#+CAPTION: The UV-map for an elongated icosphere. The white dots each represent a touch sensor. They are dense in the regions that describe the tip of the finger, and less dense along the dorsal side of the finger opposite the tip.
#+ATTR_HTML: width="300"
[[../images/finger-UV.png]]

#+CAPTION: Ventral side of the UV-mapped finger. Notice the density of touch sensors at the tip.
#+ATTR_HTML: width="300"
[[../images/finger-1.png]]

#+CAPTION: Side view of the UV-mapped finger.
#+ATTR_HTML: width="300"
[[../images/finger-2.png]]

#+CAPTION: Head-on view of the finger. In both the head-on and side views you can see the divide where the touch-sensors transition from high density to low density.
#+ATTR_HTML: width="300"
[[../images/finger-3.png]]

The following code loads images and gets the locations of the white
pixels so that they can be used to create senses. =load-image= finds
images using jMonkeyEngine's asset-manager, so the image path is
expected to be relative to the =assets= directory. Thanks to Dylan
for the beautiful version of =filter-pixels=.

#+name: topology-1
#+begin_src clojure
(defn load-image
  "Load an image as a BufferedImage using the asset-manager system."
  [asset-relative-path]
  (ImageToAwt/convert
   (.getImage (.loadTexture (asset-manager) asset-relative-path))
   false false 0))

(def white 0xFFFFFF)

(defn white? [rgb]
  (= (bit-and white rgb) white))

(defn filter-pixels
  "List the coordinates of all pixels matching pred, within the bounds
  provided. If bounds are not specified then the entire image is
  searched.
  bounds -> [x0 y0 width height]"
  {:author "Dylan Holmes"}
  ([pred #^BufferedImage image]
     (filter-pixels pred image [0 0 (.getWidth image) (.getHeight image)]))
  ([pred #^BufferedImage image [x0 y0 width height]]
     ((fn accumulate [x y matches]
        (cond
         (>= y (+ height y0)) matches
         ;; restart each new row at x0, not 0, so the bounds
         ;; are respected.
         (>= x (+ width x0)) (recur x0 (inc y) matches)
         (pred (.getRGB image x y))
         (recur (inc x) y (conj matches [x y]))
         :else (recur (inc x) y matches)))
      x0 y0 [])))

(defn white-coordinates
  "Coordinates of all the white pixels in a subset of the image."
  ([#^BufferedImage image bounds]
     (filter-pixels white? image bounds))
  ([#^BufferedImage image]
     (filter-pixels white? image)))
#+end_src

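Putting these together (a hedged usage sketch; the image path is a
hypothetical asset, not one shipped with this project):

#+begin_src clojure
;; load a sensor UV-map from the assets directory and list the
;; coordinates of its white pixels, one [x y] pair per sensor.
(comment
  (white-coordinates
   (load-image "Models/test-touch/touch-uv-map.png"))
  ;; => ([12 34] [13 34] ...)
  )
#+end_src
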
** Topology

Information from the senses is transmitted to the brain via bundles of
axons, whether it be the optic nerve or the spinal cord. While these
bundles more or less preserve the overall topology of a sense's
two-dimensional surface, they do not preserve the precise Euclidean
distances between every sensor. =collapse= is here to smoosh the
sensors described by a UV-map into a contiguous region that still
preserves the topology of the original sense.

#+name: topology-2
#+begin_src clojure
(in-ns 'cortex.sense)

(defn average [coll]
  (/ (reduce + coll) (count coll)))

(defn- collapse-1d
  "One dimensional helper for collapse."
  [center line]
  (let [length (count line)
        num-above (count (filter (partial < center) line))
        num-below (- length num-above)]
    (range (- center num-below)
           (+ center num-above))))

(defn collapse
  "Take a sequence of pairs of integers and collapse them into a
  contiguous bitmap with no \"holes\" or negative entries, as close to
  the origin [0 0] as the shape permits. The order of the points is
  preserved.

  eg.
  (collapse [[-5 5]  [5 5]    -->  [[0 1] [1 1]
             [-5 -5] [5 -5]]) -->   [0 0] [1 0]]

  (collapse [[-5 5]  [-5 -5]  -->  [[0 1] [0 0]
             [ 5 -5] [ 5 5]]) -->   [1 0] [1 1]]"
  [points]
  (if (empty? points) []
      (let
          [num-points (count points)
           center (vector
                   (int (average (map first points)))
                   ;; the y-center comes from the second coordinates.
                   (int (average (map second points))))
           flattened
           (reduce
            concat
            (map
             (fn [column]
               (map vector
                    (map first column)
                    (collapse-1d (second center)
                                 (map second column))))
             (partition-by first (sort-by first points))))
           squeezed
           (reduce
            concat
            (map
             (fn [row]
               (map vector
                    (collapse-1d (first center)
                                 (map first row))
                    (map second row)))
             (partition-by second (sort-by second flattened))))
           relocated
           (let [min-x (apply min (map first squeezed))
                 min-y (apply min (map second squeezed))]
             (map (fn [[x y]]
                    [(- x min-x)
                     (- y min-y)])
                  squeezed))
           point-correspondence
           (zipmap (sort points) (sort relocated))

           original-order
           (vec (map point-correspondence points))]
        original-order)))
#+end_src
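
The docstring examples can be checked directly at the REPL (a
sanity-check sketch, not part of the tangled source):

#+begin_src clojure
(comment
  (collapse [[-5 5] [5 5] [-5 -5] [5 -5]])
  ;; => [[0 1] [1 1] [0 0] [1 0]]
  )
#+end_src
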
* Viewing Sense Data

It's vital to /see/ the sense data to make sure that everything is
behaving as it should. =view-sense= and its helper, =view-image=,
are here so that each sense can define its own way of turning
sense-data into pictures, while the actual rendering of said pictures
stays in one central place. =points->image= helps senses generate a
base image onto which they can overlay actual sense data.

#+name: view-senses
#+begin_src clojure
(in-ns 'cortex.sense)

(defn view-image
  "Initializes a JPanel on which you may draw a BufferedImage.
  Returns a function that accepts a BufferedImage and draws it to the
  JPanel. If given a directory it will save the images as png files
  starting at 0000000.png and incrementing from there."
  ([#^File save title]
     (let [idx (atom -1)
           image
           (atom
            (BufferedImage. 1 1 BufferedImage/TYPE_4BYTE_ABGR))
           panel
           (proxy [JPanel] []
             (paint [graphics]
               (proxy-super paintComponent graphics)
               (.drawImage graphics @image 0 0 nil)))
           frame (JFrame. title)]
       (SwingUtilities/invokeLater
        (fn []
          (doto frame
            (-> (.getContentPane) (.add panel))
            (.pack)
            (.setLocationRelativeTo nil)
            (.setResizable true)
            (.setVisible true))))
       (fn [#^BufferedImage i]
         (reset! image i)
         (.setSize frame (+ 8 (.getWidth i)) (+ 28 (.getHeight i)))
         (.repaint panel 0 0 (.getWidth i) (.getHeight i))
         (if save
           (ImageIO/write
            i "png"
            (File. save (format "%07d.png" (swap! idx inc))))))))
  ([#^File save]
     (view-image save "Display Image"))
  ([] (view-image nil)))

(defn view-sense
  "Take a kernel that produces a BufferedImage from some sense data
  and return a function which takes a list of sense data, uses the
  kernel to convert to images, and displays those images, each in
  its own JFrame."
  [sense-display-kernel]
  (let [windows (atom [])]
    (fn this
      ([data]
         (this data nil))
      ([data save-to]
         (if (> (count data) (count @windows))
           (reset!
            windows
            (doall
             (map
              (fn [idx]
                (if save-to
                  (let [dir (File. save-to (str idx))]
                    (.mkdir dir)
                    (view-image dir))
                  (view-image)))
              (range (count data))))))
         (dorun
          (map
           (fn [display datum]
             (display (sense-display-kernel datum)))
           @windows data))))))

(defn points->image
  "Take a collection of points and visualize it as a BufferedImage."
  [points]
  (if (empty? points)
    (BufferedImage. 1 1 BufferedImage/TYPE_BYTE_BINARY)
    (let [xs (vec (map first points))
          ys (vec (map second points))
          x0 (apply min xs)
          y0 (apply min ys)
          width (- (apply max xs) x0)
          height (- (apply max ys) y0)
          image (BufferedImage. (inc width) (inc height)
                                BufferedImage/TYPE_INT_RGB)]
      (dorun
       (for [x (range (.getWidth image))
             y (range (.getHeight image))]
         (.setRGB image x y 0xFF0000)))
      (dorun
       (for [index (range (count points))]
         (.setRGB image (- (xs index) x0) (- (ys index) y0) -1)))
      image)))

(defn gray
  "Create a gray RGB pixel with R, G, and B set to num. num must be
  between 0 and 255."
  [num]
  (+ num
     (bit-shift-left num 8)
     (bit-shift-left num 16)))
#+end_src

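Together, these functions make it easy to inspect a sensor layout by
eye. A hedged usage sketch (assuming the hypothetical UV-map asset
from the earlier example):

#+begin_src clojure
;; collapse the white-pixel coordinates of a sensor UV-map into a
;; contiguous patch, render the patch, and display it in a JFrame.
(comment
  (let [points (white-coordinates
                (load-image "Models/test-touch/touch-uv-map.png"))
        display (view-image)]
    (display (points->image (collapse points))))
  )
#+end_src
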
* Building a Sense from Nodes
My method for defining senses in blender is the following:

Senses like vision and hearing are localized to a single point
and follow a particular object around. For these:

- Create a single top-level empty node whose name is the name of the sense.
- Add empty nodes which each contain meta-data relevant
  to the sense, including a UV-map describing the number/distribution
  of sensors if applicable.
- Make each empty-node the child of the top-level
  node. =sense-nodes= below generates functions to find these children.

For touch, store the path to the UV-map which describes touch-sensors in the
meta-data of the object to which that map applies.

Each sense provides code that analyzes the Node structure of the
creature and creates sense-functions. This code may also modify the
Node structure if necessary.

Empty nodes created in blender have no appearance or physical presence
in jMonkeyEngine, but do appear in the scene graph. Empty nodes that
represent a sense which "follows" another geometry (like eyes and
ears) follow the closest physical object. =closest-node= finds this
closest object given the Creature and a particular empty node.

#+name: node-1
#+begin_src clojure
(defn sense-nodes
  "For some senses there is a special empty blender node whose
  children are considered markers for an instance of that sense. This
  function generates functions to find those children, given the name
  of the special parent node."
  [parent-name]
  (fn [#^Node creature]
    (if-let [sense-node (.getChild creature parent-name)]
      (seq (.getChildren sense-node))
      (do ;;(println-repl "could not find" parent-name "node")
        []))))

(defn closest-node
  "Return the physical node in creature which is closest to the given
  node."
  [#^Node creature #^Node empty]
  (loop [radius (float 0.01)]
    (let [results (CollisionResults.)]
      (.collideWith
       creature
       (BoundingBox. (.getWorldTranslation empty)
                     radius radius radius)
       results)
      (if-let [target (first results)]
        (.getGeometry target)
        (recur (float (* 2 radius)))))))

(defn world-to-local
  "Convert the world coordinates into coordinates relative to the
  object (i.e. local coordinates), taking into account the rotation
  of object."
  [#^Spatial object world-coordinate]
  (.worldToLocal object world-coordinate nil))

(defn local-to-world
  "Convert the local coordinates into world relative coordinates"
  [#^Spatial object local-coordinate]
  (.localToWorld object local-coordinate nil))
#+end_src

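For example (a sketch, not part of the tangled source), the function
that finds all of a creature's ear markers can be generated like
this; the node name "ears" is a hypothetical convention here:

#+begin_src clojure
(comment
  (def ears (sense-nodes "ears"))
  (ears creature)  ;; => seq of empty nodes under the "ears" node
  )
#+end_src
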
** Sense Binding

=bind-sense= binds either a Camera or a Listener object to a
Spatial so that the bound sense follows that Spatial no matter how
it moves. It is used to create both eyes and ears.

#+name: node-2
#+begin_src clojure
(defn bind-sense
  "Bind the sense to the Spatial such that it will maintain its
  current position relative to the Spatial no matter how the spatial
  moves. 'sense can be either a Camera or Listener object."
  [#^Spatial obj sense]
  (let [sense-offset (.subtract (.getLocation sense)
                                (.getWorldTranslation obj))
        initial-sense-rotation (Quaternion. (.getRotation sense))
        base-anti-rotation (.inverse (.getWorldRotation obj))]
    (.addControl
     obj
     (proxy [AbstractControl] []
       (controlUpdate [tpf]
         (let [total-rotation
               (.mult base-anti-rotation (.getWorldRotation obj))]
           (.setLocation
            sense
            (.add
             (.mult total-rotation sense-offset)
             (.getWorldTranslation obj)))
           (.setRotation
            sense
            (.mult total-rotation initial-sense-rotation))))
       (controlRender [_ _])))))
#+end_src

Here is some example code which shows how a camera bound to a blue box
with =bind-sense= moves as the box is buffeted by white cannonballs.

#+name: test
#+begin_src clojure
(in-ns 'cortex.test.sense)

(defn test-bind-sense
  "Show a camera that stays in the same relative position to a blue
  cube."
  ([] (test-bind-sense false))
  ([record?]
     (let [eye-pos (Vector3f. 0 30 0)
           rock (box 1 1 1 :color ColorRGBA/Blue
                     :position (Vector3f. 0 10 0)
                     :mass 30)
           table (box 3 1 10 :color ColorRGBA/Gray :mass 0
                      :position (Vector3f. 0 -3 0))]
       (world
        (nodify [rock table])
        standard-debug-controls
        (fn init [world]
          (let [cam (doto (.clone (.getCamera world))
                      (.setLocation eye-pos)
                      (.lookAt Vector3f/ZERO
                               Vector3f/UNIT_X))]
            (bind-sense rock cam)
            (.setTimer world (RatchetTimer. 60))
            (if record?
              (Capture/captureVideo
               world
               (File. "/home/r/proj/cortex/render/bind-sense0")))
            (add-camera!
             world cam
             (comp
              (view-image
               (if record?
                 (File. "/home/r/proj/cortex/render/bind-sense1")))
              BufferedImage!))
            (add-camera! world (.getCamera world) no-op)))
        no-op))))
#+end_src

#+begin_html
<video controls="controls" width="755">
  <source src="../video/bind-sense.ogg" type="video/ogg"
          preload="none" poster="../images/aurellem-1280x480.png" />
</video>
<br> <a href="http://youtu.be/DvoN2wWQ_6o"> YouTube </a>
#+end_html

With this, eyes are easy --- you just bind the camera closer to the
desired object, and set it to look outward instead of inward as it
does in the video.
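
In sketch form (hypothetical; the real eye implementation comes in a
later post), an outward-looking eye camera might be created like this:

#+begin_src clojure
;; a hedged sketch: place a camera at an "eye" empty node, aim it
;; along the node's own facing direction, and bind it to the nearest
;; physical part of the creature. `creature` and `eye-node` are
;; assumptions for illustration.
(comment
  (defn sketch-eye-camera
    [world creature eye-node]
    (let [target (closest-node creature eye-node)
          cam (doto (.clone (.getCamera world))
                (.setLocation (.getWorldTranslation eye-node))
                (.setRotation (.getWorldRotation eye-node)))]
      (bind-sense target cam)
      cam))
  )
#+end_src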

(NB: the video was created with the following commands.)

*** Combine Frames with ImageMagick
#+name: magick
#+begin_src clojure :results silent
(ns cortex.video.magick
  (:import java.io.File)
  (:use clojure.java.shell))

(defn combine-images []
  (let
      [idx (atom -1)
       left (rest
             (sort
              (file-seq (File. "/home/r/proj/cortex/render/bind-sense0/"))))
       right (rest
              (sort
               (file-seq
                (File. "/home/r/proj/cortex/render/bind-sense1/"))))
       sub (rest
            (sort
             (file-seq
              (File. "/home/r/proj/cortex/render/bind-senseB/"))))
       sub* (concat sub (repeat 1000 (last sub)))]
    (dorun
     (map
      (fn [im-1 im-2 sub]
        (sh "convert" (.getCanonicalPath im-1)
            (.getCanonicalPath im-2) "+append"
            (.getCanonicalPath sub) "-append"
            (.getCanonicalPath
             (File. "/home/r/proj/cortex/render/bind-sense/"
                    (format "%07d.png" (swap! idx inc))))))
      left right sub*))))
#+end_src

*** Encode Frames with ffmpeg

#+begin_src sh :results silent
cd /home/r/proj/cortex/render/
ffmpeg -r 30 -i bind-sense/%07d.png -b:v 9000k -vcodec libtheora bind-sense.ogg
#+end_src

* Headers
#+name: sense-header
#+begin_src clojure
(ns cortex.sense
  "Here are functions useful in the construction of two or more
  sensors/effectors."
  {:author "Robert McIntyre"}
  (:use (cortex world util))
  (:import ij.process.ImageProcessor)
  (:import jme3tools.converters.ImageToAwt)
  (:import java.awt.image.BufferedImage)
  (:import com.jme3.collision.CollisionResults)
  (:import com.jme3.bounding.BoundingBox)
  (:import (com.jme3.scene Node Spatial))
  (:import com.jme3.scene.control.AbstractControl)
  (:import (com.jme3.math Quaternion Vector3f))
  (:import javax.imageio.ImageIO)
  (:import java.io.File)
  (:import (javax.swing JPanel JFrame SwingUtilities)))
#+end_src

#+name: test-header
#+begin_src clojure
(ns cortex.test.sense
  (:use (cortex world util sense vision))
  (:import
   java.io.File
   (com.jme3.math Vector3f ColorRGBA)
   (com.aurellem.capture RatchetTimer Capture)))
#+end_src

* Source Listing
- [[../src/cortex/sense.clj][cortex.sense]]
- [[../src/cortex/test/sense.clj][cortex.test.sense]]
- [[../assets/Models/subtitles/subtitles.blend][subtitles.blend]]
- [[../assets/Models/subtitles/Lake_CraterLake03_sm.hdr][subtitles reflection map]]
#+html: <ul> <li> <a href="../org/sense.org">This org file</a> </li> </ul>
- [[http://hg.bortreb.com][source-repository]]

* Next
Now that some of the preliminaries are out of the way, in the [[./body.org][next
post]] I'll create a simulated body.

* COMMENT generate source
#+begin_src clojure :tangle ../src/cortex/sense.clj
<<sense-header>>
<<blender-1>>
<<blender-2>>
<<topology-1>>
<<topology-2>>
<<node-1>>
<<node-2>>
<<view-senses>>
#+end_src

#+begin_src clojure :tangle ../src/cortex/test/sense.clj
<<test-header>>
<<test>>
#+end_src

#+begin_src clojure :tangle ../src/cortex/video/magick.clj
<<magick>>
#+end_src