rlm@202: #+title: Helper Functions / Motivations
rlm@151: #+author: Robert McIntyre
rlm@151: #+email: rlm@mit.edu
rlm@151: #+description: sensory utilities
rlm@151: #+keywords: simulation, jMonkeyEngine3, clojure, simulated senses
rlm@151: #+SETUPFILE: ../../aurellem/org/setup.org
rlm@151: #+INCLUDE: ../../aurellem/org/level-0.org
rlm@151:
rlm@151:
rlm@197: * Blender Utilities
rlm@198: In blender, any object can be assigned an arbitrary number of key-value
rlm@198: pairs which are called "Custom Properties". These are accessible in
rlm@198: jMonkeyEngine when blender files are imported with the
rlm@198: =BlenderLoader=. =(meta-data)= extracts these properties.
rlm@198:
rlm@198: #+name: blender-1
rlm@197: #+begin_src clojure
rlm@181: (defn meta-data
rlm@181: "Get the meta-data for a node created with blender."
rlm@181: [blender-node key]
rlm@151: (if-let [data (.getUserData blender-node "properties")]
rlm@198: (.findValue data key) nil))
rlm@198: #+end_src
rlm@151:
rlm@198: Blender uses a different coordinate system from jMonkeyEngine, so it
rlm@198: is useful to be able to convert between the two. These conversions
rlm@198: only come into play when the meta-data of a node refers to a vector
rlm@198: in the blender coordinate system.
rlm@198:
rlm@198: #+name: blender-2
rlm@198: #+begin_src clojure
rlm@197: (defn jme-to-blender
rlm@197: "Convert from JME coordinates to Blender coordinates"
rlm@197: [#^Vector3f in]
rlm@198: (Vector3f. (.getX in) (- (.getZ in)) (.getY in)))
rlm@151:
rlm@197: (defn blender-to-jme
rlm@197: "Convert from Blender coordinates to JME coordinates"
rlm@197: [#^Vector3f in]
rlm@198: (Vector3f. (.getX in) (.getZ in) (- (.getY in))))
rlm@197: #+end_src
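rlm@197:
rlm@197: As a quick sanity check (a sketch, not part of the tangled source),
rlm@197: the two conversions should be inverses of one another:
rlm@197:
rlm@197: #+begin_src clojure
rlm@197: ;; illustrative REPL check only -- not part of the tangled source
rlm@197: (blender-to-jme (jme-to-blender (Vector3f. 1 2 3)))
rlm@197: ;; => a Vector3f equal to (1.0, 2.0, 3.0)
rlm@197: #+end_src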
rlm@197:
rlm@198: * Sense Topology
rlm@198:
rlm@198: Human beings are three-dimensional objects, and the nerves that
rlm@198: transmit data from our various sense organs to our brain are
rlm@198: essentially one-dimensional. This leaves up to two dimensions in which
rlm@198: our sensory information may flow. For example, imagine your skin: it
rlm@198: is a two-dimensional surface around a three-dimensional object (your
rlm@198: body). It has discrete touch sensors embedded at various points, and
rlm@198: the density of these sensors corresponds to the sensitivity of that
rlm@198: region of skin. Each touch sensor connects to a nerve, all of which
rlm@198: eventually are bundled together as they travel up the spinal cord to
rlm@198: the brain. Intersect the spinal nerves with a guillotining plane and
rlm@198: you will see all of the sensory data of the skin revealed in a roughly
rlm@198: circular two-dimensional image which is the cross section of the
rlm@198: spinal cord. Points on this image that are close together in this
rlm@198: circle represent touch sensors that are /probably/ close together on
rlm@198: the skin, although there is of course some cutting and rearrangement
rlm@198: that has to be done to transfer the complicated surface of the skin
rlm@198: onto a two-dimensional image.
rlm@198:
rlm@198: Most human senses consist of many discrete sensors of various
rlm@198: properties distributed along a surface at various densities. For
rlm@198: skin, it is Pacinian corpuscles, Meissner's corpuscles, Merkel's
rlm@198: disks, and Ruffini's endings, which detect pressure and vibration of
rlm@198: various intensities. For ears, it is the stereocilia distributed
rlm@198: along the basilar membrane inside the cochlea; each one is sensitive
rlm@198: to a slightly different frequency of sound. For eyes, it is rods
rlm@198: and cones distributed along the surface of the retina. In each case,
rlm@198: we can describe the sense with a surface and a distribution of sensors
rlm@198: along that surface.
rlm@198:
rlm@198: ** UV-maps
rlm@198:
rlm@198: Blender and jMonkeyEngine already have support for exactly this sort
rlm@198: of data structure because it is used to "skin" models for games. It is
rlm@201: called [[http://wiki.blender.org/index.php/Doc:2.6/Manual/Textures/Mapping/UV][UV-mapping]]. The three-dimensional surface of a model is cut
rlm@201: and smooshed until it fits on a two-dimensional image. You paint
rlm@201: whatever you want on that image, and when the three-dimensional shape
rlm@201: is rendered in a game, the smooshing and cutting are reversed and the
rlm@201: image appears on the three-dimensional object.
rlm@198:
rlm@198: To make a sense, interpret the UV-image as describing the distribution
rlm@198: of that sense's sensors. To get different types of sensors, you can
rlm@198: either use a different color for each type of sensor, or use multiple
rlm@198: UV-maps, each labeled with that sensor type. I generally use a white
rlm@198: pixel to mean the presence of a sensor and a black pixel to mean the
rlm@198: absence of a sensor, and use one UV-map for each sensor-type within a
rlm@198: given sense. The paths to the images are not stored as the actual
rlm@198: UV-map of the blender object but are instead referenced in the
rlm@198: meta-data of the node.
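rlm@198:
rlm@198: As a concrete sketch of this convention (the key name "touch" and the
rlm@198: helper are assumptions for illustration, not part of the tangled
rlm@198: source), a touch-enabled geometry might store the path to its sensor
rlm@198: UV-image in its meta-data and hand it to =(load-image)= and
rlm@198: =(white-coordinates)=, which are defined below:
rlm@198:
rlm@198: #+begin_src clojure
rlm@198: ;; illustrative sketch only -- the key "touch" and this helper name
rlm@198: ;; are assumptions, not part of the tangled source.
rlm@198: (defn touch-sensor-coordinates
rlm@198:   "Hypothetical helper: the [x y] UV coordinates of every touch sensor
rlm@198:    described by the object's meta-data."
rlm@198:   [#^Spatial obj]
rlm@198:   (when-let [image-path (meta-data obj "touch")]
rlm@198:     (white-coordinates (load-image image-path))))
rlm@198: #+end_src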
rlm@198:
rlm@198: #+CAPTION: The UV-map for an elongated icosphere. The white dots each represent a touch sensor. They are dense in the regions that describe the tip of the finger, and less dense along the dorsal side of the finger opposite the tip.
rlm@198: #+ATTR_HTML: width="300"
rlm@198: [[../images/finger-UV.png]]
rlm@198:
rlm@198: #+CAPTION: Ventral side of the UV-mapped finger. Notice the density of touch sensors at the tip.
rlm@198: #+ATTR_HTML: width="300"
rlm@198: [[../images/finger-1.png]]
rlm@198:
rlm@198: #+CAPTION: Side view of the UV-mapped finger.
rlm@198: #+ATTR_HTML: width="300"
rlm@198: [[../images/finger-2.png]]
rlm@198:
rlm@198: #+CAPTION: Head-on view of the finger. In both the head and side views you can see the divide where the touch-sensors transition from high density to low density.
rlm@198: #+ATTR_HTML: width="300"
rlm@198: [[../images/finger-3.png]]
rlm@198:
rlm@198: The following code loads images and gets the locations of the white
rlm@198: pixels so that they can be used to create senses. =(load-image)= finds
rlm@198: images using jMonkeyEngine's asset-manager, so the image path is
rlm@198: expected to be relative to the =assets= directory. Thanks to Dylan
rlm@201: for the beautiful version of =(filter-pixels)=.
rlm@198:
rlm@198: #+name: topology-1
rlm@197: #+begin_src clojure
rlm@197: (defn load-image
rlm@197: "Load an image as a BufferedImage using the asset-manager system."
rlm@197: [asset-relative-path]
rlm@197: (ImageToAwt/convert
rlm@197: (.getImage (.loadTexture (asset-manager) asset-relative-path))
rlm@197: false false 0))
rlm@151:
rlm@181: (def white 0xFFFFFF)
rlm@181:
rlm@181: (defn white? [rgb]
rlm@181: (= (bit-and white rgb) white))
rlm@181:
rlm@151: (defn filter-pixels
rlm@151: "List the coordinates of all pixels matching pred, within the bounds
rlm@198: provided. If bounds are not specified then the entire image is
rlm@198: searched.
rlm@182: bounds -> [x0 y0 width height]"
rlm@151: {:author "Dylan Holmes"}
rlm@151: ([pred #^BufferedImage image]
rlm@151: (filter-pixels pred image [0 0 (.getWidth image) (.getHeight image)]))
rlm@151: ([pred #^BufferedImage image [x0 y0 width height]]
rlm@151: ((fn accumulate [x y matches]
rlm@151: (cond
rlm@151: (>= y (+ height y0)) matches
rlm@151: (>= x (+ width x0)) (recur x0 (inc y) matches)
rlm@151: (pred (.getRGB image x y))
rlm@151: (recur (inc x) y (conj matches [x y]))
rlm@151: :else (recur (inc x) y matches)))
rlm@151: x0 y0 [])))
rlm@151:
rlm@151: (defn white-coordinates
rlm@151: "Coordinates of all the white pixels in a subset of the image."
rlm@151: ([#^BufferedImage image bounds]
rlm@181: (filter-pixels white? image bounds))
rlm@151: ([#^BufferedImage image]
rlm@181: (filter-pixels white? image)))
rlm@198: #+end_src
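rlm@198:
rlm@198: =(filter-pixels)= is not limited to white pixels; a different
rlm@198: predicate picks out a different sensor type. Here is a small sketch
rlm@198: (the choice of pure red as a marker color is an assumption made for
rlm@198: illustration, not part of the tangled source):
rlm@198:
rlm@198: #+begin_src clojure
rlm@198: ;; illustrative sketch -- not part of the tangled source
rlm@198: (defn red? [rgb]
rlm@198:   (= (bit-and 0xFFFFFF rgb) 0xFF0000))
rlm@198:
rlm@198: ;; coordinates of every pure-red pixel in a 10x10 sub-region:
rlm@198: ;; (filter-pixels red? image [0 0 10 10])
rlm@198: ;; coordinates of every white pixel in the whole image:
rlm@198: ;; (white-coordinates image)
rlm@198: #+end_src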
rlm@151:
rlm@198: ** Topology
rlm@151:
rlm@198: Information from the senses is transmitted to the brain via bundles of
rlm@198: axons, whether it be the optic nerve or the spinal cord. While these
rlm@198: bundles more or less preserve the overall topology of a sense's
rlm@198: two-dimensional surface, they do not preserve the precise Euclidean
rlm@198: distances between every sensor. =(collapse)= is here to smoosh the
rlm@198: sensors described by a UV-map into a contiguous region that still
rlm@198: preserves the topology of the original sense.
rlm@198:
rlm@198: #+name: topology-2
rlm@198: #+begin_src clojure
rlm@151: (defn average [coll]
rlm@151: (/ (reduce + coll) (count coll)))
rlm@151:
rlm@151: (defn collapse-1d
rlm@182: "One dimensional analogue of collapse."
rlm@151: [center line]
rlm@151: (let [length (count line)
rlm@151: num-above (count (filter (partial < center) line))
rlm@151: num-below (- length num-above)]
rlm@151: (range (- center num-below)
rlm@151: (+ center num-above))))
rlm@151:
rlm@151: (defn collapse
rlm@151: "Take a set of pairs of integers and collapse them into a
rlm@182: contiguous bitmap with no \"holes\"."
rlm@151: [points]
rlm@151: (if (empty? points) []
rlm@151: (let
rlm@151: [num-points (count points)
rlm@151: center (vector
rlm@151: (int (average (map first points)))
rlm@151: (int (average (map second points))))
rlm@151: flattened
rlm@151: (reduce
rlm@151: concat
rlm@151: (map
rlm@151: (fn [column]
rlm@151: (map vector
rlm@151: (map first column)
rlm@151: (collapse-1d (second center)
rlm@151: (map second column))))
rlm@151: (partition-by first (sort-by first points))))
rlm@151: squeezed
rlm@151: (reduce
rlm@151: concat
rlm@151: (map
rlm@151: (fn [row]
rlm@151: (map vector
rlm@151: (collapse-1d (first center)
rlm@151: (map first row))
rlm@151: (map second row)))
rlm@151: (partition-by second (sort-by second flattened))))
rlm@182: relocated
rlm@151: (let [min-x (apply min (map first squeezed))
rlm@151: min-y (apply min (map second squeezed))]
rlm@151: (map (fn [[x y]]
rlm@151: [(- x min-x)
rlm@151: (- y min-y)])
rlm@151: squeezed))]
rlm@182: relocated)))
rlm@198: #+end_src
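rlm@198:
rlm@198: As a worked example (a sketch, not part of the tangled source), the
rlm@198: four corners of a 10x10 square should collapse into a contiguous 2x2
rlm@198: block at the origin:
rlm@198:
rlm@198: #+begin_src clojure
rlm@198: ;; illustrative sketch -- the exact ordering of the result is an
rlm@198: ;; implementation detail, but the points form a 2x2 block.
rlm@198: (collapse [[0 0] [10 0] [0 10] [10 10]])
rlm@198: ;; => ([0 0] [1 0] [0 1] [1 1])
rlm@198: #+end_src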
rlm@198: * Viewing Sense Data
rlm@151:
rlm@198: It's vital to /see/ the sense data to make sure that everything is
rlm@200: behaving as it should. =(view-sense)= and its helper =(view-image)=
rlm@200: are here so that each sense can define its own way of turning
rlm@200: sense-data into pictures, while the actual rendering of said pictures
rlm@200: stays in one central place. =(points->image)= helps senses generate a
rlm@200: base image onto which they can overlay actual sense data.
rlm@198:
rlm@199: #+name: view-senses
rlm@198: #+begin_src clojure
rlm@199: (in-ns 'cortex.sense)
rlm@198:
rlm@199: (defn view-image
rlm@199: "Initailizes a JPanel on which you may draw a BufferedImage.
rlm@199: Returns a function that accepts a BufferedImage and draws it to the
rlm@199: JPanel. If given a directory it will save the images as png files
rlm@199: starting at 0000000.png and incrementing from there."
rlm@199: ([#^File save]
rlm@199: (let [idx (atom -1)
rlm@199: image
rlm@199: (atom
rlm@199: (BufferedImage. 1 1 BufferedImage/TYPE_4BYTE_ABGR))
rlm@199: panel
rlm@199: (proxy [JPanel] []
rlm@199: (paint
rlm@199: [graphics]
rlm@199: (proxy-super paintComponent graphics)
rlm@199: (.drawImage graphics @image 0 0 nil)))
rlm@199: frame (JFrame. "Display Image")]
rlm@199: (SwingUtilities/invokeLater
rlm@199: (fn []
rlm@199: (doto frame
rlm@199: (-> (.getContentPane) (.add panel))
rlm@199: (.pack)
rlm@199: (.setLocationRelativeTo nil)
rlm@199: (.setResizable true)
rlm@199: (.setVisible true))))
rlm@199: (fn [#^BufferedImage i]
rlm@199: (reset! image i)
rlm@199: (.setSize frame (+ 8 (.getWidth i)) (+ 28 (.getHeight i)))
rlm@199: (.repaint panel 0 0 (.getWidth i) (.getHeight i))
rlm@199: (if save
rlm@199: (ImageIO/write
rlm@199: i "png"
rlm@199: (File. save (format "%07d.png" (swap! idx inc))))))))
rlm@199: ([] (view-image nil)))
rlm@199:
rlm@199: (defn view-sense
rlm@199: "Take a kernel that produces a BufferedImage from some sense data
rlm@199: and return a function which takes a list of sense data, uses the
rlm@199: kernel to convert to images, and displays those images, each in
rlm@199: its own JFrame."
rlm@199: [sense-display-kernel]
rlm@199: (let [windows (atom [])]
rlm@199: (fn [data]
rlm@199: (if (> (count data) (count @windows))
rlm@199: (reset!
rlm@199: windows (map (fn [_] (view-image)) (range (count data)))))
rlm@199: (dorun
rlm@199: (map
rlm@199: (fn [display datum]
rlm@199: (display (sense-display-kernel datum)))
rlm@199: @windows data)))))
rlm@199:
rlm@200: (defn points->image
rlm@200: "Take a collection of points and visuliaze it as a BufferedImage."
rlm@200: [points]
rlm@200: (if (empty? points)
rlm@200: (BufferedImage. 1 1 BufferedImage/TYPE_BYTE_BINARY)
rlm@200: (let [xs (vec (map first points))
rlm@200: ys (vec (map second points))
rlm@200: x0 (apply min xs)
rlm@200: y0 (apply min ys)
rlm@200: width (- (apply max xs) x0)
rlm@200: height (- (apply max ys) y0)
rlm@200: image (BufferedImage. (inc width) (inc height)
rlm@200: BufferedImage/TYPE_INT_RGB)]
rlm@200: (dorun
rlm@200: (for [x (range (.getWidth image))
rlm@200: y (range (.getHeight image))]
rlm@200: (.setRGB image x y 0xFF0000)))
rlm@200: (dorun
rlm@200: (for [index (range (count points))]
rlm@200: (.setRGB image (- (xs index) x0) (- (ys index) y0) -1)))
rlm@200: image)))
rlm@200:
rlm@198: (defn gray
rlm@198: "Create a gray RGB pixel with R, G, and B set to num. num must be
rlm@198: between 0 and 255."
rlm@198: [num]
rlm@198: (+ num
rlm@198: (bit-shift-left num 8)
rlm@198: (bit-shift-left num 16)))
rlm@197: #+end_src
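rlm@197:
rlm@197: Tying the last few pieces together, a hypothetical REPL session could
rlm@197: render the collapsed sensor layout of a UV-image in its own window
rlm@197: (the image path here is a made-up example; this sketch is not part of
rlm@197: the tangled source):
rlm@197:
rlm@197: #+begin_src clojure
rlm@197: ;; illustrative sketch -- the asset path is an assumption
rlm@197: (let [display (view-image)]
rlm@197:   (display
rlm@197:    (points->image
rlm@197:     (collapse
rlm@197:      (white-coordinates (load-image "Models/finger/touch-uv.png"))))))
rlm@197: #+end_src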
rlm@197:
rlm@198: * Building a Sense from Nodes
rlm@198: My method for defining senses in blender is the following:
rlm@198:
rlm@198: Senses like vision and hearing are localized to a single point
rlm@198: and follow a particular object around. For these:
rlm@198:
rlm@198: - Create a single top-level empty node whose name is the name of the sense
rlm@198: - Add empty nodes which each contain meta-data relevant
rlm@198: to the sense, including a UV-map describing the number/distribution
rlm@198: of sensors if applicable.
rlm@198: - Make each empty-node the child of the top-level
rlm@198: node. =(sense-nodes)= below generates functions to find these children.
rlm@198:
rlm@198: For touch, store the path to the UV-map which describes touch-sensors in the
rlm@198: meta-data of the object to which that map applies.
rlm@198:
rlm@198: Each sense provides code that analyzes the Node structure of the
rlm@198: creature and creates sense-functions. They also modify the Node
rlm@198: structure if necessary.
rlm@198:
rlm@198: Empty nodes created in blender have no appearance or physical presence
rlm@198: in jMonkeyEngine, but do appear in the scene graph. Empty nodes that
rlm@198: represent a sense which "follows" another geometry (like eyes and
rlm@198: ears) follow the closest physical object. =(closest-node)= finds this
rlm@198: closest object given the Creature and a particular empty node.
rlm@198:
rlm@198: #+name: node-1
rlm@197: #+begin_src clojure
rlm@198: (defn sense-nodes
rlm@198: "For some senses there is a special empty blender node whose
rlm@198: children are considered markers for an instance of that sense. This
rlm@198: function generates functions to find those children, given the name
rlm@198: of the special parent node."
rlm@198: [parent-name]
rlm@198: (fn [#^Node creature]
rlm@198: (if-let [sense-node (.getChild creature parent-name)]
rlm@198: (seq (.getChildren sense-node))
rlm@198: (do (println-repl "could not find" parent-name "node") []))))
rlm@198:
rlm@197: (defn closest-node
rlm@201: "Return the physical node in creature which is closest to the given
rlm@201: node."
rlm@198: [#^Node creature #^Node empty]
rlm@197: (loop [radius (float 0.01)]
rlm@197: (let [results (CollisionResults.)]
rlm@197: (.collideWith
rlm@197: creature
rlm@198: (BoundingBox. (.getWorldTranslation empty)
rlm@197: radius radius radius)
rlm@197: results)
rlm@197: (if-let [target (first results)]
rlm@197: (.getGeometry target)
rlm@197: (recur (float (* 2 radius)))))))
rlm@197:
rlm@198: (defn world-to-local
rlm@198: "Convert the world coordinates into coordinates relative to the
rlm@198: object (i.e. local coordinates), taking into account the rotation
rlm@198: of object."
rlm@198: [#^Spatial object world-coordinate]
rlm@198: (.worldToLocal object world-coordinate nil))
rlm@198:
rlm@198: (defn local-to-world
rlm@198: "Convert the local coordinates into world relative coordinates"
rlm@198: [#^Spatial object local-coordinate]
rlm@198: (.localToWorld object local-coordinate nil))
rlm@198: #+end_src
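rlm@198:
rlm@198: For instance (a sketch, assuming a creature whose blender file
rlm@198: contains a top-level empty node named "eyes"), a sense implementation
rlm@198: might use these helpers like so:
rlm@198:
rlm@198: #+begin_src clojure
rlm@198: ;; illustrative sketch -- "eyes" is an assumed node name, and this is
rlm@198: ;; not part of the tangled source.
rlm@198: (def eyes (sense-nodes "eyes"))
rlm@198:
rlm@198: ;; (eyes creature)
rlm@198: ;;   => seq of empty nodes, one marking each eye
rlm@198: ;; (closest-node creature (first (eyes creature)))
rlm@198: ;;   => the geometry that the first eye should follow
rlm@198: #+end_src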
rlm@198:
rlm@200: ** Sense Binding
rlm@200:
rlm@198: =(bind-sense)= binds either a Camera or a Listener object to any
rlm@198: Spatial so that the sense will follow that Spatial no matter how it
rlm@199: moves. It is used to create both eyes and ears.
rlm@198:
rlm@198: #+name: node-2
rlm@198: #+begin_src clojure
rlm@197: (defn bind-sense
rlm@197: "Bind the sense to the Spatial such that it will maintain its
rlm@197: current position relative to the Spatial no matter how the spatial
rlm@197: moves. 'sense can be either a Camera or Listener object."
rlm@197: [#^Spatial obj sense]
rlm@197: (let [sense-offset (.subtract (.getLocation sense)
rlm@197: (.getWorldTranslation obj))
rlm@197: initial-sense-rotation (Quaternion. (.getRotation sense))
rlm@197: base-anti-rotation (.inverse (.getWorldRotation obj))]
rlm@197: (.addControl
rlm@197: obj
rlm@197: (proxy [AbstractControl] []
rlm@197: (controlUpdate [tpf]
rlm@197: (let [total-rotation
rlm@197: (.mult base-anti-rotation (.getWorldRotation obj))]
rlm@197: (.setLocation
rlm@197: sense
rlm@197: (.add
rlm@197: (.mult total-rotation sense-offset)
rlm@197: (.getWorldTranslation obj)))
rlm@197: (.setRotation
rlm@197: sense
rlm@197: (.mult total-rotation initial-sense-rotation))))
rlm@197: (controlRender [_ _])))))
rlm@197: #+end_src
rlm@164:
rlm@200: Here is some example code which shows how a camera bound to a blue box
rlm@200: with =(bind-sense)= moves as the box is buffeted by white cannonballs.
rlm@199:
rlm@199: #+name: test
rlm@199: #+begin_src clojure
rlm@199: (ns cortex.test.sense
rlm@199: (:use (cortex world util sense vision))
rlm@199: (:import
rlm@199: java.io.File
rlm@199: (com.jme3.math Vector3f ColorRGBA)
rlm@199: (com.aurellem.capture RatchetTimer Capture)))
rlm@199:
rlm@199: (defn test-bind-sense
rlm@201: "Show a camera that stays in the same relative position to a blue
rlm@201: cube."
rlm@199: []
rlm@201: (let [eye-pos (Vector3f. 0 30 0)
rlm@199: rock (box 1 1 1 :color ColorRGBA/Blue
rlm@199: :position (Vector3f. 0 10 0)
rlm@199: :mass 30)
rlm@199: table (box 3 1 10 :color ColorRGBA/Gray :mass 0
rlm@199: :position (Vector3f. 0 -3 0))]
rlm@199: (world
rlm@199: (nodify [rock table])
rlm@199: standard-debug-controls
rlm@201: (fn init [world]
rlm@201: (let [cam (doto (.clone (.getCamera world))
rlm@201: (.setLocation eye-pos)
rlm@201: (.lookAt Vector3f/ZERO
rlm@201: Vector3f/UNIT_X))]
rlm@199: (bind-sense rock cam)
rlm@199: (.setTimer world (RatchetTimer. 60))
rlm@199: (Capture/captureVideo
rlm@199: world (File. "/home/r/proj/cortex/render/bind-sense0"))
rlm@199: (add-camera!
rlm@199: world cam
rlm@199: (comp (view-image
rlm@199: (File. "/home/r/proj/cortex/render/bind-sense1"))
rlm@199: BufferedImage!))
rlm@199: (add-camera! world (.getCamera world) no-op)))
rlm@199: no-op)))
rlm@199: #+end_src
rlm@199:
rlm@199: #+begin_html
rlm@199:
rlm@199: #+end_html
rlm@199:
rlm@200: With this, eyes are easy --- you just bind the camera closer to the
rlm@200: desired object, and set it to look outward instead of inward as it
rlm@200: does in the video.
rlm@199:
rlm@200: (NB: the video was created with the following commands.)
rlm@199:
rlm@200: *** Combine Frames with ImageMagick
rlm@199: #+begin_src clojure :results silent
rlm@199: (in-ns 'user)
rlm@199: (import java.io.File)
rlm@199: (use 'clojure.contrib.shell-out)
rlm@199: (let
rlm@199: [idx (atom -1)
rlm@199: left (rest
rlm@199: (sort
rlm@199: (file-seq (File. "/home/r/proj/cortex/render/bind-sense0/"))))
rlm@199: right (rest
rlm@199: (sort
rlm@200: (file-seq
rlm@200: (File. "/home/r/proj/cortex/render/bind-sense1/"))))
rlm@200: sub (rest
rlm@200: (sort
rlm@200: (file-seq
rlm@200: (File. "/home/r/proj/cortex/render/bind-senseB/"))))
rlm@200: sub* (concat sub (repeat 1000 (last sub)))]
rlm@199: (dorun
rlm@199: (map
rlm@200: (fn [im-1 im-2 sub]
rlm@199: (sh "convert" (.getCanonicalPath im-1)
rlm@199: (.getCanonicalPath im-2) "+append"
rlm@200: (.getCanonicalPath sub) "-append"
rlm@199: (.getCanonicalPath
rlm@199: (File. "/home/r/proj/cortex/render/bind-sense/"
rlm@199: (format "%07d.png" (swap! idx inc))))))
rlm@200: left right sub*)))
rlm@199: #+end_src
rlm@199:
rlm@200: *** Encode Frames with ffmpeg
rlm@200:
rlm@199: #+begin_src sh :results silent
rlm@199: cd /home/r/proj/cortex/render/
rlm@199: ffmpeg -r 60 -b 9000k -i bind-sense/%07d.png bind-sense.ogg
rlm@199: #+end_src
rlm@199:
rlm@198: * Bookkeeping
rlm@198: Here is the header for this namespace, included for completeness.
rlm@199: #+name: header
rlm@197: #+begin_src clojure
rlm@198: (ns cortex.sense
rlm@198: "Here are functions useful in the construction of two or more
rlm@198: sensors/effectors."
rlm@198: {:author "Robert McInytre"}
rlm@198: (:use (cortex world util))
rlm@198: (:import ij.process.ImageProcessor)
rlm@198: (:import jme3tools.converters.ImageToAwt)
rlm@198: (:import java.awt.image.BufferedImage)
rlm@198: (:import com.jme3.collision.CollisionResults)
rlm@198: (:import com.jme3.bounding.BoundingBox)
rlm@198: (:import (com.jme3.scene Node Spatial))
rlm@198: (:import com.jme3.scene.control.AbstractControl)
rlm@199: (:import (com.jme3.math Quaternion Vector3f))
rlm@199: (:import javax.imageio.ImageIO)
rlm@199: (:import java.io.File)
rlm@199: (:import (javax.swing JPanel JFrame SwingUtilities)))
rlm@198: #+end_src
rlm@187:
rlm@198: * Source Listing
rlm@198: Full source: [[../src/cortex/sense.clj][sense.clj]]
rlm@198:
rlm@187:
rlm@151: * COMMENT generate source
rlm@151: #+begin_src clojure :tangle ../src/cortex/sense.clj
rlm@197: <<header>>
rlm@198: <<blender-1>>
rlm@198: <<blender-2>>
rlm@198: <<topology-1>>
rlm@198: <<topology-2>>
rlm@198: <<node-1>>
rlm@198: <<node-2>>
rlm@197: <<view-senses>>
rlm@151: #+end_src
rlm@199:
rlm@199: #+begin_src clojure :tangle ../src/cortex/test/sense.clj
rlm@199: <<test>>
rlm@199: #+end_src