changeset 273:c39b8b29a79e

fixed ambiguous in-text function references
author Robert McIntyre <rlm@mit.edu>
date Wed, 15 Feb 2012 06:56:47 -0700
parents 5833b4ce877a
children dbecd276b51a
files assets/Models/heart/heart.blend org/body.org org/hearing.org org/ideas.org org/movement.org org/proprioception.org org/sense.org org/touch.org org/vision.org
diffstat 9 files changed, 72 insertions(+), 85 deletions(-)
     1.1 Binary file assets/Models/heart/heart.blend has changed
     2.1 --- a/org/body.org	Wed Feb 15 01:36:16 2012 -0700
     2.2 +++ b/org/body.org	Wed Feb 15 06:56:47 2012 -0700
     2.3 @@ -1,6 +1,6 @@
     2.4  #+title: Building a Body
     2.5  #+author: Robert McIntyre
     2.6 -#+email: rlm@mit.edu
     2.7 +#+email: rlm@mit.edu 
     2.8  #+description: Simulating a body (movement, touch, proprioception) in jMonkeyEngine3.
     2.9  #+SETUPFILE: ../../aurellem/org/setup.org
    2.10  #+INCLUDE: ../../aurellem/org/level-0.org
    2.11 @@ -18,7 +18,7 @@
    2.12  How to create such a body? One option I ultimately rejected is to use
    2.13  blender's [[http://wiki.blender.org/index.php/Doc:2.6/Manual/Rigging/Armatures][armature]] system. The idea would have been to define a mesh
    2.14  which describes the creature's entire body. To this you add a
    2.15 -(skeleton) which deforms this mesh. This technique is used extensively
    2.16 +skeleton which deforms this mesh. This technique is used extensively
    2.17  to model humans and create realistic animations. It is hard to use for
    2.18  my purposes because it is difficult to update the creature's Physics
    2.19  Collision Mesh in tandem with its Geometric Mesh under the influence
    2.20 @@ -140,7 +140,7 @@
    2.21              (node-seq creature)))))
    2.22  #+end_src
    2.23  
    2.24 -=(physical!)= iterates through a creature's node structure, creating
    2.25 +=physical!= iterates through a creature's node structure, creating
    2.26  CollisionShapes for each geometry with the mass specified in that
    2.27  geometry's meta-data.
    2.28  
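A minimal sketch of what =physical!= might look like, assuming jME3's
=RigidBodyControl= (which derives a =CollisionShape= from the geometry
automatically) and the =node-seq= helper; the actual =cortex.body=
version may differ.

#+begin_src clojure
;; A sketch, not the actual physical! -- assumes meta-data reads the
;; "mass" custom property and node-seq walks the creature's tree.
(defn physical!
  "Make every geometry of the creature a physical object, with mass
   taken from its blender meta-data."
  [creature]
  (dorun
   (map
    (fn [geom]
      (let [mass (float (meta-data geom "mass"))]
        ;; RigidBodyControl builds a CollisionShape from the geometry.
        (.addControl geom (RigidBodyControl. mass))))
    (filter #(instance? Geometry %) (node-seq creature)))))
#+end_src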
    2.29 @@ -220,7 +220,7 @@
    2.30  
    2.31  ** Finding the Joints 
    2.32  
    2.33 -The higher order function =(sense-nodes)= from =cortex.sense= simplifies
    2.34 +The higher order function =sense-nodes= from =cortex.sense= simplifies
    2.35  the first task.
    2.36  
    2.37  #+name: joints-2 
    2.38 @@ -232,17 +232,16 @@
    2.39    "Return the children of the creature's \"joints\" node.")
    2.40  #+end_src
    2.41  
    2.42 -
    2.43  ** Joint Targets and Orientation
    2.44  
    2.45  This technique for finding a joint's targets is very similar to
    2.46 -=(cortex.sense/closest-node)=.  A small cube, centered around the
    2.47 +=cortex.sense/closest-node=.  A small cube, centered around the
    2.48  empty-node, grows exponentially until it intersects two /physical/
    2.49  objects. The objects are ordered according to the joint's rotation,
    2.50  with the first one being the object that has more negative coordinates
    2.51  in the joint's reference frame. Since the objects must be physical,
    2.52  the empty-node itself escapes detection. Because the objects must be
    2.53 -physical, =(joint-targets)= must be called /after/ =(physical!)= is
    2.54 +physical, =joint-targets= must be called /after/ =physical!= is
    2.55  called.
    2.56  
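A rough sketch of the growing-cube idea, assuming jME3's
=BoundingBox=; the actual =joint-targets= below is the authoritative
version and additionally orders the two hits by the joint's rotation.

#+begin_src clojure
;; A sketch only: grow a box around the empty node until it overlaps
;; two physical objects. Ordering by the joint's reference frame is
;; omitted here.
(defn joint-targets
  "Return the two physical objects nearest the joint's empty node."
  [creature joint]
  (let [center (.getWorldTranslation joint)]
    (loop [extent (float 0.01)]
      (let [box (BoundingBox. center extent extent extent)
            hits (filter
                  #(and (instance? Geometry %)
                        (.getControl % RigidBodyControl)
                        (.intersects box (.getWorldBound %)))
                  (node-seq creature))]
        (if (<= 2 (count hits))
          (take 2 hits)
          (recur (* 2 extent)))))))
#+end_src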
    2.57  #+name: joints-3
    2.58 @@ -400,7 +399,7 @@
    2.59        (println-repl "could not find joint meta-data!"))))
    2.60  #+end_src
    2.61  
    2.62 -Creating joints is now a matter of applying =(connect)= to each joint
    2.63 +Creating joints is now a matter of applying =connect= to each joint
    2.64  node.
    2.65  
    2.66  #+name: joints-5
    2.67 @@ -417,7 +416,6 @@
    2.68      (joints creature))))
    2.69  #+end_src
    2.70  
    2.71 -
    2.72  ** Round 3
    2.73  
    2.74  Now we can test the hand in all its glory.
    2.75 @@ -446,7 +444,7 @@
    2.76           no-op))
    2.77  #+end_src
    2.78  
    2.79 -=(physical!)= makes the hand solid, then =(joints!)= connects each
    2.80 +=physical!= makes the hand solid, then =joints!= connects each
    2.81  piece together. 
    2.82  
    2.83  #+begin_html
    2.84 @@ -467,7 +465,7 @@
    2.85  
    2.86  * Wrap-Up!
    2.87  
    2.88 -It is convenient to combine =(physical!)= and =(joints!)= into one
    2.89 +It is convenient to combine =physical!= and =joints!= into one
    2.90  function that completely creates the creature's physical body.
    2.91  
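The =joints-6= block is cut off in this diff; a minimal sketch of such
a combined function, assuming it simply chains the two steps:

#+begin_src clojure
;; A sketch; the real joints-6 block is the authoritative version.
(defn body!
  "Endow the creature with a physical body: solidify every geometry,
   then wire the pieces together at the joints."
  [creature]
  (physical! creature)
  (joints! creature))
#+end_src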
    2.92  #+name: joints-6
     3.1 --- a/org/hearing.org	Wed Feb 15 01:36:16 2012 -0700
     3.2 +++ b/org/hearing.org	Wed Feb 15 06:56:47 2012 -0700
     3.3 @@ -776,12 +776,12 @@
     3.4  
     3.5  ** Hearing Pipeline
     3.6  
     3.7 -All sound rendering is done on the CPU, so =(hearing-pipeline)= is
     3.8 -much less complicated than =(vision-pipeline)=. The bytes available in
     3.9 +All sound rendering is done on the CPU, so =hearing-pipeline= is
    3.10 +much less complicated than =vision-pipeline=. The bytes available in
    3.11  the ByteBuffer obtained from the =send= Device have different meanings
    3.12  dependent upon the particular hardware of your system.  That is why
    3.13  the =AudioFormat= object is necessary to provide the meaning that the
    3.14 -raw bytes lack. =(byteBuffer->pulse-vector)= uses the excellent
    3.15 +raw bytes lack. =byteBuffer->pulse-vector= uses the excellent
    3.16  conversion facilities from [[http://www.tritonus.org/ ][tritonus]] ([[http://tritonus.sourceforge.net/apidoc/org/tritonus/share/sampled/FloatSampleTools.html#byte2floatInterleaved%2528byte%5B%5D,%2520int,%2520float%5B%5D,%2520int,%2520int,%2520javax.sound.sampled.AudioFormat%2529][javadoc]]) to generate a clojure vector of
    3.17  floats which represent the linear PCM encoded waveform of the
    3.18  sound. With linear PCM (pulse code modulation) -1.0 represents maximum
    3.19 @@ -822,14 +822,14 @@
    3.20  
    3.21  Together, these three functions define how ears found in a specially
    3.22  prepared blender file will be translated to =Listener= objects in a
    3.23 -simulation. =(ears)= extracts all the children of the top level node
    3.24 -named "ears".  =(add-ear!)= and =(update-listener-velocity!)= use
    3.25 -=(bind-sense)= to bind a =Listener= object located at the initial
    3.26 +simulation. =ears= extracts all the children of the top level node
    3.27 +named "ears".  =add-ear!= and =update-listener-velocity!= use
    3.28 +=bind-sense= to bind a =Listener= object located at the initial
    3.29  position of an "ear" node to the closest physical object in the
    3.30  creature. That =Listener= will stay in the same orientation to the
    3.31  object with which it is bound, just as the camera in the [[http://aurellem.localhost/cortex/html/sense.html#sec-4-1][sense binding
    3.32  demonstration]].  =OpenAL= simulates the Doppler effect for moving
    3.33 -listeners; =(update-listener-velocity!)= ensures that this velocity
    3.34 +listeners; =update-listener-velocity!= ensures that this velocity
    3.35  information is always up-to-date.
    3.36  
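A sketch of the velocity bookkeeping, assuming jME3's =Listener= and
=AbstractControl= classes; the actual =update-listener-velocity!= may
differ in detail.

#+begin_src clojure
;; A sketch: difference the Listener's position between frames to get
;; its velocity. Positions are cloned because jME3 may hand back its
;; internal, mutable Vector3f.
(defn update-listener-velocity!
  "Keep the Listener's velocity in sync with the object it follows
   (bind-sense already keeps its position up-to-date)."
  [obj lis]
  (let [old-position (atom (.clone (.getLocation lis)))]
    (.addControl
     obj
     (proxy [AbstractControl] []
       (controlUpdate [tpf]
         (let [new-position (.clone (.getLocation lis))]
           (.setVelocity
            lis (.mult (.subtract new-position @old-position)
                       (float (/ 1 tpf))))
           (reset! old-position new-position)))
       (controlRender [_ _])))))
#+end_src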
    3.37  #+name: hearing-ears
    3.38 @@ -905,7 +905,7 @@
    3.39      (hearing-kernel creature ear)))
    3.40  #+end_src
    3.41  
    3.42 -Each function returned by =(hearing-kernel)= will register a new
    3.43 +Each function returned by =hearing-kernel= will register a new
    3.44  =Listener= with the simulation the first time it is called.  Each time
    3.45  it is called, the hearing-function will return a vector of linear PCM
    3.46  encoded sound data that was heard since the last frame. The size of
     4.1 --- a/org/ideas.org	Wed Feb 15 01:36:16 2012 -0700
     4.2 +++ b/org/ideas.org	Wed Feb 15 06:56:47 2012 -0700
     4.3 @@ -74,8 +74,8 @@
     4.4   - [ ] send package to friends for critiques -- 2 days
     4.5  - [ ] fix videos that were encoded wrong, test on Internet Explorer.
     4.6   - [ ] redo videos vision with new collapse code
     4.7 - - [ ] find a topology that looks good. (maybe nil topology?)
     4.8 - - [ ] fix red part of touch cube in video and image
     4.9 + - [X] find a topology that looks good. (maybe nil topology?)
    4.10 + - [X] fix red part of touch cube in video and image
    4.11   - [ ] write summary of project for Winston            \
    4.12   - [ ] project proposals for Winston                    \
    4.13   - [ ] additional senses to be implemented for Winston   |  -- 2 days 
    4.14 @@ -85,8 +85,6 @@
    4.15   - [X] enable greyscale bitmaps for touch -- 2 hours 
    4.16   - [X] use sawfish to auto-tile sense windows -- 6 hours
    4.17   - [X] sawfish keybinding to automatically delete all sense windows
    4.18 - - [ ] directly change the UV-pixels to show sensor activation -- 2
    4.19 -       days
    4.20   - [ ] proof of concept C sense manipulation  -- 2 days 
    4.21   - [ ] proof of concept GPU sense manipulation -- week
    4.22   - [ ] fourier view of sound -- 2 or 3 days 
    4.23 @@ -98,14 +96,11 @@
    4.24   - [X] usertime/gametime clock HUD display -- day 
    4.25   - [ ] find papers for each of the senses justifying my own
    4.26         representation -- week
    4.27 - - [ ] show sensor maps in HUD display? -- 4 days
    4.28 - - [ ] show sensor maps in AWT display? -- 2 days
    4.29 + - [X] show sensor maps in HUD display? -- 4 days
    4.30 + - [X] show sensor maps in AWT display? -- 2 days
    4.31   - [ ] upgrade to clojure 1.3, replace all defvars with new def
    4.32   - [ ] common video creation code.
    4.33  
    4.34 -
    4.35 -
    4.36 -
    4.37  * jMonkeyEngine ideas
    4.38   - [ ] video showing bullet joints problem
    4.39   - [ ] add mult for Matrix to Ray
     5.1 --- a/org/movement.org	Wed Feb 15 01:36:16 2012 -0700
     5.2 +++ b/org/movement.org	Wed Feb 15 06:56:47 2012 -0700
     5.3 @@ -95,7 +95,7 @@
     5.4             (.getRGB profile x 0))))))))
     5.5  #+end_src
     5.6  
     5.7 -Of note here is =(motor-pool)= which interprets the muscle-profile
     5.8 +Of note here is =motor-pool= which interprets the muscle-profile
     5.9  image in a way that allows me to use gradients between white and red,
    5.10  instead of shades of gray as I've been using for all the other
    5.11  senses. This is purely an aesthetic touch.
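One plausible (hypothetical) reading of the white-to-red gradient:
both white (=0xFFFFFF=) and pure red (=0xFF0000=) keep the red channel
saturated, so a lower channel is free to grade smoothly between the
two extremes. The actual encoding used by =motor-pool= above may
differ.

#+begin_src clojure
;; Hypothetical illustration only. The blue channel runs from 255
;; (white) down to 0 (pure red), giving a per-pixel strength value.
(defn pixel->strength
  "Map a packed RGB int on the white-to-red gradient to [0, 255]."
  [rgb]
  (- 255 (bit-and 0xFF rgb)))
#+end_src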
    5.12 @@ -137,10 +137,10 @@
    5.13        (movement-kernel creature muscle)))
    5.14  #+end_src
    5.15  
    5.16 -=(movement-kernel)= creates a function that will move the nearest
    5.17 +=movement-kernel= creates a function that will move the nearest
    5.18  physical object to the muscle node. The muscle exerts a rotational
    5.19  force dependent on its orientation to the object in the blender
    5.20 -file. The function returned by =(movement-kernel)= is also a sense
    5.21 +file. The function returned by =movement-kernel= is also a sense
    5.22  function: it returns the percent of the total muscle strength that is
    5.23  currently being employed. This is analogous to muscle tension in
    5.24  humans and completes the sense of proprioception begun in the last
     6.1 --- a/org/proprioception.org	Wed Feb 15 01:36:16 2012 -0700
     6.2 +++ b/org/proprioception.org	Wed Feb 15 06:56:47 2012 -0700
     6.3 @@ -37,7 +37,7 @@
     6.4  
     6.5  * Helper Functions
     6.6  
     6.7 -=(absolute-angle)= calculates the angle between two vectors, relative to a
     6.8 +=absolute-angle= calculates the angle between two vectors, relative to a
     6.9  third axis vector. This angle is the number of radians you have to
    6.10  move counterclockwise around the axis vector to get from the first to
    6.11  the second vector. It is not commutative like a normal dot-product
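A minimal sketch of such an angle function, assuming jME3's
=Vector3f= API; the actual =absolute-angle= may differ.

#+begin_src clojure
;; A sketch: angleBetween is symmetric, so the cross product against
;; the axis decides whether to keep the angle or take its complement,
;; yielding a full counterclockwise angle in [0, 2*pi).
(defn absolute-angle
  "The angle from vec-1 to vec-2, counterclockwise around axis."
  [vec-1 vec-2 axis]
  (let [angle (.angleBetween vec-1 vec-2)]
    (if (< 0 (.dot (.cross vec-1 vec-2) axis))
      angle
      (- (* 2 Math/PI) angle))))
#+end_src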
    6.12 @@ -82,7 +82,7 @@
    6.13  
    6.14  * Proprioception Kernel
    6.15  
    6.16 -Given a joint, =(proprioception-kernel)= produces a function that
    6.17 +Given a joint, =proprioception-kernel= produces a function that
    6.18  calculates the Euler angles between the objects the joint
    6.19  connects. 
    6.20  
    6.21 @@ -134,9 +134,9 @@
    6.22  #+end_src
    6.23  
    6.24  
    6.25 -=(proprioception!)= maps =(proprioception-kernel)= across all the
    6.26 +=proprioception!= maps =proprioception-kernel= across all the
    6.27  joints of the creature. It uses the same list of joints that
    6.28 -=(cortex.body/joints)= uses.
    6.29 +=cortex.body/joints= uses.
    6.30  
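A minimal sketch of that mapping, assuming the =joints= function from
=cortex.body=:

#+begin_src clojure
;; A sketch of the mapping described above; the real proprioception!
;; may differ in detail.
(defn proprioception!
  "One proprioceptive sense function per joint of the creature."
  [creature]
  (for [joint (joints creature)]
    (proprioception-kernel creature joint)))
#+end_src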
    6.31  * Visualizing Proprioception
    6.32  
     7.1 --- a/org/sense.org	Wed Feb 15 01:36:16 2012 -0700
     7.2 +++ b/org/sense.org	Wed Feb 15 06:56:47 2012 -0700
     7.3 @@ -6,12 +6,11 @@
     7.4  #+SETUPFILE: ../../aurellem/org/setup.org
     7.5  #+INCLUDE: ../../aurellem/org/level-0.org
     7.6  
     7.7 -
     7.8  * Blender Utilities
     7.9  In blender, any object can be assigned an arbitrary number of key-value
    7.10  pairs which are called "Custom Properties". These are accessible in
    7.11  jMonkeyEngine when blender files are imported with the
    7.12 -=BlenderLoader=. =(meta-data)= extracts these properties.
    7.13 +=BlenderLoader=. =meta-data= extracts these properties.
    7.14  
    7.15  #+name: blender-1
    7.16  #+begin_src clojure
    7.17 @@ -108,10 +107,10 @@
    7.18  [[../images/finger-3.png]]
    7.19  
    7.20  The following code loads images and gets the locations of the white
    7.21 -pixels so that they can be used to create senses. =(load-image)= finds
    7.22 +pixels so that they can be used to create senses. =load-image= finds
    7.23  images using jMonkeyEngine's asset-manager, so the image path is
    7.24  expected to be relative to the =assets= directory.  Thanks to Dylan
    7.25 -for the beautiful version of =(filter-pixels)=.
    7.26 +for the beautiful version of =filter-pixels=.
    7.27  
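A minimal sketch of what =filter-pixels= might look like (the credited
version may differ). Opaque white in a packed ARGB int is
=0xFFFFFFFF=, which is -1, so selecting white pixels is a one-liner.

#+begin_src clojure
;; A sketch, not necessarily Dylan's version.
(defn filter-pixels
  "Return the [x y] coordinates of every pixel for which (pred rgb)
   is true."
  [pred image]
  (for [x (range (.getWidth image))
        y (range (.getHeight image))
        :when (pred (.getRGB image x y))]
    [x y]))

;; Opaque white (0xFFFFFFFF) is -1 as a signed int.
(defn white-pixels [image] (filter-pixels (partial = -1) image))
#+end_src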
    7.28  #+name: topology-1
    7.29  #+begin_src clojure
    7.30 @@ -159,7 +158,7 @@
    7.31  axons, whether it be the optic nerve or the spinal cord. While these
    7.32  bundles more or less preserve the overall topology of a sense's
    7.33  two-dimensional surface, they do not preserve the precise Euclidean
    7.34 -distances between every sensor. =(collapse)= is here to smoosh the
    7.35 +distances between every sensor. =collapse= is here to smoosh the
    7.36  sensors described by a UV-map into a contiguous region that still
    7.37  preserves the topology of the original sense.
    7.38  
    7.39 @@ -235,10 +234,10 @@
    7.40  * Viewing Sense Data
    7.41  
    7.42  It's vital to /see/ the sense data to make sure that everything is
    7.43 -behaving as it should. =(view-sense)= and its helper, =(view-image)=
    7.44 +behaving as it should. =view-sense= and its helper, =view-image=
    7.45  are here so that each sense can define its own way of turning
    7.46  sense-data into pictures, while the actual rendering of said pictures
    7.47 -stays in one central place.  =(points->image)= helps senses generate a
    7.48 +stays in one central place.  =points->image= helps senses generate a
    7.49  base image onto which they can overlay actual sense data.
    7.50  
    7.51  #+name: view-senses
    7.52 @@ -351,7 +350,7 @@
    7.53     to the sense, including a UV-map describing the number/distribution
    7.54    of sensors if applicable.
    7.55   - Make each empty-node the child of the top-level
    7.56 -   node. =(sense-nodes)= below generates functions to find these children.
    7.57 +   node. =sense-nodes= below generates functions to find these children.
    7.58  
    7.59  For touch, store the path to the UV-map which describes touch-sensors in the
    7.60  meta-data of the object to which that map applies.
    7.61 @@ -363,7 +362,7 @@
    7.62  Empty nodes created in blender have no appearance or physical presence
    7.63  in jMonkeyEngine, but do appear in the scene graph. Empty nodes that
    7.64  represent a sense which "follows" another geometry (like eyes and
    7.65 -ears) follow the closest physical object.  =(closest-node)= finds this
    7.66 +ears) follow the closest physical object.  =closest-node= finds this
    7.67  closest object given the Creature and a particular empty node.
    7.68  
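A simplified sketch of the idea using plain distance comparison; the
version described above grows a small box until it collides with
something physical, which respects object extents rather than just
centers.

#+begin_src clojure
;; A simplified sketch, not the actual closest-node.
(defn closest-node
  "The physical geometry of the creature whose center is nearest the
   given empty node."
  [creature empty-node]
  (let [target (.getWorldTranslation empty-node)]
    (apply min-key
           #(.distance target (.getWorldTranslation %))
           (filter #(instance? Geometry %)
                   (node-seq creature)))))
#+end_src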
    7.69  #+name: node-1
    7.70 @@ -409,7 +408,7 @@
    7.71  
    7.72  ** Sense Binding
    7.73  
    7.74 -=(bind-sense)= binds either a Camera or a Listener object to any
    7.75 +=bind-sense= binds either a Camera or a Listener object to any
    7.76  object so that they will follow that object no matter how it
    7.77  moves. It is used to create both eyes and ears.
    7.78  
    7.79 @@ -442,7 +441,7 @@
    7.80  #+end_src
    7.81  
    7.82  Here is some example code which shows how a camera bound to a blue box
    7.83 -with =(bind-sense)= moves as the box is buffeted by white cannonballs.
    7.84 +with =bind-sense= moves as the box is buffeted by white cannonballs.
    7.85  
    7.86  #+name: test
    7.87  #+begin_src clojure 
     8.1 --- a/org/touch.org	Wed Feb 15 01:36:16 2012 -0700
     8.2 +++ b/org/touch.org	Wed Feb 15 06:56:47 2012 -0700
     8.3 @@ -85,19 +85,19 @@
     8.4  physics collision detection to gather tactile data.
     8.5  
     8.6  Extracting the meta-data has already been described. The third step,
     8.7 -physics collision detection, is handled in =(touch-kernel)=. 
     8.8 +physics collision detection, is handled in =touch-kernel=. 
     8.9  Translating the positions and orientations of the feelers from the
    8.10  UV-map to world-space is itself a three-step process.
    8.11  
    8.12    - Find the triangles which make up the mesh in pixel-space and in
    8.13 -    world-space. =(triangles)=, =(pixel-triangles)=.
    8.14 +    world-space. =triangles=, =pixel-triangles=.
    8.15  
    8.16    - Find the coordinates of each feeler in world-space. These are the
    8.17 -    origins of the feelers. =(feeler-origins)=.
    8.18 +    origins of the feelers. =feeler-origins=.
    8.19      
    8.20    - Calculate the normals of the triangles in world space, and add
    8.21      them to each of the origins of the feelers. These are the
    8.22 -    normalized coordinates of the tips of the feelers. =(feeler-tips)=.
    8.23 +    normalized coordinates of the tips of the feelers. =feeler-tips=.
    8.24  
    8.25  * Triangle Math
    8.26  ** Shrapnel Conversion Functions
    8.27 @@ -123,8 +123,8 @@
    8.28  
    8.29  It is convenient to treat a =Triangle= as a vector of vectors, and a
    8.30  =Vector2f= or =Vector3f= as vectors of floats. =->vector3f= and
    8.31 -=->triangle= undo the operations of =(vector3f-seq)= and
    8.32 -=(triangle-seq)=. If these classes implemented =Iterable= then =(seq)=
    8.33 +=->triangle= undo the operations of =vector3f-seq= and
    8.34 +=triangle-seq=. If these classes implemented =Iterable= then =seq=
    8.35  would work on them automatically.
    8.36  
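A minimal sketch of that round trip, assuming jME3's =Vector3f=:

#+begin_src clojure
;; A sketch of the seq/constructor round trip described above.
(defn vector3f-seq [v]
  [(.getX v) (.getY v) (.getZ v)])

(defn ->vector3f [[x y z]]
  (Vector3f. (float x) (float y) (float z)))
#+end_src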
    8.37  ** Decomposing a 3D shape into Triangles
    8.38 @@ -136,8 +136,8 @@
    8.39  A =Mesh= is composed of =Triangles=, and each =Triangle= has three
    8.40  vertices which have coordinates in world space and UV space.
    8.41   
    8.42 -Here, =(triangles)= gets all the world-space triangles which compose a
    8.43 -mesh, while =(pixel-triangles)= gets those same triangles expressed in
    8.44 +Here, =triangles= gets all the world-space triangles which compose a
    8.45 +mesh, while =pixel-triangles= gets those same triangles expressed in
    8.46  pixel coordinates (which are UV coordinates scaled to fit the height
    8.47  and width of the UV image).
    8.48  
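A sketch of the world-space half, assuming jME3's =Mesh.getTriangle=
and =Spatial.localToWorld=; =pixel-triangles= would make a similar
pass over the UV buffer instead.

#+begin_src clojure
;; A sketch of triangles, assuming jME3's Mesh API.
(defn triangles
  "Every triangle of the geometry's mesh, in world coordinates."
  [geo]
  (let [mesh (.getMesh geo)]
    (for [i (range (.getTriangleCount mesh))]
      (let [tri (Triangle.)]
        (.getTriangle mesh i tri)                 ;; model space
        (doseq [v [(.get1 tri) (.get2 tri) (.get3 tri)]]
          (.localToWorld geo v v))                ;; -> world space
        tri))))
#+end_src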
    8.49 @@ -195,8 +195,8 @@
    8.50  #+end_src
    8.51  ** The Affine Transform from one Triangle to Another
    8.52  
    8.53 -=(pixel-triangles)= gives us the mesh triangles expressed in pixel
    8.54 -coordinates and =(triangles)= gives us the mesh triangles expressed in
    8.55 +=pixel-triangles= gives us the mesh triangles expressed in pixel
    8.56 +coordinates and =triangles= gives us the mesh triangles expressed in
    8.57  world coordinates. The tactile-sensor-profile gives the position of
    8.58  each feeler in pixel-space. In order to convert pixel-space
    8.59  coordinates into world-space coordinates we need something that takes
    8.60 @@ -263,11 +263,11 @@
    8.61  For efficiency's sake I will divide the tactile-profile image into
    8.62  small squares which bound each pixel-triangle, then extract the
    8.63  points which lie inside the triangle and map them to 3D-space using
    8.64 -=(triangle-transform)= above. To do this I need a function,
    8.65 -=(convex-bounds)=, which finds the smallest box bounding a 2D
    8.66 +=triangle-transform= above. To do this I need a function,
    8.67 +=convex-bounds=, which finds the smallest box bounding a 2D
    8.68  triangle.
    8.69  
    8.70 -=(inside-triangle?)= determines whether a point is inside a triangle
    8.71 +=inside-triangle?= determines whether a point is inside a triangle
    8.72  in 2D pixel-space.
    8.73  
    8.74  #+name: triangles-4
    8.75 @@ -369,9 +369,9 @@
    8.76  #+end_src
    8.77  * Simulated Touch
    8.78  
    8.79 -=(touch-kernel)= generates functions to be called from within a
    8.80 +=touch-kernel= generates functions to be called from within a
    8.81  simulation that perform the necessary physics collisions to collect
    8.82 -tactile data, and =(touch!)= recursively applies it to every node in
    8.83 +tactile data, and =touch!= recursively applies it to every node in
    8.84  the creature. 
    8.85  
    8.86  #+name: kernel
     9.1 --- a/org/vision.org	Wed Feb 15 01:36:16 2012 -0700
     9.2 +++ b/org/vision.org	Wed Feb 15 06:56:47 2012 -0700
     9.3 @@ -7,12 +7,6 @@
     9.4  #+INCLUDE: ../../aurellem/org/level-0.org
     9.5  #+babel: :mkdirp yes :noweb yes :exports both
     9.6  
     9.7 -# SUGGEST: Call functions by their name, without
     9.8 -# parentheses. e.g. =add-eye!=, not =(add-eye!)=. The reason for this
     9.9 -# is that it is potentially easy to confuse the /function/ =f= with its
    9.10 -# /value/ at a particular point =(f x)=. Mathematicians have this
    9.11 -# problem with their notation; we don't need it in ours.
    9.12 -
    9.13  * JMonkeyEngine natively supports multiple views of the same world.
    9.14   
    9.15  Vision is one of the most important senses for humans, so I need to
    9.16 @@ -100,7 +94,7 @@
    9.17      (cleanup []))))
    9.18  #+end_src
    9.19  
    9.20 -The continuation function given to =(vision-pipeline)= above will be
    9.21 +The continuation function given to =vision-pipeline= above will be
    9.22  given a =Renderer= and three containers for image data. The
    9.23  =FrameBuffer= references the GPU image data, but the pixel data can
    9.24  not be used directly on the CPU.  The =ByteBuffer= and =BufferedImage=
    9.25 @@ -133,7 +127,7 @@
    9.26  
    9.27  Note that it is possible to write vision processing algorithms
    9.28  entirely in terms of =BufferedImage= inputs. Just compose that
    9.29 -=BufferedImage= algorithm with =(BufferedImage!)=. However, a vision
    9.30 +=BufferedImage= algorithm with =BufferedImage!=. However, a vision
    9.31  processing algorithm that is entirely hosted on the GPU does not have
    9.32  to pay for this convenience.
    9.33  
    9.34 @@ -147,7 +141,7 @@
    9.35  system determines the orientation of the resulting eye. All eyes are
    9.36  children of a parent node named "eyes", just as all joints have a
    9.37  parent named "joints". An eye binds to the nearest physical object
    9.38 -with =(bind-sense=).
    9.39 +with =bind-sense=.
    9.40  
    9.41  #+name: add-eye
    9.42  #+begin_src clojure
    9.43 @@ -176,7 +170,7 @@
    9.44  #+end_src
    9.45  
    9.46  Here, the camera is created based on metadata on the eye-node and
    9.47 -attached to the nearest physical object with =(bind-sense)=
    9.48 +attached to the nearest physical object with =bind-sense=.
    9.49  ** The Retina
    9.50  
    9.51  An eye is a surface (the retina) which contains many discrete sensors
    9.52 @@ -241,8 +235,8 @@
    9.53  
    9.54  ** Metadata Processing
    9.55  
    9.56 -=(retina-sensor-profile)= extracts a map from the eye-node in the same
    9.57 -format as the example maps above.  =(eye-dimensions)= finds the
    9.58 +=retina-sensor-profile= extracts a map from the eye-node in the same
    9.59 +format as the example maps above.  =eye-dimensions= finds the
    9.60  dimensions of the smallest image required to contain all the retinal
    9.61  sensor maps.
    9.62  
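A minimal sketch of =eye-dimensions=, assuming =retina-sensor-profile=
returns a map whose values are =BufferedImage= sensor maps:

#+begin_src clojure
;; A sketch; the real eye-dimensions may differ in detail.
(defn eye-dimensions
  "The smallest [width height] containing every retinal sensor map."
  [eye]
  (let [images (vals (retina-sensor-profile eye))]
    [(apply max (map #(.getWidth %) images))
     (apply max (map #(.getHeight %) images))]))
#+end_src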
    9.63 @@ -281,7 +275,7 @@
    9.64    "Return the children of the creature's \"eyes\" node.")
    9.65  #+end_src
    9.66  
    9.67 -Then, add the camera created by =(add-eye!)= to the simulation by
    9.68 +Then, add the camera created by =add-eye!= to the simulation by
    9.69  creating a new viewport.
    9.70  
    9.71  #+name: add-camera
    9.72 @@ -307,7 +301,7 @@
    9.73  appropriate pixels from the rendered image and weight them by each
    9.74  sensor's sensitivity. I have the option to do this processing in
    9.75  native code for a slight gain in speed. I could also do it in the GPU
    9.76 -for a massive gain in speed. =(vision-kernel)= generates a list of
    9.77 +for a massive gain in speed. =vision-kernel= generates a list of
    9.78  such continuation functions, one for each channel of the eye.
    9.79  
    9.80  #+name: kernel
    9.81 @@ -384,22 +378,22 @@
    9.82       (add-camera! world (.getCamera world) no-op))))
    9.83  #+end_src
    9.84  
    9.85 -Note that since each of the functions generated by =(vision-kernel)=
    9.86 -shares the same =(register-eye!)= function, the eye will be registered
    9.87 +Note that since each of the functions generated by =vision-kernel=
    9.88 +shares the same =register-eye!= function, the eye will be registered
    9.89  only once the first time any of the functions from the list returned
    9.90 -by =(vision-kernel)= is called.  Each of the functions returned by
    9.91 -=(vision-kernel)= also allows access to the =Viewport= through which
    9.92 +by =vision-kernel= is called.  Each of the functions returned by
    9.93 +=vision-kernel= also allows access to the =Viewport= through which
    9.94  it receives images.
    9.95  
    9.96  The in-game display can be disrupted by all the viewports that the
    9.97 -functions created by =(vision-kernel)= add. This doesn't affect the
    9.98 +functions created by =vision-kernel= add. This doesn't affect the
    9.99  simulation or the simulated senses, but can be annoying.
   9.100 -=(gen-fix-display)= restores the in-simulation display.
   9.101 +=gen-fix-display= restores the in-simulation display.
   9.102  
   9.103  ** The =vision!= function creates sensory probes.
   9.104  
   9.105  All the hard work has been done; all that remains is to apply
   9.106 -=(vision-kernel)= to each eye in the creature and gather the results
   9.107 +=vision-kernel= to each eye in the creature and gather the results
   9.108  into one list of functions.
   9.109  
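A minimal sketch of that gathering step, assuming the =eyes= function
defined above:

#+begin_src clojure
;; A sketch; the real main block below is the authoritative version.
(defn vision!
  "One flat list of vision functions covering every eye of the
   creature."
  [creature]
  (reduce concat []
          (for [eye (eyes creature)]
            (vision-kernel creature eye))))
#+end_src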
   9.110  #+name: main
   9.111 @@ -417,7 +411,7 @@
   9.112  ** Displaying visual data for debugging.
   9.113  # Visualization of Vision. Maybe less alliteration would be better.
   9.114  It's vital to have a visual representation for each sense. Here I use
   9.115 -=(view-sense)= to construct a function that will create a display for
   9.116 +=view-sense= to construct a function that will create a display for
   9.117  visual data.
   9.118  
   9.119  #+name: display
   9.120 @@ -677,8 +671,6 @@
   9.121    - As a neat bonus, this idea behind simulated vision also enables one
   9.122      to [[../../cortex/html/capture-video.html][capture live video feeds from jMonkeyEngine]].
   9.123    - Now that we have vision, it's time to tackle [[./hearing.org][hearing]].
   9.124 -
   9.125 -
   9.126  #+appendix
   9.127  
   9.128  * Headers
   9.129 @@ -731,6 +723,9 @@
   9.130    - [[http://hg.bortreb.com ][source-repository]]
   9.131   
   9.132  
   9.133 +* Next 
   9.134 +I find some [[./hearing.org][ears]] for the creature while exploring the guts of
   9.135 +jMonkeyEngine's sound system.
   9.136  
   9.137  * COMMENT Generate Source
   9.138  #+begin_src clojure :tangle ../src/cortex/vision.clj