changeset 248:267add63b168

minor fixes
author Robert McIntyre <rlm@mit.edu>
date Sun, 12 Feb 2012 16:00:31 -0700
parents 4e220c8fb1ed
children 95a9f6f1cb82
files org/touch.org
diffstat 1 files changed, 34 insertions(+), 28 deletions(-)
     1.1 --- a/org/touch.org	Sun Feb 12 15:46:01 2012 -0700
     1.2 +++ b/org/touch.org	Sun Feb 12 16:00:31 2012 -0700
     1.3 @@ -11,10 +11,6 @@
     1.4  Touch is critical to navigation and spatial reasoning, and as such I
     1.5  need a simulated version of it to give to my AI creatures.
     1.6  
     1.7 -However, touch in my virtual can not exactly correspond to human touch
     1.8 -because my creatures are made out of completely rigid segments that
     1.9 -don't deform like human skin.
    1.10 -
     1.11  Human skin has a wide array of touch sensors, each of which specializes
    1.12  in detecting different vibrational modes and pressures. These sensors
    1.13  can integrate a vast expanse of skin (i.e. your entire palm), or a
    1.14 @@ -22,15 +18,19 @@
    1.15  help detect objects before they even come into contact with the skin
    1.16  proper.
    1.17  
     1.18 +However, touch in my simulated world cannot exactly correspond to
    1.19 +human touch because my creatures are made out of completely rigid
    1.20 +segments that don't deform like human skin.
    1.21 +
    1.22  Instead of measuring deformation or vibration, I surround each rigid
    1.23  part with a plenitude of hair-like objects (/feelers/) which do not
    1.24  interact with the physical world. Physical objects can pass through
    1.25 -them with no effect. The feelers are able to measure contact with
    1.26 -other objects, and constantly report how much of their extent is
    1.27 -covered. So, even though the creature's body parts do not deform, the
    1.28 -feelers create a margin around those body parts which achieves a sense
    1.29 -of touch which is a hybrid between a human's sense of deformation and
    1.30 -sense from hairs.
    1.31 +them with no effect. The feelers are able to tell when other objects
    1.32 +pass through them, and they constantly report how much of their extent
    1.33 +is covered. So even though the creature's body parts do not deform,
    1.34 +the feelers create a margin around those body parts which achieves a
     1.35 +sense of touch that is a hybrid between a human's sense of
    1.36 +deformation and sense from hairs.
    1.37  
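To make this concrete, here is a minimal sketch of the idea (the names and data shapes here are illustrative assumptions, not the representation actually used below):

#+begin_src clojure
;; Illustrative sketch only -- not the actual representation.
;; A feeler is a line segment in its body part's local coordinates.
(def example-feelers
  [{:origin [0.0 0.0 0.0] :tip [0.0 0.1 0.0]}
   {:origin [0.2 0.0 0.0] :tip [0.2 0.1 0.0]}])

;; Touch output: for each feeler, the fraction of its length currently
;; covered by another object (0.0 = free, 1.0 = completely buried).
(def example-touch-report [0.0 0.75])
#+end_src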
     1.38  Implementing touch in jMonkeyEngine follows a different technical route
    1.39  than vision and hearing. Those two senses piggybacked off
    1.40 @@ -80,14 +80,14 @@
     1.41  parameters which define the position and length of the feelers. Then,
    1.42  you use the triangles which compose the mesh and the UV data stored in
    1.43  the mesh to determine the world-space position and orientation of each
    1.44 -feeler. Once every frame, update these positions and orientations to
    1.45 -match the current position and orientation of the object, and use
     1.46 +feeler. Then, once every frame, update these positions and orientations
    1.47 +to match the current position and orientation of the object, and use
    1.48  physics collision detection to gather tactile data.
    1.49  
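As a rough sketch of that last per-frame step (the helper below is hypothetical; it assumes jMonkeyEngine's =Spatial= world transform and the =(->vector3f)= conversion defined later in this section):

#+begin_src clojure
(import '(com.jme3.math Vector3f) '(com.jme3.scene Spatial))

;; Hypothetical sketch, not the actual kernel: move each feeler from
;; its part's local space into world space, ready for collision queries.
(defn update-feeler-positions
  [#^Spatial part feelers]
  (let [transform (.getWorldTransform part)]
    (doall
     (for [{:keys [origin tip]} feelers]
       {:origin (.transformVector transform (->vector3f origin) (Vector3f.))
        :tip    (.transformVector transform (->vector3f tip) (Vector3f.))}))))
#+end_src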
    1.50  Extracting the meta-data has already been described. The third step,
    1.51  physics collision detection, is handled in =(touch-kernel)=. 
    1.52  Translating the positions and orientations of the feelers from the
    1.53 -UV-map to world-space is also a three-step process.
    1.54 +UV-map to world-space is itself a three-step process.
    1.55  
    1.56    - Find the triangles which make up the mesh in pixel-space and in
    1.57      world-space. =(triangles)= =(pixel-triangles)=.
    1.58 @@ -121,15 +121,17 @@
    1.59    (apply #(Triangle. %1 %2 %3) (map ->vector3f points)))
    1.60  #+end_src
    1.61  
    1.62 -It is convienent to treat a =Triangle= as a sequence of verticies, and
    1.63 -a =Vector2f= and =Vector3f= as a sequence of floats. These conversion
    1.64 -functions make this easy. If these classes implemented =Iterable= then
    1.65 -=(seq)= would work on them automitacally.
     1.66 +It is convenient to treat a =Triangle= as a vector of vectors, and a
     1.67 +=Vector2f= and =Vector3f= as vectors of floats. =(->vector3f)= and
     1.68 +=(->triangle)= undo the operations of =(vector3f-seq)= and
     1.69 +=(triangle-seq)=. If these classes implemented =Iterable= then =(seq)=
     1.70 +would work on them automatically.
    1.71 +
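For example (a usage sketch; it assumes the conversion functions behave as described above):

#+begin_src clojure
;; Round-trip sketch: a Triangle decomposes into three [x y z]
;; vectors, and (->triangle) rebuilds an equivalent Triangle.
(let [tri (Triangle. (Vector3f. 0 0 0)
                     (Vector3f. 1 0 0)
                     (Vector3f. 0 1 0))]
  (= (triangle-seq tri)
     (triangle-seq (->triangle (triangle-seq tri)))))
;; => true
#+end_src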
    1.72  ** Decomposing a 3D shape into Triangles
    1.73  
    1.74 -The rigid bodies which make up a creature have an underlying
    1.75 +The rigid objects which make up a creature have an underlying
    1.76  =Geometry=, which is a =Mesh= plus a =Material= and other important
    1.77 -data involved with displaying the body.
    1.78 +data involved with displaying the object.
    1.79  
    1.80  A =Mesh= is composed of =Triangles=, and each =Triangle= has three
     1.81  vertices which have coordinates in world space and UV space.
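For instance, jMonkeyEngine's =Mesh= can fill a =Triangle= object given an index (a sketch; the helper name is hypothetical, and the vertices come back in the mesh's local coordinates, so =(triangles)= must still apply the object's world transform):

#+begin_src clojure
(import '(com.jme3.scene Mesh) '(com.jme3.math Triangle))

;; Hypothetical helper: extract triangle number index from a mesh.
;; Mesh.getTriangle fills the supplied Triangle with vertex data.
(defn mesh-triangle
  [#^Mesh mesh index]
  (let [tri (Triangle.)]
    (.getTriangle mesh (int index) tri)
    tri))
#+end_src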
    1.82 @@ -182,26 +184,28 @@
    1.83                (map (partial vertex-UV-coord mesh)
    1.84                     (triangle-vertex-indices mesh index))))))
    1.85  
    1.86 -(defn pixel-triangles [#^Geometry geo image]
    1.87 - (let [height (.getHeight image)
    1.88 -       width (.getWidth image)]
    1.89 -   (map (partial pixel-triangle geo image)
    1.90 -        (range (.getTriangleCount (.getMesh geo))))))
    1.91 +(defn pixel-triangles 
    1.92 +  "The pixel-space triangles of the Geometry, in the same order as
    1.93 +   (triangles geo)"
    1.94 +  [#^Geometry geo image]
    1.95 +  (let [height (.getHeight image)
    1.96 +        width (.getWidth image)]
    1.97 +    (map (partial pixel-triangle geo image)
    1.98 +         (range (.getTriangleCount (.getMesh geo))))))
    1.99  #+end_src
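Since the two sequences share an ordering, pairing pixel-space triangles with their world-space counterparts is direct (a usage sketch, assuming a =Geometry= =geo= and its sensor-profile =image= are in scope):

#+begin_src clojure
;; Each pixel-space triangle lines up index-for-index with its
;; world-space counterpart, so zipping the two sequences pairs them.
(def triangle-pairs
  (map vector (pixel-triangles geo image) (triangles geo)))
#+end_src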
   1.100  ** The Affine Transform from one Triangle to Another
   1.101  
   1.102  =(pixel-triangles)= gives us the mesh triangles expressed in pixel
   1.103  coordinates and =(triangles)= gives us the mesh triangles expressed in
   1.104  world coordinates. The tactile-sensor-profile gives the position of
   1.105 -each feeler in pixel-space. In order to convert pixel-dpace
   1.106 +each feeler in pixel-space. In order to convert pixel-space
   1.107  coordinates into world-space coordinates we need something that takes
   1.108  coordinates on the surface of one triangle and gives the corresponding
   1.109  coordinates on the surface of another triangle.
   1.110  
    1.111  Triangles are [[http://mathworld.wolfram.com/AffineTransformation.html][affine]], which means any triangle can be transformed into
   1.112  any other by a combination of translation, scaling, and
   1.113 -rotation. jMonkeyEngine's =Matrix4f= objects can describe any affine
   1.114 -transformation. The affine transformation from one triangle to another
   1.115 +rotation. The affine transformation from one triangle to another
    1.116  is readily computable if the triangle is expressed in terms of a $4 \times 4$
   1.117  matrix.
   1.118  
   1.119 @@ -221,7 +225,9 @@
   1.120  
   1.121  $T_{2}T_{1}^{-1}$
   1.122  
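Concretely, one common construction (a sketch; here the fourth column is the triangle's unit normal $\hat{n}$, included so that $T$ is invertible) packs the vertices $a$, $b$, $c$ column-wise:

$$T = \begin{pmatrix} a_x & b_x & c_x & \hat{n}_x \\ a_y & b_y & c_y & \hat{n}_y \\ a_z & b_z & c_z & \hat{n}_z \\ 1 & 1 & 1 & 1 \end{pmatrix}$$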
   1.123 -The clojure code below recaptiulates the formulas above.
    1.124 +The Clojure code below recapitulates the formulas above, using
   1.125 +jMonkeyEngine's =Matrix4f= objects, which can describe any affine
   1.126 +transformation.
   1.127  
   1.128  #+name: triangles-3
   1.129  #+begin_src clojure