org/touch.org @ 248:267add63b168

minor fixes
author Robert McIntyre <rlm@mit.edu>
date Sun, 12 Feb 2012 16:00:31 -0700
parents 4e220c8fb1ed
children 95a9f6f1cb82

* Touch

Touch is critical to navigation and spatial reasoning and as such I
need a simulated version of it to give to my AI creatures.

Human skin has a wide array of touch sensors, each of which
specializes in detecting different vibrational modes and pressures.
These sensors can integrate a vast expanse of skin (i.e. your entire
palm), or a tiny patch of skin at the tip of your finger. The hairs
of the skin help detect objects before they even come into contact
with the skin proper.

However, touch in my simulated world cannot exactly correspond to
human touch because my creatures are made out of completely rigid
segments that don't deform like human skin.

Instead of measuring deformation or vibration, I surround each rigid
part with a plenitude of hair-like objects (/feelers/) which do not
interact with the physical world. Physical objects can pass through
them with no effect. The feelers are able to tell when other objects
pass through them, and they constantly report how much of their
extent is covered. So even though the creature's body parts do not
deform, the feelers create a margin around those body parts which
achieves a sense of touch that is a hybrid between a human's sense of
deformation and the sense provided by hairs.

Implementing touch in jMonkeyEngine follows a different technical
route than vision and hearing. Those two senses piggybacked off
jMonkeyEngine's 3D audio and video rendering subsystems. To simulate
touch, I use jMonkeyEngine's physics system to execute many small
collision detections, one for each feeler.
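
One way to realize those per-feeler collision tests is to model each
feeler as a short =Ray= anchored at the surface and collide it
against scene geometry with jMonkeyEngine's standard picking
machinery. The sketch below illustrates the idea; it is not the
=(touch-kernel)= implementation described later, and the function
name is illustrative.

#+begin_src clojure
(in-ns 'cortex.touch)

(import '(com.jme3.math Ray)
        '(com.jme3.collision CollisionResults))

(defn feeler-coverage-sketch
  "Fraction of the feeler's length covered by `object`. The feeler
   is a Ray whose origin, direction, and limit (its length) have
   already been set; `object` is any jME Collidable, such as a
   Geometry or a Node."
  [#^Ray feeler object]
  (let [results (CollisionResults.)]
    (.collideWith object feeler results)
    (if (> (.size results) 0)
      ;; the nearer the first collision, the more of the feeler
      ;; counts as covered
      (- 1.0 (/ (.getDistance (.getClosestCollision results))
                (.getLimit feeler)))
      0.0)))
#+end_src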

To simulate touch there are three conceptual steps. For each solid
object in the creature, you first have to get the UV image and scale
parameter which define the position and length of the feelers. Then,
you use the triangles which compose the mesh and the UV data stored
in the mesh to determine the world-space position and orientation of
each feeler. Then once every frame, update these positions and
orientations to match the current position and orientation of the
object, and use physics collision detection to gather tactile data.
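
Schematically (the names =solid-parts=, =feeler-parameters=,
=feelers-in-world-space=, and =collide-feelers= are hypothetical
stand-ins for the functions developed below):

#+begin_src clojure
(in-ns 'cortex.touch)

(comment
  (defn touch-sketch [creature physics-world]
    (for [part (solid-parts creature)]
      ;; Step 1: the UV image and scale parameter fix where the
      ;; feelers sit on the part and how long they are.
      (let [[uv-image scale] (feeler-parameters part)]
        ;; Steps 2 and 3, once per frame: place each feeler in
        ;; world space using the mesh triangles and UV data, then
        ;; collision-test the feelers to gather tactile data.
        (fn touch! []
          (collide-feelers
           physics-world
           (feelers-in-world-space part uv-image scale)))))))
#+end_src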

Extracting the meta-data has already been described. The third step,
physics collision detection, is handled in =(touch-kernel)=.
Translating the positions and orientations of the feelers from the
UV-map to world-space is itself a three-step process.

- Find the triangles which make up the mesh in pixel-space and in
  world-space. =(triangles)= =(pixel-triangles)=.

- Find the coordinates of each feeler in world-space. These are the
  origins of the feelers.

#+begin_src clojure
(defn ->triangle [points]
  (apply #(Triangle. %1 %2 %3) (map ->vector3f points)))
#+end_src

It is convenient to treat a =Triangle= as a vector of vectors, and a
=Vector2f= and =Vector3f= as vectors of floats. =(->vector3f)= and
=(->triangle)= undo the operations of =(vector3f-seq)= and
=(triangle-seq)=. If these classes implemented =Iterable= then
=(seq)= would work on them automatically.
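
For example, a quick round trip (a sketch which assumes
=(->vector3f)= builds a =Vector3f= from a vector of three numbers, as
its use in =(->triangle)= suggests):

#+begin_src clojure
(in-ns 'cortex.touch)

;; build a jME Triangle from plain Clojure vectors...
(let [tri (->triangle [[0.0 0.0 0.0] [1.0 0.0 0.0] [0.0 1.0 0.0]])]
  ;; ...and recover a vertex: .get1, .get2, and .get3 return the
  ;; Triangle's vertices as Vector3f objects.
  (.get2 tri))
;; => a Vector3f with components (1.0, 0.0, 0.0)
#+end_src
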
** Decomposing a 3D shape into Triangles

The rigid objects which make up a creature have an underlying
=Geometry=, which is a =Mesh= plus a =Material= and other important
data involved with displaying the object.

A =Mesh= is composed of =Triangles=, and each =Triangle= has three
vertices which have coordinates in world space and UV space.

Here, =(triangles)= gets all the world-space triangles which compose
a mesh.
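
Under the hood, a =Mesh= can report its triangles directly. The
sketch below shows the underlying jMonkeyEngine call (the helper name
is illustrative). Note that the triangle comes back in mesh-local
coordinates, so the =Geometry='s world transform must still be
applied to reach world space:

#+begin_src clojure
(in-ns 'cortex.touch)

(import '(com.jme3.scene Mesh)
        '(com.jme3.math Triangle))

(defn mesh-triangle-sketch
  "The triangle at triangle-index, in the mesh's local coordinates."
  [#^Mesh mesh triangle-index]
  (let [scratch (Triangle.)]
    ;; getTriangle fills `scratch` with the indexed triangle
    (.getTriangle mesh triangle-index scratch)
    scratch))
#+end_src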

#+begin_src clojure
(defn pixel-triangle
  "The pixel-space triangle of the Geometry, at triangle-index."
  [#^Geometry geo image index]
  (let [mesh (.getMesh geo)
        width (.getWidth image)
        height (.getHeight image)]
    (vec (map (fn [[u v]] (vector (* width u) (* height v)))
              (map (partial vertex-UV-coord mesh)
                   (triangle-vertex-indices mesh index))))))

(defn pixel-triangles
  "The pixel-space triangles of the Geometry, in the same order as
   (triangles geo)."
  [#^Geometry geo image]
  (let [height (.getHeight image)
        width (.getWidth image)]
    (map (partial pixel-triangle geo image)
         (range (.getTriangleCount (.getMesh geo))))))
#+end_src
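
For instance (a usage sketch; =finger= and =tactile-image= are
hypothetical stand-ins for a body part's =Geometry= and its
tactile-sensor-profile image):

#+begin_src clojure
(in-ns 'cortex.touch)

(comment
  ;; pair each world-space triangle with its pixel-space
  ;; counterpart; the two sequences share the same order
  (map vector
       (triangles finger)
       (pixel-triangles finger tactile-image)))
#+end_src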

** The Affine Transform from one Triangle to Another

=(pixel-triangles)= gives us the mesh triangles expressed in pixel
coordinates and =(triangles)= gives us the mesh triangles expressed
in world coordinates. The tactile-sensor-profile gives the position
of each feeler in pixel-space. In order to convert pixel-space
coordinates into world-space coordinates we need something that takes
coordinates on the surface of one triangle and gives the
corresponding coordinates on the surface of another triangle.

Triangles are [[http://mathworld.wolfram.com/AffineTransformation.html][affine]], which
means any triangle can be transformed into any other by a combination
of translation, scaling, and rotation. The affine transformation from
one triangle to another is readily computable if the triangle is
expressed in terms of a $4 \times 4$ matrix.

\begin{bmatrix}
x_1 & x_2 & x_3 & n_x \\
y_1 & y_2 & y_3 & n_y \\
z_1 & z_2 & z_3 & n_z \\
1 & 1 & 1 & 1
\end{bmatrix}

Here the first three columns are the vertices of the triangle, the
fourth column is its unit normal, and the bottom row is filled
with 1s.

With two triangles $T_{1}$ and $T_{2}$ each expressed as a matrix
like above, the affine transform from $T_{1}$ to $T_{2}$ is

$T_{2}T_{1}^{-1}$
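
To see why, note that the transform $A$ we are after must carry the
columns of $T_{1}$ (the vertices and normal of the first triangle)
onto the corresponding columns of $T_{2}$:

\begin{align*}
A T_{1} &= T_{2} \\
A &= T_{2} T_{1}^{-1}
\end{align*}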

The Clojure code below recapitulates the formulas above, using
jMonkeyEngine's =Matrix4f= objects, which can describe any affine
transformation.

#+name: triangles-3
#+begin_src clojure
(in-ns 'cortex.touch)

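;; A sketch of that code, under stated assumptions: Matrix4f cells
;; are set with (.set mat row col value), and Triangle exposes its
;; vertices via (.get tri i) and its unit normal via (.getNormal tri)
;; after (.calculateNormal tri). The -sketch names are illustrative.

(import '(com.jme3.math Matrix4f Triangle))

(defn triangle->matrix4f-sketch
  "Pack a Triangle into a 4x4 Matrix4f: the first three columns are
   its vertices, the fourth its unit normal, the bottom row all 1s."
  [#^Triangle t]
  (let [mat (Matrix4f.)
        columns (conj (vec (map #(.get t %) (range 3)))
                      (do (.calculateNormal t) (.getNormal t)))]
    (doseq [col (range 4) row (range 3)]
      (.set mat row col (.get (columns col) row)))
    (doseq [col (range 4)]
      (.set mat 3 col 1.0))
    mat))

(defn triangles->affine-transform-sketch
  "The affine transform carrying tri-1 onto tri-2: T2 * T1^-1."
  [#^Triangle tri-1 #^Triangle tri-2]
  (.mult (triangle->matrix4f-sketch tri-2)
         (.invert (triangle->matrix4f-sketch tri-1))))
#+end_src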