changeset 468:258078f78b33
insert argument about senses.
author   Robert McIntyre <rlm@mit.edu>
date     Fri, 28 Mar 2014 16:25:31 -0400
parents  ade64947d2bf
children ae10f35022ba
files    org/sense.org thesis/cortex.org thesis/images/empty-sense-nodes.png thesis/images/finger-1.png
diffstat 4 files changed, 163 insertions(+), 55 deletions(-)
--- a/org/sense.org	Fri Mar 28 15:30:23 2014 -0400
+++ b/org/sense.org	Fri Mar 28 16:25:31 2014 -0400
@@ -7,9 +7,9 @@
#+INCLUDE: ../../aurellem/org/level-0.org

* Blender Utilities
-In blender, any object can be assigned an arbitrary number of key-value
-pairs which are called "Custom Properties". These are accessible in
-jMonkeyEngine when blender files are imported with the
+In blender, any object can be assigned an arbitrary number of
+key-value pairs which are called "Custom Properties". These are
+accessible in jMonkeyEngine when blender files are imported with the
=BlenderLoader=. =meta-data= extracts these properties.

#+name: blender-1
@@ -85,7 +85,7 @@
called [[http://wiki.blender.org/index.php/Doc:2.6/Manual/Textures/Mapping/UV][UV-mapping]]. The three-dimensional surface of a model is cut
and smooshed until it fits on a two-dimensional image. You paint
whatever you want on that image, and when the three-dimensional shape
-is rendered in a game the smooshing and cutting us reversed and the
+is rendered in a game the smooshing and cutting is reversed and the
image appears on the three-dimensional object.

To make a sense, interpret the UV-image as describing the distribution
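
A minimal sketch of what =meta-data= might look like, assuming the
=BlenderLoader= exposes each Custom Property as jMonkeyEngine
user-data on the imported node (the real helper may instead unpack a
nested properties structure; the names here are illustrative):

#+begin_src clojure
(import '(com.jme3.scene Spatial))

;; Hedged sketch: assumes each Blender Custom Property is stored
;; directly as user-data under its own name.
(defn meta-data
  "Return the Blender Custom Property named `key` on `node`,
   or nil if the property is absent."
  [^Spatial node key]
  (.getUserData node key))
#+end_src
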
--- a/thesis/cortex.org	Fri Mar 28 15:30:23 2014 -0400
+++ b/thesis/cortex.org	Fri Mar 28 16:25:31 2014 -0400
@@ -366,9 +366,9 @@
   is a vast and rich place, and for now simulations are a very poor
   reflection of its complexity. It may be that there is a significant
   qualitative difference between dealing with senses in the real
-   world and dealing with pale facsimiles of them in a
-   simulation. What are the advantages and disadvantages of a
-   simulation vs. reality?
+   world and dealing with pale facsimiles of them in a simulation.
+   What are the advantages and disadvantages of a simulation vs.
+   reality?

*** Simulation

@@ -445,6 +445,85 @@
   simulations of very simple creatures in =CORTEX= generally run at
   40x on my machine!

+** What is a sense?
+
+   If =CORTEX= is to support a wide variety of senses, it would help
+   to have a better understanding of what a ``sense'' actually is!
+   While vision, touch, and hearing all seem like they are quite
+   different things, I was surprised to learn during the course of
+   this thesis that they (and all physical senses) can be expressed
+   as exactly the same mathematical object, due to a dimensional
+   argument!
+
+   Human beings are three-dimensional objects, and the nerves that
+   transmit data from our various sense organs to our brain are
+   essentially one-dimensional. This leaves up to two dimensions in
+   which our sensory information may flow. For example, imagine your
+   skin: it is a two-dimensional surface around a three-dimensional
+   object (your body). It has discrete touch sensors embedded at
+   various points, and the density of these sensors corresponds to
+   the sensitivity of that region of skin. Each touch sensor connects
+   to a nerve, all of which eventually are bundled together as they
+   travel up the spinal cord to the brain. Intersect the spinal
+   nerves with a guillotining plane and you will see all of the
+   sensory data of the skin revealed in a roughly circular
+   two-dimensional image which is the cross section of the spinal
+   cord. Points that are close together on this image represent
+   touch sensors that are /probably/ close together on the skin,
+   although there is of course some cutting and rearrangement that
+   has to be done to transfer the complicated surface of the skin
+   onto a two-dimensional image.
+
+   Most human senses consist of many discrete sensors of various
+   properties distributed along a surface at various densities. For
+   skin, it is Pacinian corpuscles, Meissner's corpuscles, Merkel's
+   disks, and Ruffini's endings, which detect pressure and vibration
+   of various intensities. For ears, it is the stereocilia
+   distributed along the basilar membrane inside the cochlea; each
+   one is sensitive to a slightly different frequency of sound. For
+   eyes, it is rods and cones distributed along the surface of the
+   retina. In each case, we can describe the sense with a surface
+   and a distribution of sensors along that surface.
+
+   The neat idea is that every human sense can be effectively
+   described in terms of a surface containing embedded sensors. If
+   the sense had any more dimensions, then there wouldn't be enough
+   room in the spinal cord to transmit the information!
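+
+   As a hedged, illustrative sketch (these names are not the final
+   =CORTEX= API), that uniform mathematical object is nothing more
+   than a surface together with a distribution of sensor points on
+   it:
+
+   #+begin_src clojure
+   ;; Every sense reduces to the same shape of data: a surface plus
+   ;; a seq of normalized [u v] sensor coordinates on that surface.
+   (def fingertip-touch
+     {:surface :finger-skin
+      :sensors [[0.50 0.91] [0.52 0.93] [0.48 0.92] ; dense at the tip
+                [0.50 0.10]]})                      ; sparse elsewhere
+
+   (defn sensor-count [sense] (count (:sensors sense)))
+   #+end_src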
+
+   Therefore, =CORTEX= must support the ability to create objects
+   and then be able to ``paint'' points along their surfaces to
+   describe each sense.
+
+   Fortunately this idea is already a well-known computer graphics
+   technique called /UV-mapping/. The three-dimensional surface of a
+   model is cut and smooshed until it fits on a two-dimensional
+   image. You paint whatever you want on that image, and when the
+   three-dimensional shape is rendered in a game the smooshing and
+   cutting is reversed and the image appears on the
+   three-dimensional object.
+
+   To make a sense, interpret the UV-image as describing the
+   distribution of that sense's sensors. To get different types of
+   sensors, you can either use a different color for each type of
+   sensor, or use multiple UV-maps, each labeled with that sensor
+   type. I generally use a white pixel to mean the presence of a
+   sensor and a black pixel to mean the absence of a sensor, and use
+   one UV-map for each sensor-type within a given sense.
+
+   #+caption: The UV-map for an elongated icosphere. The white
+   #+caption: dots each represent a touch sensor. They are dense
+   #+caption: in the regions that describe the tip of the finger,
+   #+caption: and less dense along the dorsal side of the finger
+   #+caption: opposite the tip.
+   #+name: finger-UV
+   #+ATTR_LaTeX: :width 10cm
+   [[./images/finger-UV.png]]
+
+   #+caption: Ventral side of the UV-mapped finger. Notice the
+   #+caption: density of touch sensors at the tip.
+   #+name: finger-side-view
+   #+ATTR_LaTeX: :width 10cm
+   [[./images/finger-1.png]]
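+
+   A hedged sketch of how such a sensor map could be read back out
+   of the image (the function name is illustrative, and the actual
+   =CORTEX= pixel-handling code may differ):
+
+   #+begin_src clojure
+   (import '(javax.imageio ImageIO) '(java.io File))
+
+   ;; Collect the normalized [u v] coordinates of every white pixel
+   ;; in a sensor UV-map: white marks a sensor, black its absence.
+   (defn white-coordinates [image-path]
+     (let [image (ImageIO/read (File. image-path))
+           w (.getWidth image) h (.getHeight image)]
+       (for [x (range w), y (range h)
+             :when (= 0xFFFFFF (bit-and 0xFFFFFF (.getRGB image x y)))]
+         [(/ x (double w)) (/ y (double h))])))
+   #+end_src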
+
** COMMENT Video game engines are a great starting point

   I did not need to write my own physics simulation code or shader to
@@ -462,11 +541,12 @@
   game engine will allow you to efficiently create multiple cameras
   in the simulated world that can be used as eyes. Video game systems
   offer integrated asset management for things like textures and
-   creature models, providing an avenue for defining creatures.
-   Finally, because video game engines support a large number of
-   users, if I don't stray too far from the base system, other
-   researchers can turn to this community for help when doing their
-   research.
+   creature models, providing an avenue for defining creatures. They
+   also understand UV-mapping, since this technique is used to apply
+   a texture to a model. Finally, because video game engines support
+   a large number of users, as long as =CORTEX= doesn't stray too
+   far from the base system, other researchers can turn to this
+   community for help when doing their research.

** COMMENT =CORTEX= is based on jMonkeyEngine3

@@ -510,43 +590,83 @@
   write my code in clojure, an implementation of =LISP= that runs on
   the JVM.

-** COMMENT Bodies are composed of segments connected by joints
+** =CORTEX= uses Blender to create creature models

   For the simple worm-like creatures I will use later on in this
   thesis, I could define a simple API in =CORTEX= that would allow
   one to create boxes, spheres, etc., and leave that API as the sole
   way to create creatures. However, for =CORTEX= to truly be useful
-   for other projects, it needs to have a way to construct complicated
+   for other projects, it needs a way to construct complicated
   creatures. If possible, it would be nice to leverage work that has
   already been done by the community of 3D modelers, or at least
   enable people who are talented at modeling but not programming to
-   design =CORTEX= creatures.
+   design =CORTEX= creatures.

   Therefore, I use Blender, a free 3D modeling program, as the main
   way to create creatures in =CORTEX=. However, the creatures modeled
   in Blender must also be simple to simulate in jMonkeyEngine3's game
-   engine, and must also be easy to rig with =CORTEX='s senses.
-
-   While trying to find a good compromise for body-design, one option
-   I ultimately rejected is to use blender's [[http://wiki.blender.org/index.php/Doc:2.6/Manual/Rigging/Armatures][armature]] system. The idea
-   would have been to define a mesh which describes the creature's
-   entire body. To this you add a skeleton which deforms this mesh
-   (called rigging). This technique is used extensively to model
-   humans and create realistic animations. It is not a good technique
-   for physical simulation, because deformable surfaces are hard to
-   model. Humans work like a squishy bag with some hard bones to give
-   it shape. The bones are easy to simulate physically, but they
-   interact with thr world though the skin, which is contiguous, but
-   does not have a constant shape. In order to simulate skin you need
-   some way to continuously update the physical model of the skin
-   along with the movement of the bones. Given that bullet is
-   optimized for rigid, solid objects, this leads to unmanagable
-   computation and incorrect simulation.
+   engine, and must also be easy to rig with =CORTEX='s senses. I
+   accomplish this with extensive use of Blender's ``empty nodes.''

-   Instead of using the human-like ``deformable bag of bones''
-   approach, I decided to base my body plans on multiple solid objects
-   that are connected by joints, inspired by the robot =EVE= from the
-   movie WALL-E.
+   Empty nodes have no mass, physical presence, or appearance, but
+   they can hold metadata and have names. I use a tree structure of
+   empty nodes to specify senses in the following manner:
+
+   - Create a single top-level empty node whose name is the name of
+     the sense.
+   - Add empty nodes which each contain meta-data relevant to the
+     sense, including a UV-map describing the number/distribution of
+     sensors if applicable.
+   - Make each empty node the child of the top-level node.
+
+   #+caption: An example of annotating a creature model with empty
+   #+caption: nodes to describe the layout of senses. There are
+   #+caption: multiple empty nodes which each describe the position
+   #+caption: of muscles, ears, eyes, or joints.
+   #+name: sense-nodes
+   #+ATTR_LaTeX: :width 10cm
+   [[./images/empty-sense-nodes.png]]
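+
+   A hedged sketch of how this tree might be read back at load time
+   using jMonkeyEngine3's standard scene-graph API (the actual
+   =CORTEX= helpers may differ in names and details):
+
+   #+begin_src clojure
+   (import '(com.jme3.scene Node))
+
+   ;; Find the top-level empty node for a sense (e.g. "touch") and
+   ;; return its children, each of which holds that sense's
+   ;; meta-data.
+   (defn sense-nodes [^Node creature sense-name]
+     (when-let [sense-root (.getChild creature sense-name)]
+       (seq (.getChildren ^Node sense-root))))
+   #+end_src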
+
+** Bodies are composed of segments connected by joints
+
+   Blender is a general-purpose animation tool, which has been used
+   in the past to create high-quality movies such as Sintel
+   \cite{sintel}. Though Blender can model and render even
+   complicated things like water, it is crucial to keep models that
+   are meant to be simulated as creatures simple. =Bullet=, which
+   =CORTEX= uses through jMonkeyEngine3, is a rigid-body physics
+   system. This offers a compromise between the expressiveness of a
+   game level and the speed at which it can be simulated, and it
+   means that creatures should be naturally expressed as rigid
+   components held together by joint constraints.
+
+   But humans are more like a squishy bag wrapped around some hard
+   bones which define the overall shape. When we move, our skin
+   bends and stretches to accommodate the new positions of our
+   bones.
+
+   One way to make bodies composed of rigid pieces connected by
+   joints /seem/ more human-like is to use an /armature/ (or
+   /rigging/) system, which defines an overall ``body mesh'' and
+   defines how the mesh deforms as a function of the position of
+   each ``bone'', which is a standard rigid body. This technique is
+   used extensively to model humans and create realistic animations.
+   It is not a good technique for physical simulation, however,
+   because it creates a lie -- the skin is not a physical part of
+   the simulation and does not interact with any objects in the
+   world or with itself. Objects will pass right through the skin
+   until they come in contact with the underlying bone, which is a
+   physical object. Without simulating the skin, the sense of touch
+   has little meaning, and the creature's own vision will lie to it
+   about the true extent of its body. Simulating the skin as a
+   physical object requires some way to continuously update the
+   physical model of the skin along with the movement of the bones,
+   which is unacceptably slow compared to rigid-body simulation.
+
+   Therefore, instead of using the human-like ``deformable bag of
+   bones'' approach, I decided to base my body plans on multiple
+   solid objects that are connected by joints, inspired by the robot
+   =EVE= from the movie WALL-E.
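+
+   As a hedged sketch of what ``solid sections connected by
+   joints'' means at the Bullet level (standard jMonkeyEngine3
+   classes; the sizes, masses, and pivots are illustrative, and
+   materials and physics-space registration are omitted):
+
+   #+begin_src clojure
+   (import '(com.jme3.scene Geometry)
+           '(com.jme3.scene.shape Box)
+           '(com.jme3.bullet.control RigidBodyControl)
+           '(com.jme3.bullet.joints Point2PointJoint)
+           '(com.jme3.math Vector3f))
+
+   ;; Two rigid sections held together by a point-to-point joint.
+   (defn jointed-pair []
+     (let [section-a (doto (Geometry. "a" (Box. 0.5 0.5 0.5))
+                       (.addControl (RigidBodyControl. 1.0)))
+           section-b (doto (Geometry. "b" (Box. 0.5 0.5 0.5))
+                       (.addControl (RigidBodyControl. 1.0)))
+           joint (Point2PointJoint.
+                  (.getControl section-a RigidBodyControl)
+                  (.getControl section-b RigidBodyControl)
+                  (Vector3f. 0.5 0.0 0.0)     ; pivot in a's space
+                  (Vector3f. -0.5 0.0 0.0))]  ; pivot in b's space
+       {:sections [section-a section-b] :joint joint}))
+   #+end_src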

   #+caption: =EVE= from the movie WALL-E. This body plan turns
   #+caption: out to be much better suited to my purposes than a more
@@ -558,32 +678,20 @@
   together by invisible joint constraints. This is what I mean by
   ``eve-like''. The main reason that I use eve-style bodies is for
   efficiency, and so that there will be correspondence between the
-   AI's vision and the physical presence of its body. Each individual
+   AI's senses and the physical presence of its body. Each individual
   section is simulated by a separate rigid body that corresponds
   exactly with its visual representation and does not change.
   Sections are connected by invisible joints that are well supported
   in jMonkeyEngine3. Bullet, the physics backend for jMonkeyEngine3,
   can efficiently simulate hundreds of rigid bodies connected by
-   joints. Sections do not have to stay as one piece forever; they can
-   be dynamically replaced with multiple sections to simulate
-   splitting in two. This could be used to simulate retractable claws
-   or =EVE='s hands, which are able to coalesce into one object in the
-   movie.
+   joints. Just because sections are rigid does not mean they have to
+   stay as one piece forever; they can be dynamically replaced with
+   multiple sections to simulate splitting in two. This could be used
+   to simulate retractable claws or =EVE='s hands, which are able to
+   coalesce into one object in the movie.
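+
+   A hedged sketch of such a dynamic replacement (the names are
+   illustrative; a real implementation would also transfer
+   velocities and re-create joints on the new halves):
+
+   #+begin_src clojure
+   (import '(com.jme3.scene Node Spatial)
+           '(com.jme3.bullet PhysicsSpace))
+
+   ;; Swap one rigid section for two halves: detach the original
+   ;; from the scene graph and physics space, attach replacements.
+   (defn split-section!
+     [^PhysicsSpace physics ^Node parent ^Spatial section half-a half-b]
+     (.removeAll physics section)
+     (.detachChild parent section)
+     (doseq [^Spatial half [half-a half-b]]
+       (.attachChild parent half)
+       (.addAll physics half)))
+   #+end_src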

*** Solidifying/Connecting the body

-   Importing bodies from =CORTEX= into blender involves encoding
-   metadata into the blender file that specifies the mass of each
-   component and the joints by which those components are connected. I
-   do this in Blender in two ways. First is by using the ``metadata''
-   field of each solid object to specify the mass. Second is by using
-   Blender ``empty nodes'' to specify the position and type of each
-   joint. Empty nodes have no mass, physical presence, or appearance,
-   but they can hold metadata and have names. I use a tree structure
-   of empty nodes to specify joints. There is a parent node named
-   ``joints'', and a series of empty child nodes of the ``joints''
-   node that each represent a single joint.
-
   #+caption: View of the hand model in Blender showing the main ``joints''
   #+caption: node (highlighted in yellow) and its children which each
   #+caption: represent a joint in the hand. Each joint node has metadata
@@ -591,7 +699,6 @@
   #+name: blender-hand
   #+ATTR_LaTeX: :width 10cm
   [[./images/hand-screenshot1.png]]
-

   =CORTEX= creates a creature in two steps: first, it traverses the
   nodes in the blender file and creates physical representations for
@@ -831,6 +938,7 @@
   [[./images/physical-hand.png]]


+
** Eyes reuse standard video game components

** Hearing is hard; =CORTEX= does it right

Binary file thesis/images/empty-sense-nodes.png has changed
Binary file thesis/images/finger-1.png has changed