diff org/intro.org @ 333:f4ef73370da1

add file.
author Robert McIntyre <rlm@mit.edu>
date Fri, 20 Jul 2012 11:21:04 -0500
parents b84701e4f6ae
children bb81cef09ad7
line wrap: on
line diff
     1.1 --- a/org/intro.org	Thu Jul 19 19:38:45 2012 -0500
     1.2 +++ b/org/intro.org	Fri Jul 20 11:21:04 2012 -0500
     1.3 @@ -8,170 +8,193 @@
     1.4  #+babel: :mkdirp yes :noweb yes
     1.5  
     1.6  * Background
     1.7 -Artificial Intelligence has tried and failed for more than half a
     1.8 -century to produce programs as flexible, creative, and "intelligent"
     1.9 -as the human mind itself. Clearly, we are still missing some important
    1.10 -ideas concerning intelligent programs or we would have strong AI
    1.11 -already. What idea could be missing?
    1.12 +
    1.13 +Artificial Intelligence has tried and failed for more than
    1.14 +half a century to produce programs as flexible, creative,
    1.15 +and "intelligent" as the human mind itself. Clearly, we are
    1.16 +still missing some important ideas concerning intelligent
    1.17 +programs or we would have strong AI already. What idea could
    1.18 +be missing?
    1.19  
    1.20  When Turing first proposed his famous "Turing Test" in the
    1.21 -groundbreaking paper [[../sources/turing.pdf][/Computing Machines and Intelligence/]], he gave
    1.22 -little importance to how a computer program might interact with the
    1.23 -world:
     1.24 +groundbreaking paper [[../sources/turing.pdf][/Computing Machinery and Intelligence/]],
    1.25 +he gave little importance to how a computer program might
    1.26 +interact with the world:
    1.27  
    1.28  #+BEGIN_QUOTE
    1.29 -\ldquo{}We need not be too concerned about the legs, eyes, etc. The example of
    1.30 -Miss Helen Keller shows that education can take place provided that
    1.31 -communication in both directions between teacher and pupil can take
    1.32 -place by some means or other.\rdquo{}
    1.33 +\ldquo{}We need not be too concerned about the legs, eyes,
    1.34 +etc. The example of Miss Helen Keller shows that education
    1.35 +can take place provided that communication in both
    1.36 +directions between teacher and pupil can take place by some
    1.37 +means or other.\rdquo{}
    1.38  #+END_QUOTE
    1.39  
    1.40 -And from the example of Hellen Keller he went on to assume that the
    1.41 -only thing a fledgling AI program could need by way of communication
    1.42 -is a teletypewriter. But Hellen Keller did possess vision and hearing
    1.43 -for the first few months of her life, and her tactile sense was far
    1.44 -more rich than any text-stream could hope to achieve. She possessed a
    1.45 -body she could move freely, and had continual access to the real world
    1.46 -to learn from her actions.
     1.47 +And from the example of Helen Keller he went on to assume
     1.48 +that the only thing a fledgling AI program could need by way
     1.49 +of communication is a teletypewriter. But Helen Keller did
     1.50 +possess vision and hearing for the first nineteen months of
     1.51 +her life, and her tactile sense was far richer than any
     1.52 +text-stream could hope to achieve. She possessed a body she
     1.53 +could move freely, and had continual access to the real
     1.54 +world to learn from her actions.
    1.55  
    1.56 -I believe that our programs are suffering from too little sensory
    1.57 -input to become really intelligent. Imagine for a moment that you
    1.58 -lived in a world completely cut off form all sensory stimulation. You
    1.59 -have no eyes to see, no ears to hear, no mouth to speak. No body, no
    1.60 -taste, no feeling whatsoever. The only sense you get at all is a
    1.61 -single point of light, flickering on and off in the void. If this was
    1.62 -your life from birth, you would never learn anything, and could never
    1.63 -become intelligent. Actual humans placed in sensory deprivation
    1.64 -chambers experience hallucinations and can begin to loose their sense
    1.65 -of reality. Most of the time, the programs we write are in exactly
    1.66 -this situation. They do not interface with cameras and microphones,
    1.67 -and they do not control a real or simulated body or interact with any
    1.68 -sort of world.
    1.69 +I believe that our programs are suffering from too little
    1.70 +sensory input to become really intelligent. Imagine for a
     1.71 +moment that you lived in a world completely cut off from all
    1.72 +sensory stimulation. You have no eyes to see, no ears to
    1.73 +hear, no mouth to speak. No body, no taste, no feeling
    1.74 +whatsoever. The only sense you get at all is a single point
     1.75 +of light, flickering on and off in the void. If this were
    1.76 +your life from birth, you would never learn anything, and
    1.77 +could never become intelligent. Actual humans placed in
    1.78 +sensory deprivation chambers experience hallucinations and
     1.79 +can begin to lose their sense of reality. Most of the time,
    1.80 +the programs we write are in exactly this situation. They do
    1.81 +not interface with cameras and microphones, and they do not
    1.82 +control a real or simulated body or interact with any sort
    1.83 +of world.
    1.84  
    1.85  * Simulation vs. Reality
    1.86 +
     1.87  I want to demonstrate that multiple senses are what enable
    1.88 -intelligence. There are two ways of playing around with senses and
    1.89 -computer programs:
    1.90 -
    1.91 +intelligence. There are two ways of playing around with
    1.92 +senses and computer programs:
    1.93  
    1.94  ** Simulation
    1.95 -The first is to go entirely with simulation: virtual world, virtual
    1.96 -character, virtual senses. The advantages are that when everything is
    1.97 -a simulation, experiments in that simulation are absolutely
    1.98 -reproducible. It's also easier to change the character and world to
    1.99 -explore new situations and different sensory combinations.
   1.100  
   1.101 -If the world is to be simulated on a computer, then not only do you
   1.102 -have to worry about whether the character's senses are rich enough to
   1.103 -learn from the world, but whether the world itself is rendered with
   1.104 -enough detail and realism to give enough working material to the
   1.105 -character's senses. To name just a few difficulties facing modern
   1.106 -physics simulators: destructibility of the environment, simulation of
   1.107 -water/other fluids, large areas, nonrigid bodies, lots of objects,
   1.108 -smoke. I don't know of any computer simulation that would allow a
   1.109 -character to take a rock and grind it into fine dust, then use that
   1.110 -dust to make a clay sculpture, at least not without spending years
   1.111 -calculating the interactions of every single small grain of
   1.112 -dust. Maybe a simulated world with today's limitations doesn't provide
   1.113 +The first is to go entirely with simulation: virtual world,
   1.114 +virtual character, virtual senses. The advantages are that
   1.115 +when everything is a simulation, experiments in that
   1.116 +simulation are absolutely reproducible. It's also easier to
   1.117 +change the character and world to explore new situations and
   1.118 +different sensory combinations.
   1.119 +
   1.120 +If the world is to be simulated on a computer, then not only
   1.121 +do you have to worry about whether the character's senses
   1.122 +are rich enough to learn from the world, but whether the
   1.123 +world itself is rendered with enough detail and realism to
   1.124 +give enough working material to the character's senses. To
   1.125 +name just a few difficulties facing modern physics
   1.126 +simulators: destructibility of the environment, simulation
   1.127 +of water/other fluids, large areas, nonrigid bodies, lots of
   1.128 +objects, smoke. I don't know of any computer simulation that
   1.129 +would allow a character to take a rock and grind it into
   1.130 +fine dust, then use that dust to make a clay sculpture, at
   1.131 +least not without spending years calculating the
   1.132 +interactions of every single small grain of dust. Maybe a
   1.133 +simulated world with today's limitations doesn't provide
   1.134  enough richness for real intelligence to evolve.
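
The reproducibility claim can be made concrete with a short
sketch (hypothetical code, not part of this project): when
the whole experiment is a deterministic function of an
initial state and a random seed, re-running it reproduces
the result bit-for-bit.

#+BEGIN_SRC java
// Sketch: a simulated "experiment" driven by a seeded RNG.
// Same seed in, same trajectory out -- every time.
import java.util.Random;

public class Reproducible {
    static double runExperiment(long seed) {
        Random rng = new Random(seed);
        double state = 0.0;
        for (int i = 0; i < 1000; i++) {
            state += rng.nextGaussian() * 0.01; // one noisy dynamics step
        }
        return state;
    }

    public static void main(String[] args) {
        double a = runExperiment(42);
        double b = runExperiment(42);
        System.out.println(a == b); // prints "true"
    }
}
#+END_SRC

Nothing comparable exists for a physical robot, where no two
trials ever start from exactly the same state.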
   1.135  
   1.136  ** Reality
   1.137  
   1.138 -The other approach for playing with senses is to hook your software up
   1.139 -to real cameras, microphones, robots, etc., and let it loose in the
   1.140 -real world. This has the advantage of eliminating concerns about
   1.141 -simulating the world at the expense of increasing the complexity of
   1.142 -implementing the senses. Instead of just grabbing the current rendered
   1.143 -frame for processing, you have to use an actual camera with real
   1.144 -lenses and interact with photons to get an image. It is much harder to
   1.145 -change the character, which is now partly a physical robot of some
   1.146 -sort, since doing so involves changing things around in the real world
   1.147 -instead of modifying lines of code. While the real world is very rich
   1.148 -and definitely provides enough stimulation for intelligence to develop
   1.149 -as evidenced by our own existence, it is also uncontrollable in the
   1.150 -sense that a particular situation cannot be recreated perfectly or
   1.151 -saved for later use. It is harder to conduct science because it is
   1.152 -harder to repeat an experiment. The worst thing about using the real
   1.153 -world instead of a simulation is the matter of time. Instead of
   1.154 -simulated time you get the constant and unstoppable flow of real
   1.155 -time. This severely limits the sorts of software you can use to
   1.156 -program the AI because all sense inputs must be handled in real
   1.157 -time. Complicated ideas may have to be implemented in hardware or may
   1.158 -simply be impossible given the current speed of our
   1.159 -processors. Contrast this with a simulation, in which the flow of time
   1.160 -in the simulated world can be slowed down to accommodate the
   1.161 -limitations of the character's programming. In terms of cost, doing
   1.162 -everything in software is far cheaper than building custom real-time
   1.163 +The other approach for playing with senses is to hook your
   1.164 +software up to real cameras, microphones, robots, etc., and
   1.165 +let it loose in the real world. This has the advantage of
   1.166 +eliminating concerns about simulating the world at the
   1.167 +expense of increasing the complexity of implementing the
   1.168 +senses. Instead of just grabbing the current rendered frame
   1.169 +for processing, you have to use an actual camera with real
   1.170 +lenses and interact with photons to get an image. It is much
   1.171 +harder to change the character, which is now partly a
   1.172 +physical robot of some sort, since doing so involves
   1.173 +changing things around in the real world instead of
   1.174 +modifying lines of code. While the real world is very rich
   1.175 +and definitely provides enough stimulation for intelligence
   1.176 +to develop as evidenced by our own existence, it is also
   1.177 +uncontrollable in the sense that a particular situation
   1.178 +cannot be recreated perfectly or saved for later use. It is
   1.179 +harder to conduct science because it is harder to repeat an
   1.180 +experiment. The worst thing about using the real world
   1.181 +instead of a simulation is the matter of time. Instead of
   1.182 +simulated time you get the constant and unstoppable flow of
   1.183 +real time. This severely limits the sorts of software you
   1.184 +can use to program the AI because all sense inputs must be
   1.185 +handled in real time. Complicated ideas may have to be
   1.186 +implemented in hardware or may simply be impossible given
   1.187 +the current speed of our processors. Contrast this with a
   1.188 +simulation, in which the flow of time in the simulated world
   1.189 +can be slowed down to accommodate the limitations of the
   1.190 +character's programming. In terms of cost, doing everything
   1.191 +in software is far cheaper than building custom real-time
   1.192  hardware. All you need is a laptop and some patience.
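
The point about time is worth spelling out with a sketch
(hypothetical code, not from this project): in a simulation,
simulated time advances by a fixed dt per step, so however
long sense processing takes in wall-clock time, the
character never misses a sensory frame; the world simply
waits.

#+BEGIN_SRC java
// Sketch: fixed-timestep loop where simulated time is
// decoupled from wall-clock time. Slow processing slows the
// simulation down instead of dropping sense data.
public class FixedTimestep {
    public static void main(String[] args) {
        final double dt = 1.0 / 60.0; // simulated seconds per step
        double simulatedTime = 0.0;

        for (int step = 0; step < 120; step++) {
            // stand-in for physics + sense simulation; this may
            // take arbitrarily long in real time without harm
            simulatedTime += dt;
        }

        // 120 steps of 1/60 s: 2 simulated seconds
        System.out.printf("simulated time: %.2f s%n", simulatedTime);
    }
}
#+END_SRC

A real robot gets no such luxury: photons arrive whether or
not the vision code has finished with the last frame.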
   1.193  
   1.194  * Choose a Simulation Engine
   1.195  
   1.196 -Mainly because of issues with controlling the flow of time, I chose to
   1.197 -simulate both the world and the character. I set out to make a world
   1.198 -in which I could embed a character with multiple senses. My main goal
   1.199 -is to make an environment where I can perform further experiments in
   1.200 -simulated senses.
   1.201 +Mainly because of issues with controlling the flow of time,
   1.202 +I chose to simulate both the world and the character. I set
   1.203 +out to make a world in which I could embed a character with
   1.204 +multiple senses. My main goal is to make an environment
   1.205 +where I can perform further experiments in simulated senses.
   1.206  
   1.207 -I examined many different 3D environments to try and find something I
   1.208 -would use as the base for my simulation; eventually the choice came
   1.209 -down to three engines: the Quake II engine, the Source Engine, and
   1.210 -jMonkeyEngine.
   1.211 +I examined many different 3D environments to try and find
   1.212 +something I would use as the base for my simulation;
   1.213 +eventually the choice came down to three engines: the Quake
   1.214 +II engine, the Source Engine, and jMonkeyEngine.
   1.215  
   1.216  ** [[http://www.idsoftware.com][Quake II]]/[[http://www.bytonic.de/html/jake2.html][Jake2]]
   1.217  
   1.218 -I spent a bit more than a month working with the Quake II Engine from
   1.219 -ID software to see if I could use it for my purposes. All the source
   1.220 -code was released by ID software into the Public Domain several years
   1.221 -ago, and as a result it has been ported and modified for many
   1.222 -different reasons. This engine was famous for its advanced use of
   1.223 +I spent a bit more than a month working with the Quake II
    1.224 +Engine from id Software to see if I could use it for my
    1.225 +purposes. All the source code was released by id Software
    1.226 +under the GPL several years ago, and as a result it has
    1.227 +been ported and modified for many different
   1.228 +reasons. This engine was famous for its advanced use of
   1.229  realistic shading and had decent and fast physics
   1.230 -simulation. Researchers at Princeton [[http://papers.cnl.salk.edu/PDFs/Intracelllular%20Dynamics%20of%20Virtual%20Place%20Cells%202011-4178.pdf][used this code]] ([[http://brainwindows.wordpress.com/2009/10/14/playing-quake-with-a-real-mouse/][video]]) to study
   1.231 -spatial information encoding in the hippocampal cells of rats. Those
   1.232 -researchers created a special Quake II level that simulated a maze,
   1.233 -and added an interface where a mouse could run on top of a ball in
   1.234 -various directions to move the character in the simulated maze. They
   1.235 -measured hippocampal activity during this exercise to try and tease
   1.236 -out the method in which spatial data was stored in that area of the
   1.237 -brain. I find this promising because if a real living rat can interact
   1.238 -with a computer simulation of a maze in the same way as it interacts
   1.239 -with a real-world maze, then maybe that simulation is close enough to
   1.240 -reality that a simulated sense of vision and motor control interacting
   1.241 -with that simulation could reveal useful information about the real
   1.242 -thing. There is a Java port of the original C source code called
   1.243 -Jake2. The port demonstrates Java's OpenGL bindings and runs anywhere
   1.244 -from 90% to 105% as fast as the C version. After reviewing much of the
   1.245 -source of Jake2, I eventually rejected it because the engine is too
   1.246 -tied to the concept of a first-person shooter game. One of the
   1.247 -problems I had was that there do not seem to be any easy way to attach
   1.248 -multiple cameras to a single character. There are also several physics
   1.249 -clipping issues that are corrected in a way that only applies to the
   1.250 -main character and does not apply to arbitrary objects. While there is
   1.251 -a large community of level modders, I couldn't find a community to
   1.252 -support using the engine to make new things.
   1.253 +simulation. Researchers at Princeton [[http://papers.cnl.salk.edu/PDFs/Intracelllular%20Dynamics%20of%20Virtual%20Place%20Cells%202011-4178.pdf][used this code]] ([[http://brainwindows.wordpress.com/2009/10/14/playing-quake-with-a-real-mouse/][video]])
   1.254 +to study spatial information encoding in the hippocampal
    1.255 +cells of mice. Those researchers created a special Quake II
   1.256 +level that simulated a maze, and added an interface where a
   1.257 +mouse could run on top of a ball in various directions to
   1.258 +move the character in the simulated maze. They measured
   1.259 +hippocampal activity during this exercise to try and tease
   1.260 +out the method in which spatial data was stored in that area
   1.261 +of the brain. I find this promising because if a real living
    1.262 +mouse can interact with a computer simulation of a maze in the
   1.263 +same way as it interacts with a real-world maze, then maybe
   1.264 +that simulation is close enough to reality that a simulated
   1.265 +sense of vision and motor control interacting with that
   1.266 +simulation could reveal useful information about the real
   1.267 +thing. There is a Java port of the original C source code
   1.268 +called Jake2. The port demonstrates Java's OpenGL bindings
   1.269 +and runs anywhere from 90% to 105% as fast as the C
   1.270 +version. After reviewing much of the source of Jake2, I
   1.271 +rejected it because the engine is too tied to the concept of
   1.272 +a first-person shooter game. One of the problems I had was
   1.273 +that there does not seem to be any easy way to attach
   1.274 +multiple cameras to a single character. There are also
   1.275 +several physics clipping issues that are corrected in a way
    1.276 +that only applies to the main character and does not apply to
   1.277 +arbitrary objects. While there is a large community of level
   1.278 +modders, I couldn't find a community to support using the
   1.279 +engine to make new things.
   1.280  
   1.281  ** [[http://source.valvesoftware.com/][Source Engine]]
   1.282  
   1.283 -The Source Engine evolved from the Quake II and Quake I engines and is
   1.284 -used by Valve in the Half-Life series of games. The physics simulation
   1.285 -in the Source Engine is quite accurate and probably the best out of
   1.286 -all the engines I investigated. There is also an extensive community
   1.287 -actively working with the engine. However, applications that use the
   1.288 -Source Engine must be written in C++, the code is not open, it only
   1.289 -runs on Windows, and the tools that come with the SDK to handle models
   1.290 -and textures are complicated and awkward to use.
   1.291 +The Source Engine evolved from the Quake II and Quake I
   1.292 +engines and is used by Valve in the Half-Life series of
   1.293 +games. The physics simulation in the Source Engine is quite
   1.294 +accurate and probably the best out of all the engines I
   1.295 +investigated. There is also an extensive community actively
   1.296 +working with the engine. However, applications that use the
   1.297 +Source Engine must be written in C++, the code is not open,
   1.298 +it only runs on Windows, and the tools that come with the
   1.299 +SDK to handle models and textures are complicated and
   1.300 +awkward to use.
   1.301  
   1.302  ** [[http://jmonkeyengine.com/][jMonkeyEngine3]]
   1.303  
   1.304 -jMonkeyEngine is a new library for creating games in Java. It uses
   1.305 -OpenGL to render to the screen and uses screengraphs to avoid drawing
   1.306 -things that do not appear on the screen. It has an active community
   1.307 -and several games in the pipeline. The engine was not built to serve
   1.308 -any particular game but is instead meant to be used for any 3D
   1.309 -game. After experimenting with each of these three engines and a few
   1.310 -others for about 2 months I settled on jMonkeyEngine. I chose it
   1.311 -because it had the most features out of all the open projects I looked
   1.312 -at, and because I could then write my code in Clojure, an
   1.313 -implementation of LISP that runs on the JVM.
   1.314 +jMonkeyEngine is a new library for creating games in
   1.315 +Java. It uses OpenGL to render to the screen and uses
    1.316 +scene graphs to avoid drawing things that do not appear on
   1.317 +the screen. It has an active community and several games in
   1.318 +the pipeline. The engine was not built to serve any
   1.319 +particular game but is instead meant to be used for any 3D
   1.320 +game. After experimenting with each of these three engines
    1.321 +and a few others for about two months, I settled on
   1.322 +jMonkeyEngine. I chose it because it had the most features
   1.323 +out of all the open projects I looked at, and because I
    1.324 +could then write my code in Clojure, a dialect of Lisp
    1.325 +that runs on the JVM.
   1.326  
   1.327  
   1.328