#+title: Simulated Senses
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Simulating senses for AI research using jMonkeyEngine3
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+babel: :mkdirp yes :noweb yes

* Background

Artificial Intelligence has tried and failed for more than half a century to produce programs as flexible, creative, and "intelligent" as the human mind itself. Clearly, we are still missing some important ideas concerning intelligent programs, or we would have strong AI already. What idea could be missing?

When Turing first proposed his famous "Turing Test" in the groundbreaking paper [[./sources/turing.pdf][/Computing Machinery and Intelligence/]], he gave little importance to how a computer program might interact with the world:

#+BEGIN_QUOTE
\ldquo{}We need not be too concerned about the legs, eyes, etc. The example of Miss Helen Keller shows that education can take place provided that communication in both directions between teacher and pupil can take place by some means or other.\rdquo{}
#+END_QUOTE

And from the example of Helen Keller he went on to assume that the only thing a fledgling AI program could need by way of communication is a teletypewriter. But Helen Keller did possess vision and hearing for the first few months of her life, and her tactile sense was far richer than any text stream could hope to achieve. She possessed a body she could move freely, and had continual access to the real world to learn from her actions.

I believe that our programs are suffering from too little sensory input to become really intelligent. Imagine for a moment that you lived in a world completely cut off from all sensory stimulation. You have no eyes to see, no ears to hear, no mouth to speak. No body, no taste, no feeling whatsoever. The only sense you get at all is a single point of light, flickering on and off in the void. If this were your life from birth, you would never learn anything and could never become intelligent. Actual humans placed in sensory deprivation chambers experience hallucinations and can begin to lose their sense of reality in as little as 15 minutes[sensory-deprivation]. Most of the time, the programs we write are in exactly this situation. They do not interface with cameras and microphones, and they do not control a real or simulated body or interact with any sort of world.

* Simulation vs. Reality

I want to demonstrate that multiple senses are what enable intelligence. There are two ways of playing around with senses and computer programs:

The first is to go entirely with simulation: virtual world, virtual character, virtual senses. The advantage is that when everything is a simulation, experiments in that simulation are absolutely reproducible. It's also easier to change the character and world to explore new situations and different sensory combinations.

** Issues with Simulation

If the world is to be simulated on a computer, then not only do you have to worry about whether the character's senses are rich enough to learn from the world, but also whether the world itself is rendered with enough detail and realism to give the character's senses enough working material. To name just a few difficulties facing modern physics simulators: destructibility of the environment, simulation of water and other fluids, large areas, non-rigid bodies, large numbers of objects, smoke.
I don't know of any computer simulation that would allow a character to take a rock and grind it into fine dust, then use that dust to make a clay sculpture, at least not without spending years calculating the interactions of every single small grain of dust. Maybe a simulated world with today's limitations doesn't provide enough richness for real intelligence to evolve.

** Issues with Reality

The other approach for playing with senses is to hook your software up to real cameras, microphones, robots, etc., and let it loose in the real world. This has the advantage of eliminating concerns about simulating the world, at the expense of increasing the complexity of implementing the senses. Instead of just grabbing the current rendered frame for processing, you have to use an actual camera with real lenses and interact with photons to get an image. It is much harder to change the character, which is now partly a physical robot of some sort, since doing so involves changing things around in the real world instead of modifying lines of code. While the real world is very rich and definitely provides enough stimulation for intelligence to develop, as evidenced by our own existence, it is also uncontrollable, in the sense that a particular situation cannot be recreated perfectly or saved for later use. It is harder to conduct science because it is harder to repeat an experiment. The worst thing about using the real world instead of a simulation is the matter of time. Instead of simulated time you get the constant and unstoppable flow of real time. This severely limits the sorts of software you can use to program the AI, because all sense inputs must be handled in real time. Complicated ideas may have to be implemented in hardware or may simply be impossible given the current speed of our processors. Contrast this with a simulation, in which the flow of time in the simulated world can be slowed down to accommodate the limitations of the character's programming. In terms of cost, doing everything in software is far cheaper than building custom real-time hardware. All you need is a laptop and some patience.

* Choose a Simulation Engine

Mainly because of issues with controlling the flow of time, I chose to simulate both the world and the character. I set out to make a minimal world in which I could embed a character with multiple senses. My main goal is to make an environment where I can perform further experiments in simulated senses.

As Carl Sagan once said, "If you wish to make an apple pie from scratch, you must first invent the universe." I examined many different 3D environments to try to find something I could use as the base for my simulation; eventually the choice came down to three engines: the Quake II engine, the Source Engine, and jMonkeyEngine.

** [[http://www.idsoftware.com][Quake II]]/[[http://www.bytonic.de/html/jake2.html][Jake2]]

I spent a bit more than a month working with the Quake II engine from id Software to see if I could use it for my purposes. All the source code was released by id Software into the public domain several years ago, and as a result it has been ported and modified for many different reasons. This engine was famous for its advanced use of realistic shading and had decent and fast physics simulation.
Researchers at Princeton [[http://www.nature.com/nature/journal/v461/n7266/pdf/nature08499.pdf][used this code]] to study spatial information encoding in hippocampal cells of mice. Those researchers created a special Quake II level that simulated a maze, and added an interface where a mouse could run in various directions on top of a ball to move the character through the simulated maze. They measured hippocampal activity during this exercise to try to tease out how spatial data is stored in that area of the brain. I find this promising because if a real living animal can interact with a computer simulation of a maze in the same way as it interacts with a real-world maze, then maybe that simulation is close enough to reality that a simulated sense of vision and motor control interacting with that simulation could reveal useful information about the real thing. There is a Java port of the original C source code called Jake2. The port demonstrates Java's OpenGL bindings and runs anywhere from 90% to 105% as fast as the C version. After reviewing much of the source of Jake2, I eventually rejected it because the engine is too tied to the concept of a first-person shooter game. One of the problems I had was that there does not seem to be any easy way to attach multiple cameras to a single character. There are also several physics clipping issues that are corrected in a way that only applies to the main character and does not apply to arbitrary objects. While there is a large community of level modders, I couldn't find a community to support using the engine to make new things.

** [[http://source.valvesoftware.com/][Source Engine]]

The Source Engine evolved from the Quake and Quake II engines and is used by Valve in the Half-Life series of games. The physics simulation in the Source Engine is quite accurate and probably the best out of all the engines I investigated. There is also an extensive community actively working with the engine. However, applications that use the Source Engine must be written in C++, the code is not open, it only runs on Windows, and the tools that come with the SDK for handling models and textures are complicated and awkward to use.

** [[http://jmonkeyengine.com/][jMonkeyEngine3]]

jMonkeyEngine is a new library for creating games in Java. It uses OpenGL to render to the screen and uses a scene graph to avoid drawing things that do not appear on the screen. It has an active community and several games in the pipeline. The engine was not built to serve any particular game but is instead meant to be used for any 3D game. After experimenting with each of these three engines and a few others for about two months, I settled on jMonkeyEngine. I chose it because it had the most features out of all the open projects I looked at, and because I could then write my code in Clojure, a dialect of Lisp that runs on the JVM...
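
Although the code for this project will be written in Clojure, the Java sketch below shows roughly what a minimal jMonkeyEngine3 "world" of the kind described above looks like: a single box in an otherwise empty scene, with the per-frame time scaled in the update callback to illustrate the earlier point about slowing simulated time relative to real time. The class name and the =TIME_SCALE= constant are illustrative assumptions; none of this is cortex code.

#+BEGIN_SRC java
// Sketch only: a minimal jMonkeyEngine3 application, not part of cortex.
import com.jme3.app.SimpleApplication;
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.scene.Geometry;
import com.jme3.scene.shape.Box;

public class MinimalWorld extends SimpleApplication {

    // Illustrative constant: values below 1 make this part of the world
    // evolve more slowly than wall-clock time.
    private static final float TIME_SCALE = 0.5f;

    private Geometry box;

    public static void main(String[] args) {
        // Opens a window and starts the engine's update loop.
        new MinimalWorld().start();
    }

    @Override
    public void simpleInitApp() {
        // Build one box geometry and attach it to the scene graph.
        box = new Geometry("box", new Box(1, 1, 1));
        Material mat = new Material(assetManager,
                                    "Common/MatDefs/Misc/Unshaded.j3md");
        mat.setColor("Color", ColorRGBA.Blue);
        box.setMaterial(mat);
        rootNode.attachChild(box);
    }

    @Override
    public void simpleUpdate(float tpf) {
        // tpf is the real time elapsed since the last frame; using a scaled
        // value decouples the box's motion from the flow of real time.
        box.rotate(0, tpf * TIME_SCALE, 0);
    }
}
#+END_SRC

A full simulation would apply the same scaling to every system that consumes the time step, which is exactly the kind of control over time that a real robot cannot offer.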