#+title: Simulated Senses
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Simulating senses for AI research using JMonkeyEngine3
#+keywords: Alan Turing, AI, simulated senses, jMonkeyEngine3, virtual world
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+babel: :mkdirp yes :noweb yes

* Background
Artificial Intelligence has tried and failed for more than half a
century to produce programs as flexible, creative, and "intelligent"
as the human mind itself. Clearly, we are still missing some important
ideas concerning intelligent programs or we would have strong AI
already. What idea could be missing?

When Turing first proposed his famous "Turing Test" in the
groundbreaking paper [[../sources/turing.pdf][/Computing Machinery and Intelligence/]], he gave
little importance to how a computer program might interact with the
world:

#+BEGIN_QUOTE
\ldquo{}We need not be too concerned about the legs, eyes, etc. The example of
Miss Helen Keller shows that education can take place provided that
communication in both directions between teacher and pupil can take
place by some means or other.\rdquo{}
#+END_QUOTE

And from the example of Helen Keller he went on to assume that the
only thing a fledgling AI program could need by way of communication
is a teletypewriter. But Helen Keller did possess vision and hearing
for the first few months of her life, and her tactile sense was far
richer than any text stream could hope to achieve. She possessed a
body she could move freely, and had continual access to the real world
to learn from her actions.

I believe that our programs are suffering from too little sensory
input to become really intelligent. Imagine for a moment that you
lived in a world completely cut off from all sensory stimulation. You
have no eyes to see, no ears to hear, no mouth to speak. No body, no
taste, no feeling whatsoever. The only sense you get at all is a
single point of light, flickering on and off in the void. If this were
your life from birth, you would never learn anything, and could never
become intelligent. Actual humans placed in sensory deprivation
chambers experience hallucinations and can begin to lose their sense
of reality. Most of the time, the programs we write are in exactly
this situation. They do not interface with cameras and microphones,
and they do not control a real or simulated body or interact with any
sort of world.

* Simulation vs. Reality
I want to demonstrate that multiple senses are what enable
intelligence. There are two ways of playing around with senses and
computer programs:

** Simulation
The first is to go entirely with simulation: virtual world, virtual
character, virtual senses. The advantages are that when everything is
a simulation, experiments in that simulation are absolutely
reproducible. It's also easier to change the character and world to
explore new situations and different sensory combinations.

If the world is to be simulated on a computer, then not only do you
have to worry about whether the character's senses are rich enough to
learn from the world, but whether the world itself is rendered with
enough detail and realism to give enough working material to the
character's senses. To name just a few difficulties facing modern
physics simulators: destructibility of the environment, simulation of
water and other fluids, large areas, nonrigid bodies, lots of objects,
smoke. I don't know of any computer simulation that would allow a
character to take a rock and grind it into fine dust, then use that
dust to make a clay sculpture, at least not without spending years
calculating the interactions of every single small grain of
dust. Maybe a simulated world with today's limitations doesn't provide
enough richness for real intelligence to evolve.

** Reality

The other approach for playing with senses is to hook your software up
to real cameras, microphones, robots, etc., and let it loose in the
real world. This has the advantage of eliminating concerns about
simulating the world at the expense of increasing the complexity of
implementing the senses. Instead of just grabbing the current rendered
frame for processing, you have to use an actual camera with real
lenses and interact with photons to get an image. It is much harder to
change the character, which is now partly a physical robot of some
sort, since doing so involves changing things around in the real world
instead of modifying lines of code. While the real world is very rich
and definitely provides enough stimulation for intelligence to develop
(as evidenced by our own existence), it is also uncontrollable in the
sense that a particular situation cannot be recreated perfectly or
saved for later use. It is harder to conduct science because it is
harder to repeat an experiment. The worst thing about using the real
world instead of a simulation is the matter of time. Instead of
simulated time you get the constant and unstoppable flow of real
time. This severely limits the sorts of software you can use to
program the AI, because all sense inputs must be handled in real
time. Complicated ideas may have to be implemented in hardware or may
simply be impossible given the current speed of our
processors. Contrast this with a simulation, in which the flow of time
in the simulated world can be slowed down to accommodate the
limitations of the character's programming. In terms of cost, doing
everything in software is far cheaper than building custom real-time
hardware. All you need is a laptop and some patience.
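
This time-control point can be made concrete with a short sketch. The
following is a toy fixed-timestep loop in Java (the names and numbers
here are purely illustrative, not taken from any particular engine): the
simulated clock advances by the same fixed =dt= each tick, so an
arbitrarily slow sense-processing routine only stretches wall-clock
time, and the character never notices.

#+BEGIN_SRC java
// Illustrative sketch: simulated time advances by a fixed step per
// tick, no matter how long the AI's computation takes in real time.
public class FixedTimestep {
    public static void main(String[] args) throws InterruptedException {
        final double dt = 1.0 / 60.0;  // one simulated "frame" per tick
        double simulatedSeconds = 0.0;
        long wallStart = System.nanoTime();

        for (int tick = 0; tick < 120; tick++) {
            // Stand-in for arbitrarily expensive sense processing;
            // making this slower stretches only the wall-clock time.
            Thread.sleep(1);
            simulatedSeconds += dt;  // the world sees exactly one dt
        }

        double wallSeconds = (System.nanoTime() - wallStart) / 1e9;
        // The character experienced ~2 simulated seconds regardless of
        // how long the loop actually took to run.
        System.out.printf("simulated: %.2f s, wall: %.2f s%n",
                          simulatedSeconds, wallSeconds);
    }
}
#+END_SRC

A real engine does the same thing in reverse as well: if the character's
program is fast, the simulation can be run faster than real time.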

* Choose a Simulation Engine

Mainly because of issues with controlling the flow of time, I chose to
simulate both the world and the character. I set out to make a world
in which I could embed a character with multiple senses. My main goal
is to make an environment where I can perform further experiments in
simulated senses.

I examined many different 3D environments to try and find something I
would use as the base for my simulation; eventually the choice came
down to three engines: the Quake II engine, the Source Engine, and
jMonkeyEngine.

** [[http://www.idsoftware.com][Quake II]]/[[http://www.bytonic.de/html/jake2.html][Jake2]]

I spent a bit more than a month working with the Quake II engine from
id Software to see if I could use it for my purposes. All the source
code was released by id Software into the public domain several years
ago, and as a result it has been ported and modified for many
different reasons. This engine was famous for its advanced use of
realistic shading and had decent and fast physics
simulation. Researchers at Princeton [[http://papers.cnl.salk.edu/PDFs/Intracelllular%20Dynamics%20of%20Virtual%20Place%20Cells%202011-4178.pdf][used this code]] ([[http://brainwindows.wordpress.com/2009/10/14/playing-quake-with-a-real-mouse/][video]]) to study spatial
information encoding in the hippocampal cells of rats. Those
researchers created a special Quake II level that simulated a maze,
and added an interface where a mouse could run around inside a ball in
various directions to move the character in the simulated maze. They
measured hippocampal activity during this exercise to try and tease
out the method in which spatial data was stored in that area of the
brain. I find this promising because if a real living rat can interact
with a computer simulation of a maze in the same way as it interacts
with a real-world maze, then maybe that simulation is close enough to
reality that a simulated sense of vision and motor control interacting
with that simulation could reveal useful information about the real
thing. There is a Java port of the original C source code called
Jake2. The port demonstrates Java's OpenGL bindings and runs anywhere
from 90% to 105% as fast as the C version. After reviewing much of the
source of Jake2, I eventually rejected it because the engine is too
tied to the concept of a first-person shooter game. One of the
problems I had was that there did not seem to be any easy way to attach
multiple cameras to a single character. There are also several physics
clipping issues that are corrected in a way that only applies to the
main character and does not apply to arbitrary objects. While there is
a large community of level modders, I couldn't find a community to
support using the engine to make new things.

** [[http://source.valvesoftware.com/][Source Engine]]

The Source Engine evolved from the Quake II and Quake I engines and is
used by Valve in the Half-Life series of games. The physics simulation
in the Source Engine is quite accurate and probably the best out of
all the engines I investigated. There is also an extensive community
actively working with the engine. However, applications that use the
Source Engine must be written in C++, the code is not open, it only
runs on Windows, and the tools that come with the SDK to handle models
and textures are complicated and awkward to use.

** [[http://jmonkeyengine.com/][jMonkeyEngine3]]

jMonkeyEngine is a new library for creating games in Java. It uses
OpenGL to render to the screen and uses scene graphs to avoid drawing
things that do not appear on the screen. It has an active community
and several games in the pipeline. The engine was not built to serve
any particular game but is instead meant to be used for any 3D
game. After experimenting with each of these three engines and a few
others for about two months, I settled on jMonkeyEngine. I chose it
because it had the most features out of all the open projects I looked
at, and because I could then write my code in Clojure, an
implementation of Lisp that runs on the JVM.
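
The scene-graph idea behind that culling can be sketched in a few lines
of plain Java. This is a toy illustration only, not jMonkeyEngine's
actual API: nodes form a tree, each with a bounding volume (a 1-D
interval here, for simplicity), and any subtree whose bound falls
entirely outside the view is skipped without ever visiting its children.

#+BEGIN_SRC java
import java.util.ArrayList;
import java.util.List;

// Toy scene graph (not jMonkeyEngine's API): a subtree whose bounding
// interval lies outside the view is culled without being traversed.
public class SceneGraphDemo {
    static class Node {
        final String name;
        final double min, max;  // bounding interval along one axis
        final List<Node> children = new ArrayList<>();

        Node(String name, double min, double max) {
            this.name = name;
            this.min = min;
            this.max = max;
        }

        // Collect names of nodes whose bounds overlap the view interval.
        void collectVisible(double viewMin, double viewMax,
                            List<String> out) {
            if (max < viewMin || min > viewMax) return;  // cull subtree
            out.add(name);
            for (Node child : children) {
                child.collectVisible(viewMin, viewMax, out);
            }
        }
    }

    public static void main(String[] args) {
        Node root = new Node("root", 0, 100);
        Node nearHouse = new Node("near-house", 5, 15);
        Node farCity = new Node("far-city", 60, 90);
        farCity.children.add(new Node("far-tower", 70, 75));
        root.children.add(nearHouse);
        root.children.add(farCity);

        List<String> visible = new ArrayList<>();
        root.collectVisible(0, 20, visible);  // camera sees [0, 20]
        System.out.println(visible);  // far-city subtree is culled
    }
}
#+END_SRC

A real engine does the same with 3-D bounding volumes tested against the
camera frustum, but the payoff is identical: work is proportional to
what is visible, not to the size of the whole world.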