#+TITLE:Transcript of Aaron Sloman - Artificial Intelligence - Psychology - Oxford Interview
#+AUTHOR:Dylan Holmes
#+EMAIL:
#+STYLE: <link rel="stylesheet" type="text/css" href="../css/sloman.css" />

#+BEGIN_QUOTE
*Editor's note:* This is a working draft transcript which I made of
[[http://www.youtube.com/watch?feature=player_detailpage&v=iuH8dC7Snno][this nice interview]] of Aaron Sloman. Having just finished one
iteration of transcription, I still need to go in and clean up the
formatting and fix the parts that I misheard, so you can expect the
text to improve significantly in the near future.

To the extent that this is my work, you have my permission to make
copies of this transcript for your own purposes. Also, feel free to
e-mail me with comments or corrections.

You can send mail to =transcript@aurellem.org=.

Cheers,

---Dylan
#+END_QUOTE

* Introduction

** Aaron Sloman evolves into a philosopher of AI
[0:09] My name is Aaron Sloman. My first degree many years ago in
Cape Town University was in Physics and Mathematics, and I intended to
go and be a mathematician. I came to Oxford and encountered
philosophers --- I had started reading philosophy and discussing
philosophy before then, and then I found that there were philosophers
who said things about mathematics that I thought were wrong, so I
gradually got more and more involved in [philosophy] discussions and
switched to doing a philosophy DPhil. Then I became a philosophy
lecturer and about six years later, I was introduced to artificial
intelligence when I was a lecturer at Sussex University in philosophy
and I very soon became convinced that the best way to make progress in
both areas of philosophy (including philosophy of mathematics which I
felt I hadn't dealt with adequately in my DPhil) --- the philosophy
of mathematics, philosophy of mind, philosophy of language and all
those things --- the best way was to try to design and test working
fragments of mind and maybe eventually put them all together but
initially just working fragments that would do various things.

[1:12] And I learned to program and, with various other people
including Margaret Boden (whom you've interviewed), developed---helped
develop an undergraduate degree in AI and other things, and also began
to do research in AI and so on, which I thought of as doing philosophy,
primarily.

[1:29] And then I later moved to the University of Birmingham and I
was there --- I came in 1991 --- and I've been retired for a while but
I'm not interested in golf or gardening so I just go on doing full
time research, and my department is happy to keep me on without paying
me and provide space and resources, and I keep meeting bright people
at conferences and try to learn and make progress if I can.

** AI is hard, in part because there are tempting non-problems.

One of the things I learnt and understood more and more over the many
years --- forty years or so since I first encountered AI --- is how
hard the problems are, and in part that's because it's very often
tempting to /think/ the problem is something different from what it
actually is, and then people design solutions to the non-problems, and
I think of most of my work now as just helping to clarify what the
problems are: what is it that we're trying to explain --- and maybe
this is leading into what you wanted to talk about:

I now think that one of the ways of getting a deep understanding of
that is to find out what were the problems that biological evolution
solved, because we are a product of /many/ solutions to /many/
problems, and if we just try to go in and work out what the whole
system is doing, we may get it all wrong, or badly wrong.

* What problems of intelligence did evolution solve?

** Intelligence consists of solutions to many evolutionary problems; no single development (e.g. communication) was key to human-level intelligence.

[2:57] Well, first I would challenge that we are the dominant
species. I know it looks like that but actually if you count biomass,
if you count number of species, if you count number of individuals,
the dominant species are microbes --- maybe not one of them but anyway
they're the ones who dominate in that sense, and furthermore we are
mostly --- we are largely composed of microbes, without which we
wouldn't survive.

# ** Many nonlinguistic competences require sophisticated internal representations
[3:27] But there are things that make humans (you could say) best at
those things, or worst at those things, but it's a combination. And I
think it was a collection of developments of which there isn't any
single one. [] There might be, some people say, human language which
changed everything. By human language, they mean human
communication in words, but I think that was a later development from
what must have started as the use of /internal/ forms of
representation --- which are there in nest-building birds, in
pre-verbal children, in hunting mammals --- because you can't take in
information about a complex structured environment in which things can
change and you may have to be able to work out what's possible and
what isn't possible, without having some way of representing the
components of the environment, their relationships, the kinds of
things they can and can't do, the kinds of things you might or might
not be able to do --- and /that/ kind of capability needs internal
languages, and I and colleagues [at Birmingham] have been referring to
them as generalized languages because some people object to
referring...to using language to refer to something that isn't used
for communication. But from that viewpoint, not only humans but many
other animals developed abilities to do things to their environment to
make them more friendly to themselves, which depended on being able to
represent possible futures, possible actions, and work out what's the
best thing to do.

[5:13] And nest-building in corvids, for instance --- crows, magpies,
[hawks], and so on --- is way beyond what current robots can do, and
in fact I think most humans would be challenged if they had to go and
find a collection of twigs, one at a time, maybe bring them with just
one hand --- or with your mouth --- and assemble them into a
structure that, you know, is shaped like a nest, and is fairly rigid,
and you could trust your eggs in it when the wind blows. But they're
doing it, and so ... they're not our evolutionary ancestors, but
they're an indication --- and that example is an indication --- of
what must have evolved in order to provide control over the
environment in /that/ species.

** Speculation about how communication might have evolved from internal languages.
[5:56] And I think hunting mammals, fruit-picking mammals, mammals
that can rearrange parts of the environment, provide shelters, needed
to have .... also needed to have ways of representing possible
futures, not just what's there in the environment. I think at a later
stage, that developed into a form of communication, or rather the
/internal/ forms of representation became usable as a basis for
providing [context] to be communicated. And that happened, I think,
initially through performing actions that expressed intentions, and
probably led to situations where an action (for instance, moving some
large object) was performed more easily, or more successfully, or more
accurately if it was done collaboratively. So someone who had worked
out what to do might start doing it, and then a conspecific might be
able to work out what the intention is, because that person has the
/same/ forms of representation and can build theories about what's
going on, and might then be able to help.

[7:11] You can imagine that if that started happening more (a lot of
collaboration based on inferred intentions and plans) then sometimes
the inferences might be obscure and difficult, so the /actions/ might
be enhanced to provide signals as to what the intention is, and what
the best way is to help, and so on.

[7:35] So, this is all handwaving and wild speculation, but I think
it's consistent with a large collection of facts which one can look at
--- and find if one looks for them, but one won't know if [some]one
doesn't look for them --- about the way children, for instance, who
can't yet talk, communicate, and the things they'll do, like going to
the mother and turning her face to point in the direction where the
child wants her to look and so on; that's an extreme version of action
indicating intention.

[8:03] Anyway. That's a very long roundabout answer to one conjecture:
that the use of communicative language is what gave humans their
unique power to create and destroy and whatever. And I'm saying that
if by that you mean /communicative/ language, then I'm saying there
was something before that which was /non/-communicative language, and I
suspect that noncommunicative language continues to play a deep role
in /all/ human perception --- in mathematical and scientific reasoning, in
problem solving --- and we don't understand very much about it.

[8:48]
I'm sure there's a lot more to be said about the development of
different kinds of senses, the development of brain structures and
mechanisms [underlying] all that, but perhaps I've droned on long enough
on that question.

* How do language and internal states relate to AI?

[9:09] Well, I think most of the human and animal capabilities that
I've been referring to are not yet to be found in current robots or
[computing] systems, and I think there are two reasons for that: one
is that it's intrinsically very difficult; I think that in particular
it may turn out that the forms of information processing that one can
implement on digital computers as we currently know them may not be as
well suited to performing some of these tasks as other kinds of
computing about which we don't know so much --- for example, I think
there may be important special features about /chemical/ computers
which we might [talk about in a little bit? find out about].

** In AI, false assumptions can lead investigators astray.
[9:57] So, one of the problems then is that the tasks are hard ... but
there's a deeper problem as to why AI hasn't made a great deal of
progress on these problems that I'm talking about, and that is that
most AI researchers assume things---and this is not just AI
researchers, but [also] philosophers, and psychologists, and people
studying animal behavior---make assumptions about what it is that
animals or humans do, for instance make assumptions about what vision
is for, or assumptions about what motivation is and how motivation
works, or assumptions about how learning works, and then they try ---
the AI people try --- to model [or] build systems that perform those
assumed functions. So if you get the /functions/ wrong, then even if
you implement some of the functions that you're trying to implement,
they won't necessarily perform the tasks that the initial objective
was to imitate, for instance the tasks that humans, and nest-building
birds, and monkeys and so on can perform.

** Example: Vision is not just about finding surfaces, but about finding affordances.
[11:09] I'll give you a simple example --- well, maybe not so simple,
but --- it's often assumed that the function of vision in humans (and
in other animals with good eyesight and so on) is to take in optical
information that hits the retina, and forms into the (maybe changing
--- or, really, in our case definitely changing) patterns of
illumination where there are sensory receptors that detect those
patterns, and then somehow from that information (plus maybe other
information gained from head movement or from comparisons between two
eyes) to work out what there was in the environment that produced
those patterns, and that is often taken to mean \ldquo{}where were the
surfaces off which the light bounced before it came to me\rdquo{}. So
you essentially think of the task of the visual system as being to
reverse the image formation process: so the 3D structure's there, the
lens causes the image to form in the retina, and then the brain goes
back to a model of that 3D structure there. That's a very plausible
theory about vision, and it may be that that's a /subset/ of what
human vision does, but I think James Gibson pointed out that that kind
of thing is not necessarily going to be very useful for an organism,
and it's very unlikely that that's the main function of perception in
general, namely to produce some physical description of what's out
there.

[12:37] What does an animal /need/? It needs to know what it can do,
what it can't do, what the consequences of its actions will be
.... so, he introduced the word /affordance/. So from his point of
view, the function of vision, of perception, is to inform the organism
of what the /affordances/ are for action, where that would mean what
the animal --- /given/ its morphology (what it can do with its mouth, its
limbs, and so on, and the ways it can move) --- can do, what its
needs are, what the obstacles are, and how the environment supports or
obstructs those possible actions.

[13:15] And that's a very different collection of information
structures that you need from, say, \ldquo{}where are all the
surfaces?\rdquo{}: if you've got all the surfaces, /deriving/ the
affordances would still be a major task. So, if you think of the
perceptual systems as primarily (for biological organisms) being
devices that provide information about affordances and so on, then the
tasks look very different. And most of the people working, doing
research on computer vision in robots, I think haven't taken all that
on board, so they're trying to get machines to do things which, even
if they were successful, would not make the robots very intelligent
(and in fact, even the ones they're trying to do are not really easy
to do, and they don't succeed very well --- although there's progress;
I shouldn't disparage it too much.)

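*Transcriber's note:* to make the contrast concrete, here is a minimal
sketch (mine, not Sloman's; all structures, field names, and numbers
are hypothetical) of the same scene described two ways: as
agent-independent surface geometry, and as affordances relative to one
particular agent's morphology.

#+BEGIN_SRC python
# Transcriber's hypothetical illustration: one scene, two representations.

# "Where are all the surfaces?" -- agent-independent geometry.
surface_model = [
    {"kind": "plane", "normal": (0.0, 0.0, 1.0), "height_m": 0.75},            # tabletop
    {"kind": "slab", "center": (0.2, 0.0, 0.76), "size_m": (0.04, 0.08, 0.001)},  # leaf
]

# "What can I do here?" -- affordances, relative to this agent.
agent = {"grip_width_m": 0.05, "reach_m": 0.8}

affordance_model = [
    {"action": "grasp", "target": "leaf",
     "possible": 0.04 < agent["grip_width_m"]},   # leaf fits between two fingers
    {"action": "place_on", "target": "tabletop",
     "possible": 0.75 < agent["reach_m"]},        # surface is within reach
]

# Note that deriving the second structure from the first would itself
# be a major task, which is Sloman's point.
print([a["action"] for a in affordance_model if a["possible"]])
#+END_SRC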
** Online and offline intelligence

[14:10] It gets more complex as animals get more sophisticated. So, I
like to make a distinction between online intelligence and offline
intelligence. So, for example, if I want to pick something up --- like
this leaf <he plucks a leaf from the table> --- I was able to select
it from all the others in there, and while moving my hand towards it,
I was able to guide its trajectory, making sure it was going roughly
in the right direction --- as opposed to going out there, which
wouldn't have enabled me to pick it up --- and these two fingers ended
up with a portion of the leaf between them, so that I was able to tell
when I was ready to do that <he clamps the leaf between two fingers>
and at that point, I clamped my fingers and then I could pick up the
leaf.

[14:54] Whereas --- and that's an example of online intelligence:
during the performance of an action (both from the stage where it's
initiated, and during the intermediate stages, and where it's
completed) I'm taking in information relevant to controlling all those
stages, and that relevant information keeps changing. That means I
need stores of transient information which gets discarded almost
immediately and replaced or something. That's online intelligence. And
there are many forms; that's just one example, and Gibson discussed
quite a lot of examples which I won't try to replicate now.

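*Transcriber's note:* a minimal sketch (my illustration, not anything
from the interview) of what "online intelligence" might look like as
code: a closed loop in which a transient error estimate is recomputed
from fresh data at every step, used once to correct the movement, and
then discarded.

#+BEGIN_SRC python
def reach(hand, target, gain=0.3, tol=0.01, max_steps=200):
    """Guide `hand` toward `target` by repeated online correction.

    Each pass computes a transient estimate (the error), uses it to
    adjust the trajectory, and throws it away: the "store of transient
    information" Sloman describes.
    """
    for _ in range(max_steps):
        error = [t - h for t, h in zip(target, hand)]  # fresh, transient
        if max(abs(e) for e in error) < tol:
            break                                      # close enough: clamp
        hand = [h + gain * e for h, e in zip(hand, error)]
    return hand

print(reach([0.0, 0.0], [0.3, 0.2]))   # converges near [0.3, 0.2]
#+END_SRC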
[15:30] But in offline intelligence, you're not necessarily actually
/performing/ the actions when you're using your intelligence; you're
thinking about /possible/ actions. So, for instance, I could think
about how fast or by what route I would get back to the lecture room
if I wanted to [get to the next talk] or something. And I know where
the door is, roughly speaking, and I know roughly which route I would
take --- when I go out, [whether] I should go to the left or to the
right --- because I've stored information about where the spaces are,
where the buildings are, where the door was that we came out [of] ---
but in using that information to think about that route, I'm not
actually performing the action. I'm not even /simulating/ it in
detail: the precise details of direction and speed and when to clamp
my fingers, or when to contract my leg muscles when walking, are all
irrelevant to thinking about a good route, or thinking about the
potential things that might happen on the way. Or what would be a good
place to meet someone who I think [for an acquaintance in particular]
--- [barber] or something --- I don't necessarily have to work out
exactly /where/ the person's going to stand, or from what angle I
would recognize them, and so on.

[16:46] So, offline intelligence --- which I think became not just a
human competence; I think there are other animals that have aspects of
it: squirrels are very impressive as you watch them. Gray squirrels, at
any rate, as you watch them defeating squirrel-proof birdfeeders, seem
to have a lot of that [offline intelligence], as well as the online
intelligence when they eventually perform the action they've worked
out [] that will get them to the nuts.

[17:16] And I think that what happened during our evolution is that
mechanisms developed for acquiring and processing and storing and
manipulating information that is more and more remote from the
performance of actions. An example is taking in information about
where locations are that you might need to go to infrequently: there's
a store of a particular type of material that's good for building on
roofs of houses or something out around there in some
direction. There's a good place to get water somewhere in another
direction. There are people that you'd like to go and visit in
another place, and so on.

[17:59] So taking in information about an extended environment and
building it into a structure that you can make use of for different
purposes is another example of offline intelligence. And when we do
that, we sometimes use only our brains, but in modern times, we also
learned how to make maps on paper and walls and so on. And it's not
clear whether the stuff inside our heads has the same structures as
the maps we make on paper: the maps on paper have a different
function; they may be used to communicate with others, or meant for
/looking/ at, whereas the stuff in your head you don't /look/ at; you
use it in some other way.

[18:46] So, what I'm getting at is that there's a great deal of human
intelligence (and animal intelligence) which is involved in what's
possible in the future, what exists in distant places, what might have
happened in the past (sometimes you need to know why something is as
it is, because that might be relevant to what you should or shouldn't
do in the future, and so on), and I think there was something about
human evolution that extended that offline intelligence way beyond
that of animals. And I don't think it was /just/ human language (but
human language had something to do with it); I think there was
something else that came earlier than language which involves the
ability to use your offline intelligence to discover something that
has a rich mathematical structure.

** Example: Even toddlers use sophisticated geometric knowledge
<<example-gap>>
[19:44] I'll give you a simple example: if you look through a gap, you
can see something that's on the other side of the gap. Now, you
/might/ see what you want to see, or you might see only part of it. If
you want to see more of it, which way would you move? Well, you could
either move /sideways/, and see through the gap --- and see it roughly
the same amount but a different part of it [if it's a ????] --- or you
could move /towards/ the gap and then your view will widen as you
approach the gap. Now, there's a bit of mathematics in there, insofar
as you are implicitly assuming that information travels in straight
lines, and as you go closer to a gap, the straight lines that you can
draw from where you are through the gap widen as you approach that
gap. Now, there's a kind of theorem of Euclidean geometry in there
which I'm not going to try to state very precisely (and as far as I
know, wasn't stated explicitly in Euclidean geometry) but it's
something every toddler --- human toddler --- learns. (Maybe other
animals also know it, I don't know.) But there are many more things:
actions to perform to get you more information about things, actions
to perform to conceal information from other people, actions that will
enable you to operate, to act on a rigid object in one place in order
to produce an effect on another place. So, there's a lot of stuff that
involves lines and rotations and angles and speeds and so on that I
think humans (maybe, to a lesser extent, other animals) develop the
ability to think about in a generic way. That means that you could
take out the generalizations from the particular contexts and then
re-use them in new contexts in ways that I think are not yet
represented at all in AI and in theories of human learning in any []
way --- although some people are trying to study learning of mathematics.
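
*Transcriber's note:* one way to state the toddler's "gap theorem"
precisely (my formulation and my symbols, not Sloman's): for a viewer
centered on a gap of width $w$ in an opaque wall, at perpendicular
distance $d$ from it, assuming sight lines are straight:

#+BEGIN_SRC latex
% Angular width of the cone of sight lines through the gap:
\theta(d) = 2\arctan\!\left(\frac{w}{2d}\right),
\qquad
\frac{d\theta}{dd} = -\frac{w}{d^{2} + w^{2}/4} < 0 .
% The derivative is negative, so the view widens monotonically as you
% approach the gap (d decreases); moving sideways instead changes
% WHICH region is visible rather than how much of it.
#+END_SRC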

* Animal intelligence

** The priority is /cataloguing/ what competences have evolved, not ranking them.
[22:03] I wasn't going to challenge the claim that humans can do more
sophisticated forms of [tracking], just to mention that there are some
things that other animals can do which are in some ways comparable to,
and in some ways superior to, [things] that humans can do. In particular,
there are species of birds and also, I think, some rodents ---
squirrels, or something --- I don't know enough about the variety ---
that can hide nuts and remember where they've hidden them, and go back
to them. And there have been tests which show that some birds are able
to hide tens --- you know, [eighteen] or something nuts --- and to
remember which ones have been taken, which ones haven't, and so
on. And I suspect most humans can't do that. I wouldn't want to say
categorically that maybe we couldn't, because humans are very
[varied], and also [a few] people can develop particular competences
through training. But it's certainly not something I can do.

** AI can be used to test philosophical theories
[23:01] But I also would like to say that I am not myself particularly
interested in trying to align animal intelligences according to any
kind of scale of superiority; I'm just trying to understand what it
was that biological evolution produced, and how it works, and I'm
interested in AI /mainly/ because I think that when one comes up with
theories about how these things work, one needs to have some way of
testing the theory. And AI provides ways of implementing and testing
theories that were not previously available: Immanuel Kant was trying
to come up with theories about how minds work, but he didn't have any
kind of a mechanism that he could build to test his theory about the
nature of mathematical knowledge, for instance, or how concepts were
developed from babyhood onward. Whereas now, if we do develop a
theory, we have a criterion of adequacy, namely it should be precise
enough and rich enough and detailed enough to enable a model to be
built. And then we can see if it works.

[24:07] If it works, it doesn't mean we've proved that the theory is
correct; it just shows it's a candidate. And if it doesn't work, then
it's not a candidate as it stands; it would need to be modified in
some way.

* Is abstract general intelligence feasible?

** It's misleading to compare the brain and its neurons to a computer made of transistors
[24:27] I think there's a lot of optimism based on false clues: for
example, one of the false clues is to count the number of neurons in
the brain, and then talk about the number of transistors you can fit
into a computer or something, and then compare them. It might turn out
[otherwise]: the study of the way synapses work leads some people to
say that a typical synapse [] in the human brain has computational
power comparable to the Internet a few years ago, because of the
number of different molecules that are doing things, the variety of
types of things that are being done in those molecular interactions,
and the speed at which they happen --- if you somehow count up the
number of operations per second or something, then you get these
comparable figures.

** For example, brains may rely heavily on chemical information processing
Now even if the details aren't right, there may just be a lot of
information processing going on in brains at the /molecular/
level, not the neural level. Then, if that's the case, the processing
units will be orders of magnitude larger in number than the number of
neurons. And it's certainly the case that all the original biological
forms of information processing were chemical; there weren't brains
around, and still aren't in most microbes. And even when humans grow
their brains, the process of starting from a fertilized egg and
producing this rich and complex structure is, for much of the time,
under the control of chemical computations, chemical information
processing --- of course combined with physical sorts of materials and
energy and so on as well.

[26:25] So it would seem very strange if all that capability was
something thrown away when you've got a brain, and all the information
processing, the [challenges that were handled in making a brain]
... This is handwaving on my part; I'm just saying that we /might/
learn that what brains do is not what we think they do, and that the
problems of replicating them are not what we think they are, solely in
terms of numerical estimates of time scales, the number of components,
and so on.

** Brain algorithms may simply be optimized for certain kinds of information processing other than bit manipulations
[26:56] But apart from that, the other basis of skepticism concerns
how well we understand what the problems are. I think there are many
people who try to formalize the problems of designing an intelligent
system in terms of streams of information thought of as bit streams or
collections of bit streams, and they think of the problems of
intelligence as being the construction or detection of patterns in
those, and perhaps not just detection of patterns, but detection of
patterns that are usable for sending /out/ streams to control motors
and so on in order to []. And that way of conceptualizing the problem
may lead on the one hand to oversimplification, so that the things
that /would/ be achieved, if those goals were achieved, may be much
simpler, in some ways inadequate [for] the replication of human
intelligence, or the matching of human intelligence --- or, for that
matter, squirrel intelligence. But in another way, it may also make
the problem harder: it may be that some of the kinds of things that
biological evolution has achieved can't be done that way. And one of
the ways that might turn out to be the case is not because it's
impossible in principle to do some of the information processing on
artificial computers-based-on-transistors and other bit-manipulating
[] --- but it may just be that the computational complexity of solving
problems, processes, or finding solutions to complex problems, is
much greater, and therefore you might need a much larger universe than
we have available in order to do things.

** Example: find the shortest path by dangling strings
[28:55] Then if the underlying mechanisms were different, the
information processing mechanisms, they might be better tailored to
particular sorts of computation. There's a [] example, which is
finding the shortest route if you've got a collection of roads, and
they may be curved roads, and lots of tangled routes from A to B to C,
and so on. And if you start at A and you want to get to Z --- a place
somewhere on that map --- the process of finding the shortest route
will involve searching through all these different possibilities and
rejecting some that are longer than others and so on. But suppose you
make a model of that map out of string, where the strings are all laid
out on the map and so have the lengths of the routes. Then if you
hold the two knots in the string --- it's a network of string --- which
correspond to the start point and end point, and then /pull/, then the
bits of string that you're left with in a straight line will give you
the shortest route, and that process of pulling just gets you the
solution very rapidly in a parallel computation, where all the others
just hang by the wayside, so to speak.

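*Transcriber's note:* for contrast with the string computer, here is a
minimal sketch of the serial search a digital machine would typically
perform instead (Dijkstra's algorithm; the toy map and all names are
mine, not from the interview). The pulled strings deliver the answer
in one parallel step; the program must consider alternatives one at a
time.

#+BEGIN_SRC python
import heapq

def dijkstra(graph, start, goal):
    """Serial shortest-path search over a weighted graph.

    `graph` maps each node to a list of (neighbor, distance) pairs.
    This explores alternatives one at a time: the digital analogue of
    what the pulled string network computes all at once.
    """
    frontier = [(0.0, start, [start])]   # (distance so far, node, path)
    visited = set()
    while frontier:
        dist, node, path = heapq.heappop(frontier)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_len in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier,
                               (dist + edge_len, neighbor, path + [neighbor]))
    return float("inf"), []

# A tiny tangled "road map": string lengths = road lengths.
roads = {
    "A": [("B", 4.0), ("C", 2.0)],
    "B": [("Z", 5.0)],
    "C": [("B", 1.0), ("Z", 8.0)],
}
print(dijkstra(roads, "A", "Z"))   # -> (8.0, ['A', 'C', 'B', 'Z'])
#+END_SRC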
** In sum, we know surprisingly little about the kinds of problems that evolution solved, and the manner in which they were solved.
[30:15] Now, I'm not saying brains can build networks of string and
pull them or anything like that; that's just an illustration of how if
you have the right representation, correctly implemented --- or suitably
implemented --- for a problem, then you can avoid very combinatorially
complex searches, which will maybe grow exponentially with the number
of components in your map, whereas with this thing, the time it takes
won't depend on how many strings you've [got on the map]; you just
pull, and it will depend only on the shortest route that exists in
there, even if that shortest route wasn't obvious on the original map.

[30:59] So that's a rather long-winded way of formulating --- a
roundabout way of supporting --- the conjecture that there may be
something about the way molecules perform computations where they have
the combination of continuous change (as things move through space and
come together and move apart, and whatever) and also snapping into
states that then persist: so, [as you learn from] quantum mechanics,
you can have stable molecular structures which are quite hard to
separate --- and then in catalytic processes you can separate them, or
[at] extreme temperatures, or [with] strong forces --- but they may
nevertheless be able to move very rapidly in some conditions in order
to perform computations.

[31:49] Now there may be things about that kind of structure that
enable searching for solutions to /certain/ classes of problems to be
done much more efficiently (by brains) than anything we could do with
computers. It's just an open question.

[32:04] So it /might/ turn out that we need new kinds of technology
that aren't on the horizon in order to replicate the functions that
animal brains perform --- or, it might not. I just don't know. I'm not
claiming that there's strong evidence for that; I'm just saying that
it might turn out that way, partly because I think we know less than
many people think we know about what biological evolution achieved.

[32:28] There are some other possibilities: we may just find out that
there are shortcuts no one ever thought of, and it will all happen
much more quickly --- I have an open mind; I'd be surprised, but it
could turn up. There /is/ something that worries me much more than the
singularity that most people talk about, which is machines achieving
human-level intelligence and perhaps taking over [the] planet or
something. There's what I call the /singularity of cognitive
catch-up/ ...

* A singularity of cognitive catch-up

** What if it will take a lifetime to learn enough to make something new?
... SCC, the singularity of cognitive catch-up, which I think we're close
to, or maybe have already reached --- I'll explain what I mean by
that. One of the products of biological evolution --- and this is one of
the answers to your earlier questions which I didn't get on to --- is
that humans have not only the ability to make discoveries that none of
their ancestors have ever made, but to shorten the time required for
similar achievements to be reached by their offspring and their
descendants. So once we, for instance, worked out ways of [doing] complex
computations, or ways of building houses, or ways of finding our way
around, we don't need...our children don't need to work it out for
themselves by the same lengthy trial and error procedure; we can help
them get there much faster.

Okay, well, what I've been referring to as the singularity of
cognitive catch-up depends on the fact --- fairly obvious, and it's
often been commented on --- that in the case of humans, it's not necessary
for each generation to learn what previous generations learned /in the
same way/. And we can speed up learning: once something has been
learned, [it is able to] be learned by new people. And that has meant
that the social processes that support that kind of education of the
young can enormously accelerate what would have taken perhaps
thousands [or] millions of years for evolution to produce; it can happen in
a much shorter time.

[34:54] But here's the catch: in order for a new advance to happen ---
so for something new to be discovered that wasn't there before, like
Newtonian mechanics, or the theory of relativity, or Beethoven's music
or [style] or whatever --- the individuals have to have traversed a
significant amount of what their ancestors have learned, even if they
do it much faster than their ancestors, to get to the point where they
can see the gaps, the possibilities for going further than their
ancestors, or their parents or whatever, have done.

[35:27] Now in the case of knowledge of science, mathematics,
philosophy, engineering and so on, there's been a lot of accumulated
knowledge. And humans are living a /bit/ longer than they used to, but
they're still living for [whatever it is], a hundred years, or for
most people, less than that. So you can imagine that there might come
a time when in a normal human lifespan, it's not possible for anyone
to learn enough to understand the scope and limits of what's already
been achieved in order to see the potential for going beyond it and to
build on what's already been done to make that...those future steps.

[36:10] So if we reach that stage, we will have reached the
singularity of cognitive catch-up, because the process of education
that enables individuals to learn faster than their ancestors did is
the catching-up process. And it may just be that we at some point
reach a point where catching up can only happen within a lifetime of
an individual, and after that they're dead and they can't go
beyond. And I have some evidence that there's a lot of that around,
because I see a lot of people coming up with what /they/ think of as
new ideas which they've struggled to come up with, but actually they
just haven't taken in some of what was...some of what was done [] by
other people, in other places before them. And I think that despite
the availability of search engines which make it /easier/ for people
to get the information --- for instance, when I was a student, if I
wanted to find out what other people had done in the field, it was a
laborious process --- going to the library, getting books --- whereas
now, I can often do things in seconds that would have taken
hours. So that means that if seconds [are needed] for that kind of
work, my lifespan has been extended by a factor of ten or
something. So maybe that /delays/ the singularity, but it may not
delay it enough. But that's an open question; I don't know. And it may
just be that in some areas, this is more of a problem than in others. For
instance, it may be that in some kinds of engineering, we're handing
over more and more of the work to machines anyway and they can go on
doing it. So for instance, most of the production of computers now is
done by computer-controlled machines --- although some of the design
work is done by humans, a lot of the /detail/ of the design is done by
computers, and they produce the next generation, which then produces
the next generation, and so on.

[37:57] I don't know if humans can go on having major advances; it'll
be kind of sad if we can't.

* Spatial reasoning: a difficult problem

[38:15] Okay, well, there are different problems [ ] mathematics, and
they have to do with properties. So for instance a lot of mathematics
can be expressed in terms of logical structures or algebraic
structures, and those are pretty well suited for manipulation on
computers, and if a problem can be specified using the
logical/algebraic notation, and the solution method requires creating
something in that sort of notation, then computers are pretty good,
and there are lots of mathematical tools around --- there are theorem
provers and theorem checkers, and all kinds of things, which couldn't
have existed fifty, sixty years ago, and they will continue getting
better.

But there was something that I was [[example-gap][alluding to earlier]] when I gave the
example of how you can reason about what you will see by changing your
position in relation to a door, where what you are doing is using your
grasp of spatial structures and how, as one spatial relationship
changes --- namely, you come closer to the door or move sideways and
parallel to the wall or whatever --- other spatial relationships change
in parallel: so the lines from your eyes through to other parts of
the...parts of the room on the other side of the doorway change,
spread out more as you go towards the doorway, and as you move
sideways, they don't spread out differently, but focus on different
parts of the internal ... they access different parts of the
... of the room.

Now, those are examples of ways of thinking about relationships and
changing relationships which are not the same as thinking about what
happens if I replace this symbol with that symbol, or if I substitute
this expression in that expression in a logical formula. And at the
moment, I do not believe that there is anything in AI, amongst the
mathematical reasoning community, the theorem-proving community, that
can model the processes that go on when a young child starts learning
to do Euclidean geometry and is taught things about --- for instance, I
can give you a proof that the angles of any triangle add up to a
straight line, 180 degrees.

** Example: Spatial proof that the angles of any triangle add up to a half-circle
There are standard proofs which involve starting with one triangle,
then adding a line parallel to the base; but one of my former students,
Mary Pardoe, came up with one which I will demonstrate with this <he holds
up a pen> --- can you see it? If I have a triangle here that's got
three sides, if I put this thing on it, on one side --- let's say the
bottom --- I can rotate it until it lies along the second...another
side, and then maybe move it up to the other end. Then I can rotate
it again, until it lies on the third side, and move it back to the
other end. And then I'll rotate it again and it'll eventually end up
on the original side, but it will have changed the direction it's
pointing in --- and it won't have crossed over itself, so it will have
gone through a half-circle, and that says that the three angles of a
triangle add up to the rotations of half a circle, which is a
beautiful kind of proof, and almost anyone can understand it. Some
mathematicians don't like it, because they say it hides some of the
assumptions, but nevertheless, as far as I'm concerned, it's an
example of a human ability to do reasoning which, once you've
understood it, you can see will apply to any triangle --- it's got to
be a planar triangle --- not a triangle on a globe, because then the
angles can add up to more than ... you can have three /right/ angles
if you have an equator...a line on the equator, and a line going up
to the north pole of the earth, and then you have a right angle and
then another line going down to the equator, and you have a right
angle, right angle, right angle, and they add up to more than a
straight line. But that's because the triangle isn't in the plane,
it's on a curved surface. In fact, that's one of the
differences...definitional differences you can take between planar and
curved surfaces: how much the angles of a triangle add up to. But our
ability to /visualize/ and notice the generality in that process, and
see that you're going to be able to do the same thing using triangles
that stretch in all sorts of ways, or if it's a million times as
large, or if it's made...you know, written on, on...if it's drawn in
different colors or whatever --- none of that's going to make any
difference to the essence of that process. And that ability to see
the commonality in a spatial structure which enables you to draw some
conclusions with complete certainty --- subject to the possibility that
sometimes you make mistakes, but when you make mistakes, you can
discover them, as has happened in the history of geometrical theorem
proving. Imre Lakatos had a wonderful book called [[http://en.wikipedia.org/wiki/Proofs_and_Refutations][/Proofs and
Refutations/]] --- which I won't try to summarize --- but he has
examples: mistakes were made; that was because people didn't always
realize there were subtle subcases which had slightly different
properties, and they didn't take account of that. But once they're
noticed, you rectify that.
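
*Transcriber's note:* the bookkeeping behind the rotating-pen
demonstration, in my own notation: at each vertex the pen turns, always
in the same sense, through that vertex's interior angle, and it ends up
on the starting side pointing the opposite way without ever flipping
over.

#+BEGIN_SRC latex
% Total rotation of the pen = sum of the interior angles it sweeps.
% Ending reversed on the same line means a net half-turn, and since
% the pen never crosses over itself, the total is exactly one
% half-turn, not a half-turn plus full turns:
\alpha + \beta + \gamma = \pi = 180^{\circ}
% (valid for planar triangles only, as Sloman notes for the sphere).
#+END_SRC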

** Geometric results are fundamentally different from experimental results in chemistry or physics.
[43:28] But it's not the same as doing experiments in chemistry and
physics, where you can't be sure it'll be the same on [] or at a high
temperature, or in a very strong magnetic field --- with geometric
reasoning, in some sense you've got the full information in front of
you, even if you don't always notice an important part of it. So, that
kind of reasoning (as far as I know) is not implemented anywhere in a
computer. And most people who do research on trying to model
mathematical reasoning don't pay any attention to that, because of
... they just don't think about it. They start from somewhere else,
maybe because of how they were educated. I was taught Euclidean
geometry at school. Were you?

(Adam Ford: Yeah)

Many people are not now. Instead they're taught set theory, and
logic, and arithmetic, and [algebra], and so on. And so they don't use
that bit of their brains, without which we wouldn't have built any of
the cathedrals, and all sorts of things we now depend on.

* Is near-term artificial general intelligence likely?

** Two interpretations: a single mechanism for all problems, or many mechanisms unified in one program.

[44:35] Well, this relates to what's meant by general. And when I
first encountered the AGI community, I thought that what they all
meant by general intelligence was /uniform/ intelligence ---
intelligence based on some common simple (maybe not so simple, but)
single powerful mechanism or principle of inference. And there are
some people in the community who are trying to produce things like
that, often in connection with algorithmic information theory and
computability of information, and so on. But there's another sense of
general which means that the system of general intelligence can do
lots of different things, like perceive things, understand language,
move around, make things, and so on --- perhaps even enjoy a joke;
that's something that's not nearly on the horizon, as far as I
know. Enjoying a joke isn't the same as being able to make laughing
noises.

Given, then, that there are these two notions of general
intelligence --- there's one that looks for one uniform, possibly
simple, mechanism or collection of ideas and notations and algorithms
that will deal with any problem that's solvable, and the other
that's general in the sense that it can do lots of different things
that are combined into an integrated architecture (which raises lots
of questions about how you combine these things and make them work
together) --- we humans, certainly, are of the second kind: we do all
sorts of different things, and other animals also seem to be of the
second kind, perhaps not as general as humans. Now, it may turn out
that in some near future time --- who knows, decades, a few
decades --- you'll be able to get machines that are capable of solving,
in a time that will depend on the nature of the problem, any
problem that is solvable, and they will be able to do it in some sort
of tractable time --- of course, there are some problems that are
solvable that would require a larger universe and a longer history
than the history of the universe, but apart from that constraint,
these machines will be able to do anything []. But to be able to do
some of the kinds of things that humans can do, like the kinds of
geometrical reasoning where you look at the shape and you abstract
away from the precise angles and sizes and shapes and so on, and
realize there's something general here, as must have happened when our
ancestors first made the discoveries that were eventually put together in
Euclidean geometry.

It may be that that requires mechanisms of a kind that we don't know
anything about at the moment. Maybe brains are using molecules and
rearranging molecules in some way that supports that kind of
reasoning. I'm not saying they are --- I don't know, I just don't see
any simple...any obvious way to map that kind of reasoning capability
onto what we currently do on computers. There is --- and I just
mentioned this briefly beforehand --- there is a kind of thing that's
sometimes thought of as a major step in that direction, namely you can
build a machine (or a software system) that can represent some
geometrical structure, and then be told about some change that's going
to happen to it, and it can predict in great detail what'll
happen. And this happens for instance in game engines, where you say
we have all these blocks on the table and I'll drop one other block,
and then [the thing] uses Newton's laws and properties of rigidity of
the parts and the elasticity, and also stuff about geometries and space
and so on, to give you a very accurate representation of what'll
happen when this brick lands on this pile of things [it'll bounce and
go off, and so on]. And just with more memory and more CPU power,
you can increase the accuracy --- but that's totally different from
looking at /one/ example and working out what will happen in a whole
/range/ of cases at a higher level of abstraction, whereas the game
engine does it in great detail for /just/ this case, with /just/ those
precise things, and it won't even know what the generalizations are
that it's using that would apply to others []. So, in that sense, [we]
may get AGI --- artificial general intelligence --- pretty soon, but
it'll be limited in what it can do. And the other kind of general
intelligence, which combines all sorts of different things, including
human spatial geometrical reasoning, and maybe other things, like the
ability to find things funny, and to appreciate artistic features and
other things, may need forms of pattern-mechanism, and I have an open
mind about that.

rlm@57
|
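(*Editor's note:* a toy Python sketch of the special-case numerical
prediction described above --- my addition, not from the interview. One
precise initial condition is integrated step by step, game-engine
style; shrinking the time step buys accuracy, but the answer applies
only to exactly these numbers, and nothing in the program represents
the generalizations it is implicitly using. All constants are
illustrative assumptions.)

#+BEGIN_SRC python
# Editorial sketch: predicting in detail what happens in /just/ this
# case --- a block dropped onto a table, with gravity and lossy bounces.

GRAVITY = 9.81       # m/s^2
RESTITUTION = 0.6    # fraction of speed kept at each bounce (assumed)
DT = 1e-4            # integration time step; smaller = more accurate

def simulate_drop(height, seconds):
    """Drop a block from `height` metres and integrate for `seconds`;
    return its final height and how many times it bounced."""
    y, v, bounces = height, 0.0, 0
    for _ in range(int(seconds / DT)):
        v -= GRAVITY * DT          # Newton: constant downward pull
        y += v * DT
        if y <= 0.0:               # block reaches the table
            y = 0.0
            if abs(v) < 1e-3:      # too slow to rebound: at rest
                v = 0.0
            else:                  # bounce, losing kinetic energy
                v = -v * RESTITUTION
                bounces += 1
    return y, bounces

print(simulate_drop(height=1.0, seconds=5.0))   # one precise case only
#+END_SRC
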
* Abstract General Intelligence impacts

[49:53] Well, as far as the first type's concerned, it could be useful
for all kinds of applications --- there are people who worry that
where there's a system that has that type of intelligence, it might in
some sense take over control of the planet. Well, humans often do
stupid things, and they might do something stupid that would lead to
disaster, but I think it's more likely that there would be other
things [] lead to disaster --- population problems, using up all the
resources, destroying ecosystems, and whatever. But certainly it would
go on being useful to have these calculating devices. Now, as for the
second kind of them, I don't know---if we succeeded at putting
together all the parts that we find in humans, we might just make an
artificial human, and then we might have some of them as our friends,
and some of them we might not like, and some of them might become
teachers or whatever, composers --- but that raises a question: could
they, in some sense, be superior to us, in their learning
capabilities, their understanding of human nature, or maybe their
wickedness or whatever --- these are all issues in which I expect the
best science fiction writers would give better answers than anything I
could do, but I did once fantasize, [back] in 1978, that perhaps if we
achieved that kind of thing, they would be wise, and gentle and kind,
and realize that humans are an inferior species that, you know, have
some good features, so they'd keep us in some kind of
secluded...restrictive kind of environment, keep us away from
dangerous weapons, and so on. And find ways of cohabiting with
us. But that's just fantasy.

Adam Ford: Awesome. Yeah, there's an interesting story /With Folded
Hands/ where [the computers] want to take care of us and want to
reduce suffering and end up lobotomizing everybody [but] keeping them
alive so as to reduce the suffering.

Aaron Sloman: Not all that different from /Brave New World/, where it
was done with drugs and so on, but different humans are given
different roles in that system, yeah.

There's also /The Time Machine/, H.G. Wells, where the ... in the
distant future, humans have split in two: the Eloi, I think they were
called, they lived underground, they were the [] ones, and then---no,
the Morlocks lived underground; the Eloi lived on the planet; they
were pleasant and pretty but not very bright, and so on, and they were
fed on by ...

Adam Ford: [] in the future.

Aaron Sloman: As I was saying, if you ask science fiction writers,
you'll probably come up with a wide variety of interesting answers.

Adam Ford: I certainly have; I've spoken to [] of Birmingham, and
Sean Williams, ... who else?

Aaron Sloman: Did you ever read a story by E.M. Forster called /The
Machine Stops/ --- very short story, it's [[http://archive.ncsa.illinois.edu/prajlich/forster.html][on the Internet somewhere]]
--- it's about a time when people sitting ... and this was written in
about [1914] so it's about...over a hundred years ago ... people are
in their rooms, they sit in front of screens, and they type things,
and they communicate with one another that way, and they don't meet;
they have debates, and they give lectures to their audiences that way,
and then there's a woman whose son says \ldquo{}I'd like to see
you\rdquo{} and she says \ldquo{}What's the point? You've got me at
this point\rdquo{} but he wants to come and talk to her --- I won't
tell you how it ends, but.

Adam Ford: Reminds me of the Internet.

Aaron Sloman: Well, yes; he invented ... it was just extraordinary
that he was able to do that, before most of the components that we
need for it existed.

Adam Ford: [Another person who did that] was Vernor Vinge [] /True
Names/.

Aaron Sloman: When was that written?

Adam Ford: The seventies.

Aaron Sloman: Okay, well a lot of the technology was already around
then. The original bits of the Internet were working in about 1973, I
was sitting ... 1974, I was sitting at Sussex University trying to
use...learn LOGO, the programming language, to decide whether it was
going to be useful for teaching AI, and I was sitting [] paper
teletype, there was paper coming out, transmitting ten characters a
second from Sussex to UCL computer lab by telegraph cable, from there
to somewhere in Norway via another cable, from there by satellite to
California to a computer [at the] Xerox [] research center where they
had implemented a computer with a LOGO system on it, with someone I
had met previously in Edinburgh, Danny Bobrow, and he allowed me to
have access to this system. So there I was typing. And furthermore, it
was duplex typing, so every character I typed didn't show up on my
terminal until it had gone all the way there and echoed back, so I
would type, and the characters would come back four seconds later.

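(*Editor's note:* a tiny Python sketch of the remote-echo arrangement
just described --- my addition, not from the interview. With full-duplex
working, a typed character is printed only after travelling to the
remote host and being echoed back, so everything you see lags by one
round trip; the four-second figure comes from the anecdote above.)

#+BEGIN_SRC python
# Editorial sketch: remote echo over a long route.  Each keystroke is
# only seen locally after a full round trip to the remote machine.

ROUND_TRIP_SECONDS = 4.0   # per the anecdote: Sussex to California and back

def echo_times(keystroke_times):
    """Return (typed, seen) pairs: when each character was typed and
    when its remote echo appeared on the local teletype."""
    return [(t, t + ROUND_TRIP_SECONDS) for t in keystroke_times]

# Typing four characters, one per second:
for typed, seen in echo_times([0.0, 1.0, 2.0, 3.0]):
    print(f"typed at {typed:.0f}s, echoed back at {seen:.0f}s")
#+END_SRC
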
[55:26] But that was the Internet, and I think Vernor Vinge was
writing after that kind of thing had already started, but I don't
know. Anyway.

[55:41] Another...I mentioned H.G. Wells, /The Time Machine/. I
recently discovered, because [[http://en.wikipedia.org/wiki/David_Lodge_(author)][David Lodge]] had written a sort of
semi-novel about him, that he had invented Wikipedia, in advance ---
he had this notion of an encyclopedia that was free to everybody, and
everybody could contribute and [collaborate on it]. So, go to the
science fiction writers to find out the future --- well, a range of
possible futures.

Adam Ford: Well the thing is with science fiction writers, they have
to maintain some sort of interest for their readers; after all, the
science fiction which reaches us is the stuff that publishers want to
sell, and so there's a little bit of a ... a bias towards making a
plot device there, and so the dramatic sort of appeals to our
amygdala, our lizard brain; we'll sort of stay there obviously to some
extent. But I think that they do come up with sort of amazing ideas; I
think it's worth trying to make these predictions; I think that we
should spend more time on strategic forecasting, I mean take that
seriously.

Aaron Sloman: Well, I'm happy to leave that to others; I just want to
try to understand these problems that bother me about how things
work. And it may be that some would say that's irresponsible if I
don't think about what the implications will be. Well, understanding
how humans work /might/ enable us to make [] humans --- I suspect it
won't happen in this century; I think it's going to be too difficult.