#+TITLE:Transcript of Aaron Sloman - Artificial Intelligence - Psychology - Oxford Interview
#+AUTHOR:Dylan Holmes
#+EMAIL:
#+STYLE: <link rel="stylesheet" type="text/css" href="../css/sloman.css" />

#+begin_quote
*Update* (13 Oct): Aaron Sloman has produced an improved version of
this transcript, which includes follow-up thoughts and links to
related works. It is available on his website here:
[[http://www.cs.bham.ac.uk/research/projects/cogaff/movies/transcript-interview.html]].

This draft will remain available here for historical purposes.
#+end_quote

#+BEGIN_QUOTE
*Editor's note:* This is a working draft transcript which I made of
[[http://www.youtube.com/watch?feature=player_detailpage&v=iuH8dC7Snno][this nice interview]] of Aaron Sloman. Having just finished one
iteration of transcription, I still need to go in and clean up the
formatting and fix the parts that I misheard, so you can expect the
text to improve significantly in the near future.

To the extent that this is my work, you have my permission to make
copies of this transcript for your own purposes. Also, feel free to
e-mail me with comments or corrections.

(Addendum: This transcription is licensed by Aaron Sloman and Dylan
Holmes, as indicated here:
http://www.cs.bham.ac.uk/research/projects/cogaff/movies/transcript-interview.html#license)

You can send mail to =transcript@aurellem.org=.

Cheers,

---Dylan
#+END_QUOTE

* Introduction

** Aaron Sloman evolves into a philosopher of AI
[0:09] My name is Aaron Sloman. My first degree many years ago at
Cape Town University was in Physics and Mathematics, and I intended to
go and be a mathematician. I came to Oxford and encountered
philosophers --- I had started reading philosophy and discussing
philosophy before then, and then I found that there were philosophers
who said things about mathematics that I thought were wrong, so I
gradually got more and more involved in [philosophy] discussions and
switched to doing a philosophy DPhil. Then I became a philosophy
lecturer and about six years later, I was introduced to artificial
intelligence when I was a lecturer at Sussex University in philosophy,
and I very soon became convinced that the best way to make progress in
both areas of philosophy (including philosophy of mathematics, which I
felt I hadn't dealt with adequately in my DPhil) --- the philosophy
of mathematics, philosophy of mind, philosophy of language and all
those things --- the best way was to try to design and test working
fragments of mind and maybe eventually put them all together, but
initially just working fragments that would do various things.

[1:12] And I learned to program and, with various other people
including Margaret Boden whom you've interviewed, helped
develop an undergraduate degree in AI and other things, and also began
to do research in AI and so on, which I thought of as doing philosophy,
primarily.

[1:29] And then I later moved to the University of Birmingham and I
was there --- I came in 1991 --- and I've been retired for a while but
I'm not interested in golf or gardening so I just go on doing full
time research, and my department is happy to keep me on without paying
me and provide space and resources, and I keep meeting bright people
at conferences and try to learn and make progress if I can.

** AI is hard, in part because there are tempting non-problems.

One of the things I learnt and understood more and more over the many
years --- forty years or so since I first encountered AI --- is how
hard the problems are, and in part that's because it's very often
tempting to /think/ the problem is something different from what it
actually is, and then people design solutions to the non-problems, and
I think of most of my work now as just helping to clarify what the
problems are: what is it that we're trying to explain --- and maybe
this is leading into what you wanted to talk about:

I now think that one of the ways of getting a deep understanding of
that is to find out what were the problems that biological evolution
solved, because we are a product of /many/ solutions to /many/
problems, and if we just try to go in and work out what the whole
system is doing, we may get it all wrong, or badly wrong.

* What problems of intelligence did evolution solve?

** Intelligence consists of solutions to many evolutionary problems; no single development (e.g. communication) was key to human-level intelligence.

[2:57] Well, first I would challenge that we are the dominant
species. I know it looks like that but actually if you count biomass,
if you count number of species, if you count number of individuals,
the dominant species are microbes --- maybe not one of them but anyway
they're the ones who dominate in that sense, and furthermore we are
mostly --- we are largely composed of microbes, without which we
wouldn't survive.

# ** Many nonlinguistic competences require sophisticated internal representations
[3:27] But there are things that make humans (you could say) best at
those things, or worst at those things, but it's a combination. And I
think it was a collection of developments of which there isn't any
single one. [] There might be, some people say, human language which
changed everything. By human language, they mean human
communication in words, but I think that was a later development from
what must have started as the use of /internal/ forms of
representation --- which are there in nest-building birds, in
pre-verbal children, in hunting mammals --- because you can't take in
information about a complex structured environment in which things can
change and you may have to be able to work out what's possible and
what isn't possible, without having some way of representing the
components of the environment, their relationships, the kinds of
things they can and can't do, the kinds of things you might or might
not be able to do --- and /that/ kind of capability needs internal
languages, and I and colleagues [at Birmingham] have been referring to
them as generalized languages, because some people object to
using the word language to refer to something that isn't used
for communication. But from that viewpoint, not only humans but many
other animals developed abilities to do things to their environment to
make it more friendly to themselves, which depended on being able to
represent possible futures, possible actions, and work out what's the
best thing to do.

[5:13] And nest-building in corvids for instance --- crows, magpies,
[hawks], and so on --- is way beyond what current robots can do, and
in fact I think most humans would be challenged if they had to go and
find a collection of twigs, one at a time, maybe bring them with just
one hand --- or with your mouth --- and assemble them into a
structure that, you know, is shaped like a nest, and is fairly rigid,
and you could trust your eggs in it when the wind blows. But they're
doing it, and so ... they're not our evolutionary ancestors, but
they're an indication --- and that example is an indication --- of
what must have evolved in order to provide control over the
environment in /that/ species.

** Speculation about how communication might have evolved from internal languages.
[5:56] And I think hunting mammals, fruit-picking mammals, mammals
that can rearrange parts of the environment, provide shelters, needed
to have .... also needed to have ways of representing possible
futures, not just what's there in the environment. I think at a later
stage, that developed into a form of communication, or rather the
/internal/ forms of representation became usable as a basis for
providing [context] to be communicated. And that happened, I think,
initially through performing actions that expressed intentions, and
probably led to situations where an action (for instance, moving some
large object) was performed more easily, or more successfully, or more
accurately if it was done collaboratively. So someone who had worked
out what to do might start doing it, and then a conspecific might be
able to work out what the intention is, because that person has the
/same/ forms of representation and can build theories about what's
going on, and might then be able to help.

[7:11] You can imagine that if that started happening more (a lot of
collaboration based on inferred intentions and plans) then sometimes
the inferences might be obscure and difficult, so the /actions/ might
be enhanced to provide signals as to what the intention is, and what
the best way is to help, and so on.

[7:35] So, this is all handwaving and wild speculation, but I think
it's consistent with a large collection of facts which one can look at
--- and find if one looks for them, but one won't know if [some]one
doesn't look for them --- about the way children, for instance, who
can't yet talk, communicate, and the things they'll do, like going to
the mother and turning the face to point in the direction where the
child wants it to look and so on; that's an extreme version of action
indicating intention.

[8:03] Anyway. That's a very long roundabout answer to one conjecture
that the use of communicative language is what gave humans their
unique power to create and destroy and whatever, and I'm saying that
if by that you mean /communicative/ language, then I'm saying there
was something before that which was /non/-communicative language, and I
suspect that noncommunicative language continues to play a deep role
in /all/ human perception --- in mathematical and scientific reasoning, in
problem solving --- and we don't understand very much about it.

[8:48]
I'm sure there's a lot more to be said about the development of
different kinds of senses, the development of brain structures and
mechanisms is above all that, but perhaps I've droned on long enough
on that question.

* How do language and internal states relate to AI?

[9:09] Well, I think most of the human and animal capabilities that
I've been referring to are not yet to be found in current robots or
[computing] systems, and I think there are two reasons for that: one
is that it's intrinsically very difficult; I think that in particular
it may turn out that the forms of information processing that one can
implement on digital computers as we currently know them may not be as
well suited to performing some of these tasks as other kinds of
computing about which we don't know so much --- for example, I think
there may be important special features about /chemical/ computers
which we might [talk about in a little bit? find out about].

** In AI, false assumptions can lead investigators astray.
[9:57] So, one of the problems then is that the tasks are hard ... but
there's a deeper problem as to why AI hasn't made a great deal of
progress on these problems that I'm talking about, and that is that
most AI researchers assume things --- and this is not just AI
researchers, but [also] philosophers, and psychologists, and people
studying animal behavior --- make assumptions about what it is that
animals or humans do, for instance make assumptions about what vision
is for, or assumptions about what motivation is and how motivation
works, or assumptions about how learning works, and then they try ---
the AI people try --- to model [or] build systems that perform those
assumed functions. So if you get the /functions/ wrong, then even if
you implement some of the functions that you're trying to implement,
they won't necessarily perform the tasks that the initial objective
was to imitate, for instance the tasks that humans, and nest-building
birds, and monkeys and so on can perform.

** Example: Vision is not just about finding surfaces, but about finding affordances.
[11:09] I'll give you a simple example --- well, maybe not so simple,
but --- It's often assumed that the function of vision in humans (and
in other animals with good eyesight and so on) is to take in optical
information that hits the retina, and form into the (maybe changing
--- or, really, in our case definitely changing) patterns of
illumination where there are sensory receptors that detect those
patterns, and then somehow from that information (plus maybe other
information gained from head movement or from comparisons between two
eyes) to work out what there was in the environment that produced
those patterns, and that is often taken to mean \ldquo{}where were the
surfaces off which the light bounced before it came to me\rdquo{}. So
you essentially think of the task of the visual system as being to
reverse the image formation process: so the 3D structure's there, the
lens causes the image to form in the retina, and then the brain goes
back to a model of that 3D structure there. That's a very plausible
theory about vision, and it may be that that's a /subset/ of what
human vision does, but I think James Gibson pointed out that that kind
of thing is not necessarily going to be very useful for an organism,
and it's very unlikely that that's the main function of perception in
general, namely to produce some physical description of what's out
there.

[12:37] What does an animal /need/? It needs to know what it can do,
what it can't do, what the consequences of its actions will be
.... so, he introduced the word /affordance/; from his point of
view, the function of vision, of perception, is to inform the organism
of what the /affordances/ are for action, where that would mean what
the animal, /given/ its morphology (what it can do with its mouth, its
limbs, and so on, and the ways it can move), what it can do, what its
needs are, what the obstacles are, and how the environment supports or
obstructs those possible actions.

[13:15] And that's a very different collection of information
structures that you need from, say, \ldquo{}where are all the
surfaces?\rdquo{}: if you've got all the surfaces, /deriving/ the
affordances would still be a major task. So, if you think of the
perceptual system as primarily (for biological organisms) being
devices that provide information about affordances and so on, then the
tasks look very different. And most of the people doing
research on computer vision in robots, I think, haven't taken all that
on board, so they're trying to get machines to do things which, even
if they were successful, would not make the robots very intelligent
(and in fact, even the ones they're trying to do are not really easy
to do, and they don't succeed very well --- although, there's progress;
I shouldn't disparage it too much.)

** Online and offline intelligence

[14:10] It gets more complex as animals get more sophisticated. So, I
like to make a distinction between online intelligence and offline
intelligence. So, for example, if I want to pick something up --- like
this leaf <he plucks a leaf from the table> --- I was able to select
it from all the others in there, and while moving my hand towards it,
I was able to guide its trajectory, making sure it was going roughly
in the right direction --- as opposed to going out there, which
wouldn't have enabled me to pick it up --- and these two fingers ended
up with a portion of the leaf between them, so that I was able to tell
when I was ready to do that <he clamps the leaf between two fingers>
and at that point, I clamped my fingers and then I could pick up the
leaf.

[14:54] Whereas --- and that's an example of online intelligence:
during the performance of an action (both from the stage where it's
initiated, and during the intermediate stages, and where it's
completed) I'm taking in information relevant to controlling all those
stages, and that relevant information keeps changing. That means I
need stores of transient information which gets discarded almost
immediately and replaced or something. That's online intelligence. And
there are many forms; that's just one example, and Gibson discussed
quite a lot of examples which I won't try to replicate now.
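
As a rough sketch of that idea (mine, not Sloman's model), online
intelligence behaves like closed-loop control: at each step the agent
senses a transient error, uses it immediately to correct the ongoing
movement, and then discards it.

#+begin_src python
# Minimal closed-loop reaching sketch: per-step feedback guides the
# hand toward the target; the sensed error is transient, used once,
# and thrown away each iteration.
def reach(target, hand=(0.0, 0.0), gain=0.3, tol=0.01, max_steps=200):
    for step in range(max_steps):
        # Transient information: the currently sensed error.
        error = (target[0] - hand[0], target[1] - hand[1])
        if (error[0] ** 2 + error[1] ** 2) ** 0.5 < tol:
            return hand, step          # close enough: clamp the fingers
        # Use the error to correct the trajectory, then discard it.
        hand = (hand[0] + gain * error[0], hand[1] + gain * error[1])
    return hand, max_steps

print(reach(target=(1.0, 0.5)))
#+end_src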

[15:30] But in offline intelligence, you're not necessarily actually
/performing/ the actions when you're using your intelligence; you're
thinking about /possible/ actions. So, for instance, I could think
about how fast or by what route I would get back to the lecture room
if I wanted to [get to the next talk] or something. And I know where
the door is, roughly speaking, and I know roughly which route I would
take: when I go out, whether I should go to the left or to the right, because
I've stored information about where the spaces are, where the
buildings are, where the door was that we came out of --- but in using
that information to think about that route, I'm not actually
performing the action. I'm not even /simulating/ it in detail: the
precise details of direction and speed and when to clamp my fingers,
or when to contract my leg muscles when walking, are all irrelevant to
thinking about a good route, or thinking about the potential things
that might happen on the way. Or what would be a good place to meet
someone who I think [for an acquaintance in particular] --- [barber]
or something --- I don't necessarily have to work out exactly /where/
the person's going to stand, or from what angle I would recognize
them, and so on.

[16:46] So, offline intelligence --- which I think became not just a
human competence; I think there are other animals that have aspects of
it: squirrels are very impressive as you watch them. Gray squirrels, at
any rate, as you watch them defeating squirrel-proof birdfeeders, seem
to have a lot of that [offline intelligence], as well as the online
intelligence when they eventually perform the action they've worked
out [] that will get them to the nuts.

[17:16] And I think that what happened during our evolution is that
mechanisms developed for acquiring and processing and storing and
manipulating information that is more and more remote from the
performance of actions. An example is taking in information about
locations that you might need to go to infrequently: there's a
store of a particular type of material that's good for building on
roofs of houses or something out around there in some
direction. There's a good place to get water somewhere in another
direction. There are people that you'd like to go and visit in
another place, and so on.

[17:59] So taking in information about an extended environment and
building it into a structure that you can make use of for different
purposes is another example of offline intelligence. And when we do
that, we sometimes use only our brains, but in modern times, we also
learned how to make maps on paper and walls and so on. And it's not
clear whether the stuff inside our heads has the same structures as
the maps we make on paper: the maps on paper have a different
function; they may be used to communicate with others, or meant for
/looking/ at, whereas the stuff in your head you don't /look/ at; you
use it in some other way.

[18:46] So, what I'm getting at is that there's a great deal of human
intelligence (and animal intelligence) which is involved in what's
possible in the future, what exists in distant places, what might have
happened in the past (sometimes you need to know why something is as
it is, because that might be relevant to what you should or shouldn't
do in the future, and so on), and I think there was something about
human evolution that extended that offline intelligence way beyond
that of other animals. And I don't think it was /just/ human language (though
human language had something to do with it) but I think there was
something else that came earlier than language which involves the
ability to use your offline intelligence to discover something that
has a rich mathematical structure.

** Example: Even toddlers use sophisticated geometric knowledge
<<example-gap>>
[19:44] I'll give you a simple example: if you look through a gap, you
can see something that's on the other side of the gap. Now, you
/might/ see what you want to see, or you might see only part of it. If
you want to see more of it, which way would you move? Well, you could
either move /sideways/, and see through the gap --- and see it roughly
the same amount but a different part of it [if it's a ????] --- or you
could move /towards/ the gap and then your view will widen as you
approach the gap. Now, there's a bit of mathematics in there, insofar
as you are implicitly assuming that information travels in straight
lines, and as you go closer to a gap, the straight lines that you can
draw from where you are through the gap widen as you approach that
gap. Now, there's a kind of theorem of Euclidean geometry in there
which I'm not going to try to state very precisely (and as far as I
know, wasn't stated explicitly in Euclidean geometry) but it's
something every toddler --- human toddler --- learns. (Maybe other
animals also know it, I don't know.) But there are many more things:
actions to perform to get you more information about things, actions
to perform to conceal information from other people, actions that will
enable you to act on a rigid object in one place in order
to produce an effect in another place. So, there's a lot of stuff that
involves lines and rotations and angles and speeds and so on that I
think humans (maybe, to a lesser extent, other animals) develop the
ability to think about in a generic way. That means that you could
take out the generalizations from the particular contexts and then
re-use them in new contexts in ways that I think are not yet
represented at all in AI and in theories of human learning in any []
way --- although some people are trying to study the learning of mathematics.
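
The toddler's theorem can be stated precisely (my formalization, not
from the interview): if the gap has width $w$ and you stand at distance
$d$ from it, the visual angle it subtends is

\[
\theta(d) = 2\arctan\!\left(\frac{w}{2d}\right),
\]

which grows monotonically as $d$ shrinks --- so moving /towards/ the
gap widens the view, while moving sideways leaves $d$ (and hence
$\theta$) roughly unchanged and merely shifts which part of the scene
the straight lines of sight reach.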

* Animal intelligence

** The priority is /cataloguing/ what competences have evolved, not ranking them.
[22:03] I wasn't going to challenge the claim that humans can do more
sophisticated forms of [tracking], just to mention that there are some
things that other animals can do which are in some ways comparable to,
and in some ways superior to, [things] that humans can do. In particular,
there are species of birds and also, I think, some rodents ---
squirrels, or something --- I don't know enough about the variety ---
that can hide nuts and remember where they've hidden them, and go back
to them. And there have been tests which show that some birds are able
to hide tens --- you know, [eighteen] or something nuts --- and to
remember which ones have been taken, which ones haven't, and so
on. And I suspect most humans can't do that. I wouldn't want to say
categorically that maybe we couldn't, because humans are very
[varied], and also [a few] people can develop particular competences
through training. But it's certainly not something I can do.

** AI can be used to test philosophical theories
[23:01] But I also would like to say that I am not myself particularly
interested in trying to align animal intelligences according to any
kind of scale of superiority; I'm just trying to understand what it
was that biological evolution produced, and how it works, and I'm
interested in AI /mainly/ because I think that when one comes up with
theories about how these things work, one needs to have some way of
testing the theory. And AI provides ways of implementing and testing
theories that were not previously available: Immanuel Kant was trying
to come up with theories about how minds work, but he didn't have any
kind of a mechanism that he could build to test his theory about the
nature of mathematical knowledge, for instance, or how concepts were
developed from babyhood onward. Whereas now, if we do develop a
theory, we have a criterion of adequacy, namely it should be precise
enough and rich enough and detailed enough to enable a model to be
built. And then we can see if it works.

[24:07] If it works, it doesn't mean we've proved that the theory is
correct; it just shows it's a candidate. And if it doesn't work, then
it's not a candidate as it stands; it would need to be modified in
some way.

* Is abstract general intelligence feasible?

** It's misleading to compare the brain and its neurons to a computer made of transistors
[24:27] I think there's a lot of optimism based on false clues:
for example, one of the false clues is to count the number of
neurons in the brain, and then talk about the number of transistors
you can fit into a computer or something, and then compare them. It
might turn out that the study of the way synapses work (which leads
some people to say that a typical synapse [] in the human brain has
computational power comparable to the Internet a few years ago,
because of the number of different molecules that are doing things,
the variety of types of things that are being done in those molecular
interactions, and the speed at which they happen) gives you these
comparable figures, if you somehow count up the number of operations
per second or something.
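
A back-of-envelope version of that comparison (my illustration, using
commonly cited rough figures, not Sloman's numbers):

#+begin_src python
# Naive "count and compare" arithmetic. The point is not the exact
# numbers but that the units being compared are not equivalent.
neurons     = 8.6e10   # ~86 billion neurons in a human brain (rough)
synapses    = 1.0e14   # on the order of 10^14 synapses (rough)
transistors = 2.0e10   # transistors on a large chip (order of magnitude)

print(f"synapses per transistor: {synapses / transistors:,.0f}")
# If each synapse is itself a complex molecular information-processing
# system, counting it as one "component" understates the brain's
# capacity by an unknown, possibly enormous, factor.
#+end_src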

** For example, brains may rely heavily on chemical information processing
Now even if the details aren't right, there may just be a lot of
information processing going on in brains at the /molecular/
level, not the neural level. Then, if that's the case, the processing
units will be orders of magnitude larger in number than the number of
neurons. And it's certainly the case that all the original biological
forms of information processing were chemical; there weren't brains
around, and still aren't in most microbes. And even when humans grow
their brains, the process of starting from a fertilized egg and
producing this rich and complex structure is, for much of the time,
under the control of chemical computations, chemical information
processing --- of course combined with physical sorts of materials and
energy and so on as well.

[26:25] So it would seem very strange if all that capability was
something thrown away when you've got a brain and all the information
processing, the [challenges that were handled in making a brain],
... This is handwaving on my part; I'm just saying that we /might/
learn that what brains do is not what we think they do, and that the
problems of replicating them are not what we think they are, solely in
terms of numerical estimates of time scales, the number of components,
and so on.

** Brain algorithms may simply be optimized for certain kinds of information processing other than bit manipulations
[26:56] But apart from that, the other basis of skepticism concerns
how well we understand what the problems are. I think there are many
people who try to formalize the problems of designing an intelligent
system in terms of streams of information thought of as bit streams or
collections of bit streams, and they think of the problems of
intelligence as being the construction or detection of patterns in
those, and perhaps not just detection of patterns, but detection of
patterns that are useable for sending /out/ streams to control motors
and so on in order to []. And that way of conceptualizing the problem
may lead on the one hand to oversimplification, so that the things
that /would/ be achieved, if those goals were achieved, may be much
simpler, in some ways inadequate for the replication of human
intelligence, or the matching of human intelligence --- or, for that
matter, squirrel intelligence. But in another way, it may also make
the problem harder: it may be that some of the kinds of things that
biological evolution has achieved can't be done that way. And one of
the ways that might turn out to be the case is not because it's
impossible in principle to do some of the information processing on
artificial computers based on transistors and other bit-manipulating
[] --- but it may just be that the computational complexity of
solving problems, or finding solutions to complex problems, is
much greater, and therefore you might need a much larger universe than
we have available in order to do things.

** Example: find the shortest path by dangling strings
[28:55] Then if the underlying mechanisms were different, the
information processing mechanisms, they might be better tailored to
particular sorts of computation. There's a [] example, which is
finding the shortest route if you've got a collection of roads, and
they may be curved roads, and lots of tangled routes from A to B to C,
and so on. And if you start at A and you want to get to Z --- a place
somewhere on that map --- the process of finding the shortest route
will involve searching through all these different possibilities and
rejecting some that are longer than others and so on. But if you make
a model of that map out of string, where the strings are all laid
out on the map and so have the lengths of the routes, then if you
hold the two knots in the string --- it's a network of string --- which
correspond to the start point and end point, and /pull/, then the
bits of string that you're left with in a straight line will give you
the shortest route, and that process of pulling just gets you the
solution very rapidly in a parallel computation, where all the others
just hang by the wayside, so to speak.
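
A discrete analogue of the string computer (my sketch, not from the
interview): pulling the two knots taut is like relaxing every edge of
the network in parallel until no string can get any tighter, which on
a digital machine is Bellman-Ford-style relaxation; the physical
version performs all the \ldquo{}relaxations\rdquo{} at once.

#+begin_src python
import math

def string_pull(edges, start, end):
    """edges maps (node_a, node_b) -> string length (undirected)."""
    nodes = {n for pair in edges for n in pair}
    dist = {n: math.inf for n in nodes}
    dist[start] = 0.0
    for _ in range(len(nodes) - 1):           # enough rounds to settle
        for (a, b), length in edges.items():  # every string tightens
            dist[b] = min(dist[b], dist[a] + length)
            dist[a] = min(dist[a], dist[b] + length)
    return dist[end]

roads = {("A", "B"): 2.0, ("B", "Z"): 2.0, ("A", "C"): 1.0, ("C", "Z"): 4.0}
print(string_pull(roads, "A", "Z"))  # -> 4.0, the taut A-B-Z route
#+end_src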

** In sum, we know surprisingly little about the kinds of problems that evolution solved, and the manner in which they were solved.
[30:15] Now, I'm not saying brains can build networks of string and
pull them or anything like that; that's just an illustration of how if
you have the right representation, correctly implemented --- or suitably
implemented --- for a problem, then you can avoid very combinatorially
complex searches, which will maybe grow exponentially with the number
of components in your map, whereas with this thing, the time it takes
won't depend on how many strings you've [got on the map]; you just
pull, and it will depend only on the shortest route that exists in
there, even if that shortest route wasn't obvious on the original map.

[30:59] So that's a rather long-winded way of formulating --- a
roundabout way of supporting --- the conjecture that there may be
something about the way molecules perform computations: they have the
combination of continuous change as things move through space and come
together and move apart and whatever --- and also snap into states
that then persist, so [as you learn from] quantum mechanics, you can
have stable molecular structures which are quite hard to separate (you
can separate them in catalytic processes, or at extreme temperatures,
or with strong forces), but they may nevertheless be able to move very
rapidly in some conditions in order to perform computations.

[31:49] Now there may be things about that kind of structure that
enable searching for solutions to /certain/ classes of problems to be
done much more efficiently (by brains) than anything we could do with
computers. It's just an open question.

[32:04] So it /might/ turn out that we need new kinds of technology
that aren't on the horizon in order to replicate the functions that
animal brains perform --- or, it might not. I just don't know. I'm not
claiming that there's strong evidence for that; I'm just saying that
it might turn out that way, partly because I think we know less than
many people think we know about what biological evolution achieved.

[32:28] There are some other possibilities: we may just find out that
there are shortcuts no one ever thought of, and it will all happen
much more quickly --- I have an open mind; I'd be surprised, but it
could turn up. There /is/ something that worries me much more than the
singularity that most people talk about, which is machines achieving
human-level intelligence and perhaps taking over [the] planet or
something. There's what I call the /singularity of cognitive
catch-up/ ...

* A singularity of cognitive catch-up

** What if it will take a lifetime to learn enough to make something new?
... SCC, singularity of cognitive catch-up, which I think we're close
to, or maybe have already reached --- I'll explain what I mean by
that. One of the products of biological evolution --- and this is one of
the answers to your earlier questions which I didn't get on to --- is
that humans have not only the ability to make discoveries that none of
their ancestors have ever made, but to shorten the time required for
similar achievements to be reached by their offspring and their
descendants. So once we, for instance, worked out ways of doing complex
computations, or ways of building houses, or ways of finding our way
around, our children don't need to work it out for
themselves by the same lengthy trial and error procedure; we can help
them get there much faster.

Okay, well, what I've been referring to as the singularity of
cognitive catch-up depends on the fact --- fairly obvious, and it's
often been commented on --- that in the case of humans, it's not necessary
for each generation to learn what previous generations learned /in the
same way/. And we can speed up learning: once something has been
learned, [it is able to] be learned by new people. And that has meant
that the social processes that support that kind of education of the
young can enormously accelerate: what would have taken perhaps
thousands [or] millions of years for evolution to produce can happen in
a much shorter time.

[34:54] But here's the catch: in order for a new advance to happen ---
for something new to be discovered that wasn't there before, like
Newtonian mechanics, or the theory of relativity, or Beethoven's music
or [style] or whatever --- the individuals have to have traversed a
significant amount of what their ancestors have learned, even if they
do it much faster than their ancestors, to get to the point where they
can see the gaps, the possibilities for going further than their
ancestors, or their parents or whatever, have done.

[35:27] Now in the case of knowledge of science, mathematics,
philosophy, engineering and so on, there's been a lot of accumulated
knowledge. And humans are living a /bit/ longer than they used to, but
they're still living for [whatever it is], a hundred years, or for
most people, less than that. So you can imagine that there might come
a time when in a normal human lifespan, it's not possible for anyone
to learn enough to understand the scope and limits of what's already
been achieved, in order to see the potential for going beyond it and to
build on what's already been done to make those future steps.

[36:10] So if we reach that stage, we will have reached the
singularity of cognitive catch-up, because the process of education
that enables individuals to learn faster than their ancestors did is
the catching-up process, and it may just be that we at some point
reach a point where catching up can only barely happen within the lifetime of
an individual, and after that they're dead and they can't go
beyond. And I have some evidence that there's a lot of that around,
because I see a lot of people coming up with what /they/ think of as
new ideas which they've struggled to come up with, but actually they
just haven't taken in some of what was done [] by
other people, in other places before them. And I think that despite
the availability of search engines which make it /easier/ for people
to get the information --- for instance, when I was a student, if I
wanted to find out what other people had done in the field, it was a
laborious process --- going to the library, getting books ---
whereas now, I can often do things in seconds that would have taken
hours. So that means that if seconds [are needed] for that kind of
work, my lifespan has been extended by a factor of ten or
something. So maybe that /delays/ the singularity, but it may not
delay it enough. But that's an open question; I don't know. And it may
just be that in some areas this is more of a problem than in others. For
instance, it may be that in some kinds of engineering, we're handing
over more and more of the work to machines anyway and they can go on
doing it. So for instance, most of the production of computers now is
done by computer-controlled machines --- although some of the design
work is done by humans, a lot of /detail/ of the design is done by
computers, and they produce the next generation, which then produces
the next generation, and so on.

[37:57] I don't know if humans can go on having major advances;
it'll be kind of sad if we can't.

* Spatial reasoning: a difficult problem

[38:15] Okay, well, there are different problems [in] mathematics, and
they have to do with properties. So for instance, a lot of mathematics
can be expressed in terms of logical structures or algebraic
structures, and those are pretty well suited for manipulation on
computers, and if a problem can be specified using the
logical/algebraic notation, and the solution method requires creating
something in that sort of notation, then computers are pretty good,
and there are lots of mathematical tools around --- there are theorem
provers and theorem checkers, and all kinds of things --- which couldn't
have existed fifty, sixty years ago, and they will continue getting
better.

But there was something that I was [[example-gap][alluding to earlier]] when I gave the
example of how you can reason about what you will see by changing your
position in relation to a door, where what you are doing is using your
grasp of spatial structures and how, as one spatial relationship
changes --- namely, you come closer to the door or move sideways and
parallel to the wall or whatever --- other spatial relationships change
in parallel, so the lines from your eyes through to other
parts of the room on the other side of the doorway
spread out more as you go towards the doorway, and as you move
sideways, they don't spread out differently, but focus on different
parts of the internal ... they access different parts
... of the room.

Now, those are examples of ways of thinking about relationships and
changing relationships which are not the same as thinking about what
happens if I replace this symbol with that symbol, or if I substitute
this expression in that expression in a logical formula. And at the
moment, I do not believe that there is anything in AI, amongst the
mathematical reasoning community, the theorem-proving community, that
can model the processes that go on when a young child starts learning
to do Euclidean geometry and is taught things about --- for instance, I
can give you a proof that the angles of any triangle add up to a
straight line, 180 degrees.

** Example: Spatial proof that the angles of any triangle add up to a half-circle
There are standard proofs which involve starting with one triangle,
then adding a line parallel to the base; but one of my former students,
Mary Pardoe, came up with one which I will demonstrate with this <he holds
up a pen> --- can you see it? If I have a triangle here that's got
three sides, if I put this thing on it, on one side --- let's say the
bottom --- I can rotate it until it lies along the second...another
side, and then maybe move it up to the other end. Then I can rotate
it again, until it lies on the third side, and move it back to the
other end. And then I'll rotate it again and it'll eventually end up
on the original side, but it will have changed the direction it's
pointing in --- and it won't have crossed over itself, so it will have
gone through a half-circle, and that says that the three angles of a
triangle add up to the rotations of half a circle, which is a
beautiful kind of proof, and almost anyone can understand it. Some
mathematicians don't like it, because they say it hides some of the
assumptions, but nevertheless, as far as I'm concerned, it's an
example of a human ability to do reasoning which, once you've
understood it, you can see will apply to any triangle --- it's got to
be a planar triangle --- not a triangle on a globe, because then the
angles can add up to more than ... you can have three /right/ angles
if you have an equator...a line on the equator, and a line going up to
the north pole of the earth, and then you have a right angle and
then another line going down to the equator, and you have a right
angle, right angle, right angle, and they add up to more than a
straight line. But that's because the triangle isn't in the plane;
it's on a curved surface. In fact, that's one of the
differences...definitional differences you can take between planar and
curved surfaces: how much the angles of a triangle add up to. But our
ability to /visualize/ and notice the generality in that process, and
see that you're going to be able to do the same thing using triangles
that stretch in all sorts of ways, or if it's a million times as
large, or if it's made...you know, if it's drawn in
different colors or whatever --- none of that's going to make any
difference to the essence of that process. And that ability to see
the commonality in a spatial structure enables you to draw some
conclusions with complete certainty --- subject to the possibility that
sometimes you make mistakes, but when you make mistakes, you can
discover them, as has happened in the history of geometrical theorem
proving. Imre Lakatos had a wonderful book called [[http://en.wikipedia.org/wiki/Proofs_and_Refutations][/Proofs and
Refutations/]] --- which I won't try to summarize --- but he has
examples: mistakes were made; that was because people didn't always
realize there were subtle subcases which had slightly different
properties, and they didn't take account of that. But once they're
noticed, you rectify that.
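
What the rotating pen computes can be written down compactly (my
formalization, not from the interview): the pen turns through each
interior angle exactly once without ever crossing itself, so its total
rotation is

\[
\alpha + \beta + \gamma = \pi \quad (\text{a half turn, } 180^\circ),
\]

and on a sphere the same bookkeeping fails: Girard's theorem gives
$\alpha + \beta + \gamma = \pi + A/R^2$ for a spherical triangle of
area $A$ on a sphere of radius $R$, which is why the equator-to-pole
triangle with three right angles exceeds a straight line.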

** Geometric results are fundamentally different than experimental results in chemistry or physics.
[43:28] But it's not the same as doing experiments in chemistry and
physics, where you can't be sure it'll be the same on [] or at a high
temperature, or in a very strong magnetic field --- with geometric
reasoning, in some sense you've got the full information in front of
you, even if you don't always notice an important part of it. So, that
kind of reasoning (as far as I know) is not implemented anywhere in a
computer. And most people who do research on trying to model
mathematical reasoning don't pay any attention to that, because
... they just don't think about it. They start from somewhere else,
maybe because of how they were educated. I was taught Euclidean
geometry at school. Were you?

(Adam Ford: Yeah.)

Many people are not now. Instead they're taught set theory, and
logic, and arithmetic, and [algebra], and so on. And so they don't use
that bit of their brains, without which we wouldn't have built any of
the cathedrals, and all sorts of things we now depend on.

* Is near-term artificial general intelligence likely?

** Two interpretations: a single mechanism for all problems, or many mechanisms unified in one program.

[44:35] Well, this relates to what's meant by general. And when I
first encountered the AGI community, I thought that what they all
meant by general intelligence was /uniform/ intelligence ---
intelligence based on some common simple (maybe not so simple, but)
single powerful mechanism or principle of inference. And there are
some people in the community who are trying to produce things like
that, often in connection with algorithmic information theory and
computability of information, and so on. But there's another sense of
general which means that the system of general intelligence can do
lots of different things, like perceive things, understand language,
move around, make things, and so on --- perhaps even enjoy a joke;
that's something that's not nearly on the horizon, as far as I
know. Enjoying a joke isn't the same as being able to make laughing
noises.

Given, then, that there are these two notions of general
intelligence --- there's one that looks for one uniform, possibly
simple, mechanism or collection of ideas and notations and algorithms
that will deal with any problem that's solvable, and the other
that's general in the sense that it can do lots of different things
that are combined into an integrated architecture (which raises lots
of questions about how you combine these things and make them work
together) --- we humans, certainly, are of the second kind: we do all
sorts of different things, and other animals also seem to be of the
second kind, perhaps not as general as humans. Now, it may turn out
that in some near future time, who knows --- decades, a few
decades --- you'll be able to get machines that are capable of solving,
in a time that will depend on the nature of the problem, any
problem that is solvable, and they will be able to do it in some sort
of tractable time --- of course, there are some problems that are
solvable that would require a larger universe and a longer history
than the history of the universe, but apart from that constraint,
these machines will be able to do anything []. But to be able to do
some of the kinds of things that humans can do, like the kinds of
geometrical reasoning where you look at the shape and you abstract
away from the precise angles and sizes and shapes and so on, and
realize there's something general here, as must have happened when our
ancestors first made the discoveries that were eventually put together in
Euclidean geometry.

It may be that that requires mechanisms of a kind that we don't know
anything about at the moment. Maybe brains are using molecules and
rearranging molecules in some way that supports that kind of
reasoning. I'm not saying they are --- I don't know, I just don't see
any simple...any obvious way to map that kind of reasoning capability
onto what we currently do on computers. There is --- and I just
mentioned this briefly beforehand --- there is a kind of thing that's
sometimes thought of as a major step in that direction, namely you can
build a machine (or a software system) that can represent some
geometrical structure, and then be told about some change that's going
to happen to it, and it can predict in great detail what'll
happen. And this happens for instance in game engines, where you say
we have all these blocks on the table and I'll drop one other block,
and then [the thing] uses Newton's laws and properties of rigidity of
the parts and the elasticity and also stuff about geometries and space
and so on, to give you a very accurate representation of what'll
happen when this brick lands on this pile of things, [it'll bounce and
go off, and so on]. And just with more memory and more CPU power,
you can increase the accuracy --- but that's totally different from
looking at /one/ example and working out what will happen in a whole
/range/ of cases at a higher level of abstraction, whereas the game
engine does it in great detail for /just/ this case, with /just/ those
precise things, and it won't even know what the generalizations are
that it's using that would apply to others []. So, in that sense, [we]
may get AGI --- artificial general intelligence --- pretty soon, but
it'll be limited in what it can do. And the other kind of general
intelligence which combines all sorts of different things, including
human spatial geometrical reasoning, and maybe other things, like the
ability to find things funny, and to appreciate artistic features and
other things, may need forms of pattern-mechanism, and I have an open
mind about that.
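
A toy contrast between the two kinds of prediction (my illustration,
not from the interview): the simulation answers one precise case,
while the algebraic step answers a whole range of cases at once.

#+begin_src python
# Game-engine style: step-by-step numerical prediction of ONE case.
def fall_time_simulated(height_m, dt=1e-4, g=9.81):
    h, t, v = height_m, 0.0, 0.0
    while h > 0:
        v += g * dt
        h -= v * dt
        t += dt
    return t

# Abstraction: from h = g t^2 / 2 we get t = sqrt(2h/g) --- one piece
# of reasoning that covers /every/ drop height at once.
def fall_time_general(height_m, g=9.81):
    return (2 * height_m / g) ** 0.5

print(fall_time_simulated(5.0))  # this case only, to numerical accuracy
print(fall_time_general(5.0))    # an instance of a general law (~1.0096 s)
#+end_src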

* Abstract General Intelligence impacts

[49:53] Well, as far as the first type's concerned, it could be useful
for all kinds of applications --- there are people who worry that
where there's a system that has that type of intelligence, it might in
some sense take over control of the planet. Well, humans often do
stupid things, and they might do something stupid that would lead to
disaster, but I think it's more likely that there would be other
things [that] lead to disaster --- population problems, using up all the
resources, destroying ecosystems, and whatever. But certainly it would
go on being useful to have these calculating devices. Now, as for the
second kind, I don't know --- if we succeeded at putting
together all the parts that we find in humans, we might just make an
artificial human, and then we might have some of them as your friends,
and some of them we might not like, and some of them might become
teachers or whatever, composers --- but that raises a question: could
they, in some sense, be superior to us, in their learning
capabilities, their understanding of human nature, or maybe their
wickedness or whatever --- these are all issues on which I expect the
best science fiction writers would give better answers than anything I
could do, but I did once fantasize, [back] in 1978, that perhaps
if we achieved that kind of thing, they would be wise, and gentle
and kind, and realize that humans are an inferior species that, you
know, have some good features, so they'd keep us in some kind of
secluded...restrictive kind of environment, keep us away from
dangerous weapons, and so on, and find ways of cohabiting with
us. But that's just fantasy.

Adam Ford: Awesome. Yeah, there's an interesting story /With Folded
Hands/ where [the computers] want to take care of us and want to
reduce suffering and end up lobotomizing everybody [but] keeping them
alive so as to reduce the suffering.

Aaron Sloman: Not all that different from /Brave New World/, where it
was done with drugs and so on, but different humans are given
different roles in that system, yeah.

There's also /The Time Machine/, H.G. Wells, where the ... in the
distant future, humans have split in two: the Eloi, I think they were
called, they lived underground, they were the [] ones, and then --- no,
the Morlocks lived underground; the Eloi lived on the planet; they were
pleasant and pretty but not very bright, and so on, and they were fed
on by ...

Adam Ford: [] in the future.

Aaron Sloman: As I was saying, if you ask science fiction writers,
you'll probably come up with a wide variety of interesting answers.

Adam Ford: I certainly have; I've spoken to [] of Birmingham, and
Sean Williams, ... who else?

Aaron Sloman: Did you ever read a story by E.M. Forster called /The
Machine Stops/ --- very short story, it's [[http://archive.ncsa.illinois.edu/prajlich/forster.html][on the Internet somewhere]]
--- it's about a time when people sitting ... and this was written in
about [1914] so it's about...over a hundred years ago ... people are
in their rooms, they sit in front of screens, and they type things,
and they communicate with one another that way, and they don't meet;
they have debates, and they give lectures to their audiences that way,
and then there's a woman whose son says \ldquo{}I'd like to see
you\rdquo{} and she says \ldquo{}What's the point? You've got me at
this point\rdquo{} but he wants to come and talk to her --- I won't
tell you how it ends, but.

Adam Ford: Reminds me of the Internet.

Aaron Sloman: Well, yes; he invented ... it was just extraordinary
that he was able to do that, before most of the components that we
need for it existed.

Adam Ford: [Another person who did that] was Vernor Vinge [in] /True
Names/.

Aaron Sloman: When was that written?

Adam Ford: The seventies.

Aaron Sloman: Okay, well a lot of the technology was already around
then. The original bits of the Internet were working in about 1973. In
1974, I was sitting at Sussex University trying to
use...learn LOGO, the programming language, to decide whether it was
going to be useful for teaching AI, and I was sitting at a [] paper
teletype, there was paper coming out, transmitting ten characters a
second from Sussex to the UCL computer lab by telegraph cable, from there
to somewhere in Norway via another cable, from there by satellite to
California, to a computer at the Xerox [] research center, where they
had implemented a LOGO system on a computer, with someone I had
met previously in Edinburgh, Danny Bobrow, and he allowed me to have
access to this system. So there I was, typing. And furthermore, it was
duplex typing, so every character I typed didn't show up on my
terminal until it had gone all the way there and echoed back, so I
would type, and the characters would come back four seconds later.

[55:26] But that was the Internet, and I think Vernor Vinge was
writing after that kind of thing had already started, but I don't
know. Anyway.

[55:41] Another... I mentioned H.G. Wells, /The Time Machine/. I
recently discovered, because [[http://en.wikipedia.org/wiki/David_Lodge_(author)][David Lodge]] had written a sort of
semi-novel about him, that he had invented Wikipedia, in advance --- he
had this notion of an encyclopedia that was free to everybody, and
everybody could contribute and [collaborate on it]. So, go to the
science fiction writers to find out the future --- well, a range of
possible futures.

Adam Ford: Well the thing is with science fiction writers, they have
to maintain some sort of interest for their readers; after all, the
science fiction which reaches us is the stuff that publishers want to
sell, and so there's a little bit of a ... a bias towards making a
plot device there, and so the dramatic sort of appeals to our
amygdala, our lizard brain; we'll sort of stay there obviously to some
extent. But I think that they do come up with sort of amazing ideas; I
think it's worth trying to make these predictions; I think that we
should spend more time on strategic forecasting, I mean take that seriously.

Aaron Sloman: Well, I'm happy to leave that to others; I just want to
try to understand these problems that bother me about how things
work. And it may be that some would say that's irresponsible if I
don't think about what the implications will be. Well, understanding
how humans work /might/ enable us to make [] humans --- I suspect it
won't happen in this century; I think it's going to be too difficult.