view org/sloman-old.html @ 126:72c6ede12806

fix dangling SOM description.
author Robert McIntyre <rlm@mit.edu>
date Thu, 05 Jun 2014 16:11:29 -0400
parents 414a10d51d9f
1 <?xml version="1.0" encoding="utf-8"?>
2 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
3 "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
4 <html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
5 <head>
6 <title>Transcript of Aaron Sloman - Artificial Intelligence - Psychology - Oxford Interview</title>
7 <meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
8 <meta name="title" content="Transcript of Aaron Sloman - Artificial Intelligence - Psychology - Oxford Interview"/>
9 <meta name="generator" content="Org-mode"/>
10 <meta name="generated" content="2013-10-04 18:49:53 UTC"/>
11 <meta name="author" content="Dylan Holmes"/>
12 <meta name="description" content=""/>
13 <meta name="keywords" content=""/>
14 <style type="text/css">
15 <!--/*--><![CDATA[/*><!--*/
16 html { font-family: Times, serif; font-size: 12pt; }
17 .title { text-align: center; }
18 .todo { color: red; }
19 .done { color: green; }
20 .tag { background-color: #add8e6; font-weight:normal }
21 .target { }
22 .timestamp { color: #bebebe; }
23 .timestamp-kwd { color: #5f9ea0; }
24 .right {margin-left:auto; margin-right:0px; text-align:right;}
25 .left {margin-left:0px; margin-right:auto; text-align:left;}
26 .center {margin-left:auto; margin-right:auto; text-align:center;}
27 p.verse { margin-left: 3% }
28 pre {
29 border: 1pt solid #AEBDCC;
30 background-color: #F3F5F7;
31 padding: 5pt;
32 font-family: courier, monospace;
33 font-size: 90%;
34 overflow:auto;
35 }
36 table { border-collapse: collapse; }
37 td, th { vertical-align: top; }
38 th.right { text-align:center; }
39 th.left { text-align:center; }
40 th.center { text-align:center; }
41 td.right { text-align:right; }
42 td.left { text-align:left; }
43 td.center { text-align:center; }
44 dt { font-weight: bold; }
45 div.figure { padding: 0.5em; }
46 div.figure p { text-align: center; }
47 div.inlinetask {
48 padding:10px;
49 border:2px solid gray;
50 margin:10px;
51 background: #ffffcc;
52 }
53 textarea { overflow-x: auto; }
54 .linenr { font-size:smaller }
55 .code-highlighted {background-color:#ffff00;}
56 .org-info-js_info-navigation { border-style:none; }
57 #org-info-js_console-label { font-size:10px; font-weight:bold;
58 white-space:nowrap; }
59 .org-info-js_search-highlight {background-color:#ffff00; color:#000000;
60 font-weight:bold; }
61 /*]]>*/-->
62 </style>
63 <link rel="stylesheet" type="text/css" href="../css/sloman.css" />
64 <script type="text/javascript">
65 <!--/*--><![CDATA[/*><!--*/
66 function CodeHighlightOn(elem, id)
67 {
68 var target = document.getElementById(id);
69 if(null != target) {
70 elem.cacheClassElem = elem.className;
71 elem.cacheClassTarget = target.className;
72 target.className = "code-highlighted";
73 elem.className = "code-highlighted";
74 }
75 }
76 function CodeHighlightOff(elem, id)
77 {
78 var target = document.getElementById(id);
79 if(elem.cacheClassElem)
80 elem.className = elem.cacheClassElem;
81 if(elem.cacheClassTarget)
82 target.className = elem.cacheClassTarget;
83 }
84 /*]]>*///-->
85 </script>
87 </head>
88 <body>
91 <div id="content">
92 <h1 class="title">Transcript of Aaron Sloman - Artificial Intelligence - Psychology - Oxford Interview</h1>
95 <blockquote>
112 <p>
113 <b>Editor's note:</b> This is a working draft transcript which I made of
114 <a href="http://www.youtube.com/watch?feature=player_detailpage&amp;v=iuH8dC7Snno">this nice interview</a> of Aaron Sloman. Having just finished one
115 iteration of transcription, I still need to go in and clean up the
116 formatting and fix the parts that I misheard, so you can expect the
117 text to improve significantly in the near future.
118 </p>
119 <p>
120 To the extent that this is my work, you have my permission to make
121 copies of this transcript for your own purposes. Also, feel free to
122 e-mail me with comments or corrections.
123 </p>
124 <p>
125 You can send mail to <code>transcript@aurellem.org</code>.
126 </p>
127 <p>
128 Cheers,
129 </p>
130 <p>
131 &mdash;Dylan
132 </p>
133 </blockquote>
139 <div id="table-of-contents">
140 <h2>Table of Contents</h2>
141 <div id="text-table-of-contents">
142 <ul>
143 <li><a href="#sec-1">1 Introduction</a>
144 <ul>
145 <li><a href="#sec-1-1">1.1 Aaron Sloman evolves into a philosopher of AI</a></li>
146 <li><a href="#sec-1-2">1.2 AI is hard, in part because there are tempting non-problems.</a></li>
147 </ul>
148 </li>
149 <li><a href="#sec-2">2 What problems of intelligence did evolution solve?</a>
150 <ul>
151 <li><a href="#sec-2-1">2.1 Intelligence consists of solutions to many evolutionary problems; no single development (e.g. communication) was key to human-level intelligence.</a></li>
152 <li><a href="#sec-2-2">2.2 Speculation about how communication might have evolved from internal languages.</a></li>
153 </ul>
154 </li>
155 <li><a href="#sec-3">3 How do language and internal states relate to AI?</a>
156 <ul>
157 <li><a href="#sec-3-1">3.1 In AI, false assumptions can lead investigators astray.</a></li>
158 <li><a href="#sec-3-2">3.2 Example: Vision is not just about finding surfaces, but about finding affordances.</a></li>
159 <li><a href="#sec-3-3">3.3 Online and offline intelligence</a></li>
160 <li><a href="#sec-3-4">3.4 Example: Even toddlers use sophisticated geometric knowledge</a></li>
161 </ul>
162 </li>
163 <li><a href="#sec-4">4 Animal intelligence</a>
164 <ul>
165 <li><a href="#sec-4-1">4.1 The priority is <i>cataloguing</i> what competences have evolved, not ranking them.</a></li>
166 <li><a href="#sec-4-2">4.2 AI can be used to test philosophical theories</a></li>
167 </ul>
168 </li>
169 <li><a href="#sec-5">5 Is abstract general intelligence feasible?</a>
170 <ul>
171 <li><a href="#sec-5-1">5.1 It's misleading to compare the brain and its neurons to a computer made of transistors</a></li>
172 <li><a href="#sec-5-2">5.2 For example, brains may rely heavily on chemical information processing</a></li>
173 <li><a href="#sec-5-3">5.3 Brain algorithms may simply be optimized for certain kinds of information processing other than bit manipulations</a></li>
174 <li><a href="#sec-5-4">5.4 Example: find the shortest path by dangling strings</a></li>
175 <li><a href="#sec-5-5">5.5 In sum, we know surprisingly little about the kinds of problems that evolution solved, and the manner in which they were solved.</a></li>
176 </ul>
177 </li>
178 <li><a href="#sec-6">6 A singularity of cognitive catch-up</a>
179 <ul>
180 <li><a href="#sec-6-1">6.1 What if it will take a lifetime to learn enough to make something new?</a></li>
181 </ul>
182 </li>
183 <li><a href="#sec-7">7 Spatial reasoning: a difficult problem</a>
184 <ul>
185 <li><a href="#sec-7-1">7.1 Example: Spatial proof that the angles of any triangle add up to a half-circle</a></li>
186 <li><a href="#sec-7-2">7.2 Geometric results are fundamentally different than experimental results in chemistry or physics.</a></li>
187 </ul>
188 </li>
189 <li><a href="#sec-8">8 Is near-term artificial general intelligence likely?</a>
190 <ul>
191 <li><a href="#sec-8-1">8.1 Two interpretations: a single mechanism for all problems, or many mechanisms unified in one program.</a></li>
192 </ul>
193 </li>
194 <li><a href="#sec-9">9 Abstract General Intelligence impacts</a></li>
195 </ul>
196 </div>
197 </div>
199 <div id="outline-container-1" class="outline-2">
200 <h2 id="sec-1"><span class="section-number-2">1</span> Introduction</h2>
201 <div class="outline-text-2" id="text-1">
205 </div>
207 <div id="outline-container-1-1" class="outline-3">
208 <h3 id="sec-1-1"><span class="section-number-3">1.1</span> Aaron Sloman evolves into a philosopher of AI</h3>
209 <div class="outline-text-3" id="text-1-1">
211 <p>[0:09] My name is Aaron Sloman. My first degree many years ago at
212 Cape Town University was in Physics and Mathematics, and I intended to
213 go and be a mathematician. I came to Oxford and encountered
214 philosophers &mdash; I had started reading philosophy and discussing
215 philosophy before then, and then I found that there were philosophers
216 who said things about mathematics that I thought were wrong, so
217 I gradually got more and more involved in [philosophy] discussions and
218 switched to doing a philosophy DPhil. Then I became a philosophy
219 lecturer and about six years later, I was introduced to artificial
220 intelligence when I was a lecturer at Sussex University in philosophy
221 and I very soon became convinced that the best way to make progress in
222 both areas of philosophy (including philosophy of mathematics which I
223 felt I hadn't dealt with adequately in my DPhil) about the philosophy
224 of mathematics, philosophy of mind, philosophy of language and all
225 those things&mdash;the best way was to try to design and test working
226 fragments of mind and maybe eventually put them all together but
227 initially just working fragments that would do various things.
228 </p>
229 <p>
230 [1:12] And I learned to program and ~ with various other people
231 including ~Margaret Boden whom you've interviewed, developed&mdash;helped
232 develop an undergraduate degree in AI and other things and also began
233 to do research in AI and so on which I thought of as doing philosophy,
234 primarily.
235 </p>
236 <p>
237 [1:29] And then I later moved to the University of Birmingham and I
238 was there &mdash; I came in 1991 &mdash; and I've been retired for a while but
239 I'm not interested in golf or gardening so I just go on doing full
240 time research and my department is happy to keep me on without paying
241 me and provide space and resources and I come, meeting bright people
242 at conferences and try to learn and make progress if I can.
243 </p>
244 </div>
246 </div>
248 <div id="outline-container-1-2" class="outline-3">
249 <h3 id="sec-1-2"><span class="section-number-3">1.2</span> AI is hard, in part because there are tempting non-problems.</h3>
250 <div class="outline-text-3" id="text-1-2">
253 <p>
254 One of the things I learnt and understood more and more over the many
255 years &mdash; forty years or so since I first encountered AI &mdash; is how
256 hard the problems are, and in part that's because it's very often
257 tempting to <i>think</i> the problem is something different from what it
258 actually is, and then people design solutions to the non-problems, and
259 I think of most of my work now as just helping to clarify what the
260 problems are: what is it that we're trying to explain &mdash; and maybe
261 this is leading into what you wanted to talk about:
262 </p>
263 <p>
264 I now think that one of the ways of getting a deep understanding of
265 that is to find out what were the problems that biological evolution
266 solved, because we are a product of <i>many</i> solutions to <i>many</i>
267 problems, and if we just try to go in and work out what the whole
268 system is doing, we may get it all wrong, or badly wrong.
269 </p>
271 </div>
272 </div>
274 </div>
276 <div id="outline-container-2" class="outline-2">
277 <h2 id="sec-2"><span class="section-number-2">2</span> What problems of intelligence did evolution solve?</h2>
278 <div class="outline-text-2" id="text-2">
282 </div>
284 <div id="outline-container-2-1" class="outline-3">
285 <h3 id="sec-2-1"><span class="section-number-3">2.1</span> Intelligence consists of solutions to many evolutionary problems; no single development (e.g. communication) was key to human-level intelligence.</h3>
286 <div class="outline-text-3" id="text-2-1">
289 <p>
290 [2:57] Well, first I would challenge that we are the dominant
291 species. I know it looks like that but actually if you count biomass,
292 if you count number of species, if you count number of individuals,
293 the dominant species are microbes &mdash; maybe not one of them but anyway
294 they're the ones who dominate in that sense, and furthermore we are
295 mostly &mdash; we are largely composed of microbes, without which we
296 wouldn't survive.
297 </p>
299 <p>
300 [3:27] But there are things that make humans (you could say) best at
301 those things, or worst at those things, but it's a combination. And I
302 think it was a collection of developments of which there isn't any
303 single one. [] there might be, some people say, human language which
304 changed everything. By our human language, they mean human
305 communication in words, but I think that was a later development from
306 what must have started as the use of <i>internal</i> forms of
307 representation &mdash; which are there in nest-building birds, in
308 pre-verbal children, in hunting mammals &mdash; because you can't take in
309 information about a complex structured environment in which things can
310 change and you may have to be able to work out what's possible and
311 what isn't possible, without having some way of representing the
312 components of the environment, their relationships, the kinds of
313 things they can and can't do, the kinds of things you might or might
314 not be able to do &mdash; and <i>that</i> kind of capability needs internal
315 languages, and I and colleagues [at Birmingham] have been referring to
316 them as generalized languages because some people object to
317 referring&hellip;to using language to refer to something that isn't used
318 for communication. But from that viewpoint, not only humans but many
319 other animals developed abilities to do things to their environment to
320 make them more friendly to themselves, which depended on being able to
321 represent possible futures, possible actions, and work out what's the
322 best thing to do.
323 </p>
324 <p>
325 [5:13] And nest-building in corvids for instance&mdash;crows, magpies,
326 [hawks], and so on &mdash; are way beyond what current robots can do, and
327 in fact I think most humans would be challenged if they had to go and
328 find a collection of twigs, one at a time, maybe bring them with just
329 one hand &mdash; or with your mouth &mdash; and assemble them into a
330 structure that, you know, is shaped like a nest, and is fairly rigid,
331 and you could trust your eggs in it when the wind blows. But they're
332 doing it, and so &hellip; they're not our evolutionary ancestors, but
333 they're an indication &mdash; and that example is an indication &mdash; of
334 what must have evolved in order to provide control over the
335 environment in <i>that</i> species.
336 </p>
337 </div>
339 </div>
341 <div id="outline-container-2-2" class="outline-3">
342 <h3 id="sec-2-2"><span class="section-number-3">2.2</span> Speculation about how communication might have evolved from internal languages.</h3>
343 <div class="outline-text-3" id="text-2-2">
345 <p>[5:56] And I think hunting mammals, fruit-picking mammals, mammals
346 that can rearrange parts of the environment, provide shelters, needed
347 to have &hellip;. also needed to have ways of representing possible
348 futures, not just what's there in the environment. I think at a later
349 stage, that developed into a form of communication, or rather the
350 <i>internal</i> forms of representation became usable as a basis for
351 providing [context] to be communicated. And that happened, I think,
352 initially through performing actions that expressed intentions, and
353 probably led to situations where an action (for instance, moving some
354 large object) was performed more easily, or more successfully, or more
355 accurately if it was done collaboratively. So someone who had worked
356 out what to do might start doing it, and then a conspecific might be
357 able to work out what the intention is, because that person has the
358 <i>same</i> forms of representation and can build theories about what's
359 going on, and might then be able to help.
360 </p>
361 <p>
362 [7:11] You can imagine that if that started happening more (a lot of
363 collaboration based on inferred intentions and plans) then sometimes
364 the inferences might be obscure and difficult, so the <i>actions</i> might
365 be enhanced to provide signals as to what the intention is, and what
366 the best way is to help, and so on.
367 </p>
368 <p>
369 [7:35] So, this is all handwaving and wild speculation, but I think
370 it's consistent with a large collection of facts which one can look at
371 &mdash; and find if one looks for them, but one won't know if [some]one
372 doesn't look for them &mdash; about the way children, for instance, who
373 can't yet talk, communicate, and the things they'll do, like going to
374 the mother and turning the face to point in the direction where the
375 child wants it to look and so on; that's an extreme version of action
376 indicating intention.
377 </p>
378 <p>
379 [8:03] Anyway. That's a very long roundabout answer to one conjecture
380 that the use of communicative language is what gave humans their
381 unique power to create and destroy and whatever, and I'm saying that
382 if by that you mean <i>communicative</i> language, then I'm saying there
383 was something before that which was <i>non</i>-communicative language, and I
384 suspect that non-communicative language continues to play a deep role
385 in <i>all</i> human perception &mdash; in mathematical and scientific reasoning, in
386 problem solving &mdash; and we don't understand very much about it.
387 </p>
388 <p>
389 [8:48]
390 I'm sure there's a lot more to be said about the development of
391 different kinds of senses, the development of brain structures and
392 mechanisms is above all that, but perhaps I've droned on long enough
393 on that question.
394 </p>
396 </div>
397 </div>
399 </div>
401 <div id="outline-container-3" class="outline-2">
402 <h2 id="sec-3"><span class="section-number-2">3</span> How do language and internal states relate to AI?</h2>
403 <div class="outline-text-2" id="text-3">
406 <p>
407 [9:09] Well, I think most of the human and animal capabilities that
408 I've been referring to are not yet to be found in current robots or
409 [computing] systems, and I think there are two reasons for that: one
410 is that it's intrinsically very difficult; I think that in particular
411 it may turn out that the forms of information processing that one can
412 implement on digital computers as we currently know them may not be as
413 well suited to performing some of these tasks as other kinds of
414 computing about which we don't know so much &mdash; for example, I think
415 there may be important special features about <i>chemical</i> computers
416 which we might [talk about in a little bit? find out about].
417 </p>
419 </div>
421 <div id="outline-container-3-1" class="outline-3">
422 <h3 id="sec-3-1"><span class="section-number-3">3.1</span> In AI, false assumptions can lead investigators astray.</h3>
423 <div class="outline-text-3" id="text-3-1">
425 <p>[9:57] So, one of the problems then is that the tasks are hard &hellip; but
426 there's a deeper problem as to why AI hasn't made a great deal of
427 progress on these problems that I'm talking about, and that is that
428 most AI researchers assume things&mdash;and this is not just AI
429 researchers, but [also] philosophers, and psychologists, and people
430 studying animal behavior&mdash;make assumptions about what it is that
431 animals or humans do, for instance make assumptions about what vision
432 is for, or assumptions about what motivation is and how motivation
433 works, or assumptions about how learning works, and then they try &mdash;
434 the AI people try &mdash; to model [or] build systems that perform those
435 assumed functions. So if you get the <i>functions</i> wrong, then even if
436 you implement some of the functions that you're trying to implement,
437 they won't necessarily perform the tasks that the initial objective
438 was to imitate, for instance the tasks that humans, and nest-building
439 birds, and monkeys and so on can perform.
440 </p>
441 </div>
443 </div>
445 <div id="outline-container-3-2" class="outline-3">
446 <h3 id="sec-3-2"><span class="section-number-3">3.2</span> Example: Vision is not just about finding surfaces, but about finding affordances.</h3>
447 <div class="outline-text-3" id="text-3-2">
449 <p>[11:09] I'll give you a simple example &mdash; well, maybe not so simple,
450 but &mdash; It's often assumed that the function of vision in humans (and
451 in other animals with good eyesight and so on) is to take in optical
452 information that hits the retina, and form into the (maybe changing
453 &mdash; or, really, in our case definitely changing) patterns of
454 illumination where there are sensory receptors that detect those
455 patterns, and then somehow from that information (plus maybe other
456 information gained from head movement or from comparisons between two
457 eyes) to work out what there was in the environment that produced
458 those patterns, and that is often taken to mean &ldquo;where were the
459 surfaces off which the light bounced before it came to me&rdquo;. So
460 you essentially think of the task of the visual system as being to
461 reverse the image formation process: so the 3D structure's there, the
462 lens causes the image to form in the retina, and then the brain goes
463 back to a model of that 3D structure there. That's a very plausible
464 theory about vision, and it may be that that's a <i>subset</i> of what
465 human vision does, but I think James Gibson pointed out that that kind
466 of thing is not necessarily going to be very useful for an organism,
467 and it's very unlikely that that's the main function of perception in
468 general, namely to produce some physical description of what's out
469 there.
470 </p>
471 <p>
472 [12:37] What does an animal <i>need</i>? It needs to know what it can do,
473 what it can't do, what the consequences of its actions will be
474 &hellip;. so, he introduced the word <i>affordance</i>, so from his point of
475 view, the function of vision, perception, are to inform the organism
476 of what the <i>affordances</i> are for action, where that would mean what
477 the animal, <i>given</i> its morphology (what it can do with its mouth, its
478 limbs, and so on, and the ways it can move) what it can do, what its
479 needs are, what the obstacles are, and how the environment supports or
480 obstructs those possible actions.
481 </p>
482 <p>
483 [13:15] And that's a very different collection of information
484 structures that you need from, say, &ldquo;where are all the
485 surfaces?&rdquo;: if you've got all the surfaces, <i>deriving</i> the
486 affordances would still be a major task. So, if you think of the
487 perceptual system as primarily (for biological organisms) being
488 devices that provide information about affordances and so on, then the
489 tasks look very different. And most of the people working, doing
490 research on computer vision in robots, I think haven't taken all that
491 on board, so they're trying to get machines to do things which, even
492 if they were successful, would not make the robots very intelligent
493 (and in fact, even the ones they're trying to do are not really easy
494 to do, and they don't succeed very well&mdash; although, there's progress;
495 I shouldn't disparage it too much.)
496 </p>
497 </div>
499 </div>
501 <div id="outline-container-3-3" class="outline-3">
502 <h3 id="sec-3-3"><span class="section-number-3">3.3</span> Online and offline intelligence</h3>
503 <div class="outline-text-3" id="text-3-3">
506 <p>
507 [14:10] It gets more complex as animals get more sophisticated. So, I
508 like to make a distinction between online intelligence and offline
509 intelligence. So, for example, if I want to pick something up &mdash; like
510 this leaf &lt;he plucks a leaf from the table&gt; &mdash; I was able to select
511 it from all the others in there, and while moving my hand towards it,
512 I was able to guide its trajectory, making sure it was going roughly
513 in the right direction &mdash; as opposed to going out there, which
514 wouldn't have been able to pick it up &mdash; and these two fingers ended
515 up with a portion of the leaf between them, so that I was able to tell
516 when I'm ready to do that &lt;he clamps the leaf between two fingers&gt;
517 and at that point, I clamped my fingers and then I could pick up the
518 leaf.
519 </p>
520 <p>
521 [14:54] Whereas, &mdash; and that's an example of online intelligence:
522 during the performance of an action (both from the stage where it's
523 initiated, and during the intermediate stages, and where it's
524 completed) I'm taking in information relevant to controlling all those
525 stages, and that relevant information keeps changing. That means I
526 need stores of transient information which gets discarded almost
527 immediately and replaced or something. That's online intelligence. And
528 there are many forms; that's just one example, and Gibson discussed
529 quite a lot of examples which I won't try to replicate now.
530 </p>
531 <p>
532 [15:30] But in offline intelligence, you're not necessarily actually
533 <i>performing</i> the actions when you're using your intelligence; you're
534 thinking about <i>possible</i> actions. So, for instance, I could think
535 about how fast or by what route I would get back to the lecture room
536 if I wanted to [get to the next talk] or something. And I know where
537 the door is, roughly speaking, and I know roughly which route I would
538 take: when I go out, I should go to the left or to the right, because
539 I've stored information about where the spaces are, where the
540 buildings are, where the door was that we came out &mdash; but in using
541 that information to think about that route, I'm not actually
542 performing the action. I'm not even <i>simulating</i> it in detail: the
543 precise details of direction and speed and when to clamp my fingers,
544 or when to contract my leg muscles when walking, are all irrelevant to
545 thinking about a good route, or thinking about the potential things
546 that might happen on the way. Or what would be a good place to meet
547 someone who I think [for an acquaintance in particular] &mdash; [barber]
548 or something &mdash; I don't necessarily have to work out exactly <i>where</i>
549 the person's going to stand, or from what angle I would recognize
550 them, and so on.
551 </p>
552 <p>
553 [16:46] So, offline intelligence &mdash; which I think became not just a
554 human competence; I think there are other animals that have aspects of
555 it: Squirrels are very impressive as you watch them. Gray squirrels at
556 any rate, as you watch them defeating squirrel-proof birdfeeders, seem
557 to have a lot of that [offline intelligence], as well as the online
558 intelligence when they eventually perform the action they've worked
559 out [] that will get them to the nuts.
560 </p>
561 <p>
562 [17:16] And I think that what happened during our evolution is that
563 mechanisms for acquiring and processing and storing and manipulating
564 information that is more and more remote from the performance of
565 actions developed. An example is taking in information about where
566 locations are that you might need to go to infrequently: There's a
567 store of a particular type of material that's good for building on
568 roofs of houses or something out around there in some
569 direction. There's a good place to get water somewhere in another
570 direction. There are people that you'd like to go and visit in
571 another place, and so on.
572 </p>
573 <p>
574 [17:59] So taking in information about an extended environment and
575 building it into a structure that you can make use of for different
576 purposes is another example of offline intelligence. And when we do
577 that, we sometimes use only our brains, but in modern times, we also
578 learned how to make maps on paper and walls and so on. And it's not
579 clear whether the stuff inside our heads has the same structures as
580 the maps we make on paper: the maps on paper have a different
581 function; they may be used to communicate with others, or meant for
582 <i>looking</i> at, whereas the stuff in your head you don't <i>look</i> at; you
583 use it in some other way.
584 </p>
585 <p>
586 [18:46] So, what I'm getting at is that there's a great deal of human
587 intelligence (and animal intelligence) which is involved in what's
588 possible in the future, what exists in distant places, what might have
589 happened in the past (sometimes you need to know why something is as
590 it is, because that might be relevant to what you should or shouldn't
591 do in the future, and so on), and I think there was something about
592 human evolution that extended that offline intelligence way beyond
593 that of animals. And I don't think it was <i>just</i> human language, (but
594 human language had something to do with it) but I think there was
595 something else that came earlier than language which involves the
596 ability to use your offline intelligence to discover something that
597 has a rich mathematical structure.
598 </p>
599 </div>
601 </div>
603 <div id="outline-container-3-4" class="outline-3">
604 <h3 id="sec-3-4"><a name="example-gap" id="example-gap"></a><span class="section-number-3">3.4</span> Example: Even toddlers use sophisticated geometric knowledge</h3>
605 <div class="outline-text-3" id="text-3-4">
607 <p>[19:44] I'll give you a simple example: if you look through a gap, you
608 can see something that's on the other side of the gap. Now, you
609 <i>might</i> see what you want to see, or you might see only part of it. If
610 you want to see more of it, which way would you move? Well, you could
611 either move <i>sideways</i>, and see through the gap&mdash;and see it roughly
612 the same amount but a different part of it [if it's a ????], or you
613 could move <i>towards</i> the gap and then your view will widen as you
614 approach the gap. Now, there's a bit of mathematics in there, insofar
615 as you are implicitly assuming that information travels in straight
616 lines, and as you go closer to a gap, the straight lines that you can
617 draw from where you are through the gap widen as you approach that
618 gap. Now, there's a kind of theorem of Euclidean geometry in there
619 which I'm not going to try to state very precisely (and as far as I
620 know, wasn't stated explicitly in Euclidean geometry) but it's
621 something every toddler &mdash; human toddler &mdash; learns. (Maybe other
622 animals also know it, I don't know.) But there are many more things,
623 actions to perform, to get you more information about things, actions
624 to perform to conceal information from other people, actions that will
625 enable you to operate, to act on a rigid object in one place in order
626 to produce an effect on another place. So, there's a lot of stuff that
627 involves lines and rotations and angles and speeds and so on that I
628 think humans (maybe, to a lesser extent, other animals) develop the
629 ability to think about in a generic way. That means that you could
630 take out the generalizations from the particular contexts and then
631 re-use them in new contexts in ways that I think are not yet
632 represented at all in AI and in theories of human learning in any []
633 way &mdash; although some people are trying to study learning of mathematics.
634 </p>
635 </div>
636 </div>
638 </div>
640 <div id="outline-container-4" class="outline-2">
641 <h2 id="sec-4"><span class="section-number-2">4</span> Animal intelligence</h2>
642 <div class="outline-text-2" id="text-4">
646 </div>
648 <div id="outline-container-4-1" class="outline-3">
649 <h3 id="sec-4-1"><span class="section-number-3">4.1</span> The priority is <i>cataloguing</i> what competences have evolved, not ranking them.</h3>
650 <div class="outline-text-3" id="text-4-1">
<p>[22:03] I wasn't going to challenge the claim that humans can do more
sophisticated forms of [tracking], just to mention that there are some
things that other animals can do which are in some ways comparable to,
and in some ways superior to, [things] that humans can do. In particular,
there are species of birds and also, I think, some rodents&mdash;
squirrels, or something&mdash;I don't know enough about the variety&mdash;
that can hide nuts and remember where they've hidden them, and go back
to them. And there have been tests which show that some birds are able
to hide tens of nuts&mdash;you know, [eighteen] or something&mdash;and to
remember which ones have been taken, which ones haven't, and so
on. And I suspect most humans can't do that. I wouldn't want to say
categorically that we couldn't, because humans are very
[varied], and also [a few] people can develop particular competences
through training. But it's certainly not something I can do.
</p>
</div>
</div>
<div id="outline-container-4-2" class="outline-3">
<h3 id="sec-4-2"><span class="section-number-3">4.2</span> AI can be used to test philosophical theories</h3>
<div class="outline-text-3" id="text-4-2">
<p>[23:01] But I also would like to say that I am not myself particularly
interested in trying to align animal intelligences according to any
kind of scale of superiority; I'm just trying to understand what it
was that biological evolution produced, and how it works. And I'm
interested in AI <i>mainly</i> because I think that when one comes up with
theories about how these things work, one needs to have some way of
testing the theory. And AI provides ways of implementing and testing
theories that were not previously available: Immanuel Kant was trying
to come up with theories about how minds work, but he didn't have any
kind of mechanism that he could build to test his theory about the
nature of mathematical knowledge, for instance, or how concepts are
developed from babyhood onward. Whereas now, if we do develop a
theory, we have a criterion of adequacy, namely, it should be precise
enough and rich enough and detailed enough to enable a model to be
built. And then we can see if it works.
</p>
<p>
[24:07] If it works, it doesn't mean we've proved that the theory is
correct; it just shows it's a candidate. And if it doesn't work, then
it's not a candidate as it stands; it would need to be modified in
some way.
</p>
</div>
</div>
</div>
<div id="outline-container-5" class="outline-2">
<h2 id="sec-5"><span class="section-number-2">5</span> Is abstract general intelligence feasible?</h2>
<div class="outline-text-2" id="text-5">
</div>
<div id="outline-container-5-1" class="outline-3">
<h3 id="sec-5-1"><span class="section-number-3">5.1</span> It's misleading to compare the brain and its neurons to a computer made of transistors</h3>
<div class="outline-text-3" id="text-5-1">
<p>[24:27] I think there's a lot of optimism based on false clues. For
example, one of the false clues is to count the number of
neurons in the brain, then count the number of transistors
you can fit into a computer or something, and then compare them. But it
might turn out that the way synapses work matters more: the study of
synapses leads some people to say that a typical synapse [] in the
human brain has computational power comparable to the Internet of a
few years ago, because of the number of different molecules that are
doing things, the variety of types of things that are being done in
those molecular interactions, and the speed at which they happen&mdash;if
you somehow count up the number of operations per second or something,
then you get these comparable figures.
</p>
</div>
</div>
<div id="outline-container-5-2" class="outline-3">
<h3 id="sec-5-2"><span class="section-number-3">5.2</span> For example, brains may rely heavily on chemical information processing</h3>
<div class="outline-text-3" id="text-5-2">
<p>Now even if the details aren't right, there may just be a lot of
information processing&hellip;going on in brains at the <i>molecular</i>
level, not the neural level. If that's the case, the processing
units will be orders of magnitude larger in number than the number of
neurons. And it's certainly the case that all the original biological
forms of information processing were chemical; there weren't brains
around, and there still aren't in most microbes. And even when humans grow
their brains, the process of starting from a fertilized egg and
producing this rich and complex structure is, for much of the time,
under the control of chemical computations, chemical information
processing&mdash;combined, of course, with physical sorts of materials and
energy and so on as well.
</p>
<p>
[26:25] So it would seem very strange if all that capability were
thrown away once you've got a brain and all the information
processing, the [challenges that were handled in making a brain],
&hellip; This is handwaving on my part; I'm just saying that we <i>might</i>
learn that what brains do is not what we think they do, and that the
problems of replicating them are not what we think they are, solely in
terms of numerical estimates of time scales, the number of components,
and so on.
</p>
</div>
</div>
<div id="outline-container-5-3" class="outline-3">
<h3 id="sec-5-3"><span class="section-number-3">5.3</span> Brain algorithms may simply be optimized for certain kinds of information processing other than bit manipulations</h3>
<div class="outline-text-3" id="text-5-3">
<p>[26:56] But apart from that, the other basis of skepticism concerns
how well we understand what the problems are. I think there are many
people who try to formalize the problems of designing an intelligent
system in terms of streams of information thought of as bit streams or
collections of bit streams, and they think of the problems of
intelligence as being the construction or detection of patterns in
those&mdash;and perhaps not just the detection of patterns, but the detection
of patterns that are useable for sending <i>out</i> streams to control
motors and so on in order to []. And that way of conceptualizing the
problem may lead on the one hand to oversimplification, so that the
things that <i>would</i> be achieved, if those goals were achieved, may be
much simpler than, and in some ways inadequate for, the replication of
human intelligence, or the matching of human intelligence&mdash;or, for that
matter, squirrel intelligence. But in another way, it may also make
the problem harder: it may be that some of the kinds of things that
biological evolution has achieved can't be done that way. And one of
the ways that might turn out to be the case is not because it's
impossible in principle to do some of the information processing on
artificial computers based on transistors and other bit-manipulating
[]&mdash;but it may just be that the computational complexity of solving
problems, or of finding solutions to complex problems, is
much greater, and therefore you might need a much larger universe than
we have available in order to do things.
</p>
</div>
</div>
<div id="outline-container-5-4" class="outline-3">
<h3 id="sec-5-4"><span class="section-number-3">5.4</span> Example: find the shortest path by dangling strings</h3>
<div class="outline-text-3" id="text-5-4">
<p>[28:55] Then if the underlying mechanisms were different&mdash;the
information processing mechanisms&mdash;they might be better tailored to
particular sorts of computation. There's a [] example, which is
finding the shortest route if you've got a collection of roads, and
they may be curved roads, and lots of tangled routes from A to B to C,
and so on. If you start at A and you want to get to Z&mdash;a place
somewhere on that map&mdash;the process of finding the shortest route
will involve searching through all these different possibilities and
rejecting some that are longer than others and so on. But suppose you
make a model of that map out of string, where the strings are all laid
out on the map and so have the lengths of the routes. If you then
hold the two knots in the string&mdash;it's a network of string&mdash;which
correspond to the start point and end point, and <i>pull</i>, then the
bits of string that you're left with in a straight line will give you
the shortest route, and that process of pulling just gets you the
solution very rapidly in a parallel computation, where all the others
just hang by the wayside, so to speak.
</p>
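<p>[Editorial note: the string model has a discrete digital analogue, sketched below for illustration; this is not Sloman's construction, and the toy map and edge lengths are invented. "Pulling the network taut" computes what Dijkstra's algorithm computes, but note the contrast that motivates the example: the physical pull settles in parallel, while the program still has to search node by node.]</p>

```python
import heapq

def pull_taut(strings, start, goal):
    """Discrete analogue of pulling a string network taut between two
    knots: Dijkstra's algorithm over the string (edge) lengths."""
    best = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        dist, node = heapq.heappop(queue)
        if node == goal:
            return dist                      # the taut-string length
        if dist > best.get(node, float("inf")):
            continue                         # stale queue entry
        for neighbour, length in strings.get(node, []):
            nd = dist + length
            if nd < best.get(neighbour, float("inf")):
                best[neighbour] = nd
                heapq.heappush(queue, (nd, neighbour))
    return float("inf")                      # no string connects them

# A toy 'map' (the lengths are made up for illustration).
strings = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("Z", 7.0)],
    "C": [("Z", 2.0)],
}
assert pull_taut(strings, "A", "Z") == 5.0   # A -> B -> C -> Z
```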
</div>
</div>
<div id="outline-container-5-5" class="outline-3">
<h3 id="sec-5-5"><span class="section-number-3">5.5</span> In sum, we know surprisingly little about the kinds of problems that evolution solved, and the manner in which they were solved.</h3>
<div class="outline-text-3" id="text-5-5">
<p>[30:15] Now, I'm not saying brains can build networks of string and
pull them or anything like that; that's just an illustration of how, if
you have the right representation, correctly implemented&mdash;or suitably
implemented&mdash;for a problem, then you can avoid very combinatorially
complex searches, which may grow exponentially with the number
of components in your map. Whereas with this thing, the time it takes
won't depend on how many strings you've [got on the map]; you just
pull, and it will depend only on the shortest route that exists in
there, even if that shortest route wasn't obvious on the original map.
</p>
<p>
[30:59] So that's a rather long-winded, roundabout way of supporting the
conjecture that there may be something about the way molecules perform
computations: they have the combination of continuous change, as
things move through space and come together and move apart and
whatever&mdash;and they also snap into states that then persist, so, [as you
learn from] quantum mechanics, you can have stable molecular
structures which are quite hard to separate, though in catalytic
processes you can separate them, or with extreme temperatures, or strong
forces; but they may nevertheless be able to move very rapidly in some
conditions in order to perform computations.
</p>
<p>
[31:49] Now there may be things about that kind of structure that
enable searching for solutions to <i>certain</i> classes of problems to be
done much more efficiently (by brains) than anything we could do with
computers. It's just an open question.
</p>
<p>
[32:04] So it <i>might</i> turn out that we need new kinds of technology
that aren't on the horizon in order to replicate the functions that
animal brains perform&mdash;or it might not. I just don't know. I'm not
claiming that there's strong evidence for that; I'm just saying that
it might turn out that way, partly because I think we know less than
many people think we know about what biological evolution achieved.
</p>
<p>
[32:28] There are some other possibilities: we may just find out that
there are shortcuts no one ever thought of, and it will all happen
much more quickly&mdash;I have an open mind; I'd be surprised, but it
could turn up. There <i>is</i> something that worries me much more than the
singularity that most people talk about, which is machines achieving
human-level intelligence and perhaps taking over [the] planet or
something. There's what I call the <i>singularity of cognitive catch-up</i>&hellip;
</p>
</div>
</div>
</div>
<div id="outline-container-6" class="outline-2">
<h2 id="sec-6"><span class="section-number-2">6</span> A singularity of cognitive catch-up</h2>
<div class="outline-text-2" id="text-6">
</div>
<div id="outline-container-6-1" class="outline-3">
<h3 id="sec-6-1"><span class="section-number-3">6.1</span> What if it will take a lifetime to learn enough to make something new?</h3>
<div class="outline-text-3" id="text-6-1">
<p>&hellip; SCC, the singularity of cognitive catch-up, which I think we're close
to, or maybe have already reached&mdash;I'll explain what I mean by
that. One of the products of biological evolution&mdash;and this is one of
the answers to your earlier questions which I didn't get on to&mdash;is
that humans have not only the ability to make discoveries that none of
their ancestors have ever made, but also to shorten the time required for
similar achievements to be reached by their offspring and their
descendants. So once we have, for instance, worked out ways of doing complex
computations, or ways of building houses, or ways of finding our way
around, our children don't need to work it out for
themselves by the same lengthy trial-and-error procedure; we can help
them get there much faster.
</p>
<p>
Okay, well, what I've been referring to as the singularity of
cognitive catch-up depends on the fact&mdash;fairly obvious, and it's
often been commented on&mdash;that in the case of humans, it's not necessary
for each generation to learn what previous generations learned <i>in the same way</i>. We can speed up learning: once something has been
learned, [it is able to] be learned by new people. And that has meant
that the social processes that support that kind of education of the
young can enormously accelerate things: what would have taken perhaps
thousands [or] millions of years for evolution to produce can happen in
a much shorter time.
</p>
<p>
[34:54] But here's the catch: in order for a new advance to happen&mdash;
for something new to be discovered that wasn't there before, like
Newtonian mechanics, or the theory of relativity, or Beethoven's music
or [style] or whatever&mdash;the individuals have to have traversed a
significant amount of what their ancestors have learned, even if they
do it much faster than their ancestors, to get to the point where they
can see the gaps, the possibilities for going further than their
ancestors, or their parents or whatever, have done.
</p>
<p>
[35:27] Now in the case of knowledge of science, mathematics,
philosophy, engineering and so on, there's been a lot of accumulated
knowledge. And humans are living a <i>bit</i> longer than they used to, but
they're still living for [whatever it is], a hundred years, or for
most people, less than that. So you can imagine that there might come
a time when, in a normal human lifespan, it's not possible for anyone
to learn enough to understand the scope and limits of what's already
been achieved in order to see the potential for going beyond it and to
build on what's already been done to make that&hellip;those future steps.
</p>
<p>
[36:10] So if we reach that stage, we will have reached the
singularity of cognitive catch-up, because the process of education
that enables individuals to learn faster than their ancestors did is
the catching-up process. And it may just be that we at some point
reach a point where catching up takes the whole of an individual's
lifetime, and after that they're dead and they can't go
beyond. And I have some evidence that there's a lot of that around,
because I see a lot of people coming up with what <i>they</i> think of as
new ideas which they've struggled to come up with, but actually they
just haven't taken in some of what was&hellip;some of what was done [] by
other people, in other places, before them. And I think that despite
the availability of search engines, which make it <i>easier</i> for people
to get the information&mdash;for instance, when I was a student, if I
wanted to find out what other people had done in the field, it was a
laborious process of going to the library, getting books, and so on
&mdash;whereas now, I can often do things in seconds that would have taken
hours. So that means that if seconds [are needed] for that kind of
work, my lifespan has been extended by a factor of ten or
something. So maybe that <i>delays</i> the singularity, but it may not
delay it enough. But that's an open question; I don't know. And it may
just be that in some areas this is more of a problem than in others. For
instance, it may be that in some kinds of engineering, we're handing
over more and more of the work to machines anyway, and they can go on
doing it. So for instance, most of the production of computers now is
done by computer-controlled machines&mdash;although some of the design
work is done by humans, a lot of the <i>detail</i> of the design is done by
computers, and they produce the next generation, which then produces
the next generation, and so on.
</p>
<p>
[37:57] I don't know if humans can go on having major advances; it'll
be kind of sad if we can't.
</p>
</div>
</div>
</div>
<div id="outline-container-7" class="outline-2">
<h2 id="sec-7"><span class="section-number-2">7</span> Spatial reasoning: a difficult problem</h2>
<div class="outline-text-2" id="text-7">
<p>
[38:15] Okay, well, there are different problems [ ] mathematics, and
they have to do with properties. So for instance, a lot of mathematics
can be expressed in terms of logical structures or algebraic
structures, and those are pretty well suited for manipulation&hellip;on
computers. And if a problem can be specified using
logical/algebraic notation, and the solution method requires creating
something in that sort of notation, then computers are pretty good,
and there are lots of mathematical tools around&mdash;there are theorem
provers and theorem checkers, and all kinds of things, which couldn't
have existed fifty, sixty years ago, and they will continue getting
better.
</p>
<p>
But there was something that I was <a href="#sec-3-4">alluding to earlier</a> when I gave the
example of how you can reason about what you will see by changing your
position in relation to a door. There, what you are doing is using your
grasp of spatial structures and of how, as one spatial relationship
changes&mdash;namely, you come closer to the door, or move sideways,
parallel to the wall, or whatever&mdash;other spatial relationships change
in parallel. So the lines from your eyes through to the
parts of the room on the other side of the doorway
spread out more as you go towards the doorway; and as you move
sideways, they don't spread out differently, but focus on different
parts of the internal&hellip;they access different parts
of the room.
</p>
<p>
Now, those are examples of ways of thinking about relationships and
changing relationships which are not the same as thinking about what
happens if I replace this symbol with that symbol, or if I substitute
this expression in that expression in a logical formula. And at the
moment, I do not believe that there is anything in AI, amongst the
mathematical reasoning community, the theorem-proving community, that
can model the processes that go on when a young child starts learning
to do Euclidean geometry and is taught things about&mdash;for instance, I
can give you a proof that the angles of any triangle add up to a
straight line, 180 degrees.
</p>
</div>
<div id="outline-container-7-1" class="outline-3">
<h3 id="sec-7-1"><span class="section-number-3">7.1</span> Example: Spatial proof that the angles of any triangle add up to a half-circle</h3>
<div class="outline-text-3" id="text-7-1">
<p>There are standard proofs, which involve starting with one triangle and
then adding a line parallel to the base. But one of my former students,
Mary Pardoe, came up with [a proof] which I will demonstrate with this &lt;he holds
up a pen&gt;&mdash;can you see it? If I have a triangle here that's got
three sides, and I put this thing on one side&mdash;let's say the
bottom&mdash;I can rotate it until it lies along the second&hellip;another
side, and then maybe move it up to the other end. Then I can rotate
it again until it lies on the third side, and move it back to the
other end. And then I'll rotate it again and it'll eventually end up
on the original side, but it will have changed the direction it's
pointing in&mdash;and it won't have crossed over itself, so it will have
gone through a half-circle. And that says that the three angles of a
triangle add up to the rotation of half a circle, which is a
beautiful kind of proof, and almost anyone can understand it. Some
mathematicians don't like it, because they say it hides some of the
assumptions, but nevertheless, as far as I'm concerned, it's an
example of a human ability to do reasoning which, once you've
understood it, you can see will apply to any triangle&mdash;it's got to
be a planar triangle&mdash;not a triangle on a globe, because then the
angles can add up to more than&hellip;you can have three <i>right</i> angles
if you have an equator&hellip;a line on the equator, and a line going up to
the north pole of the earth, and then you have a right angle, and
then another line going down to the equator, and you have a right
angle: right angle, right angle, right angle, and they add up to more than a
straight line. But that's because the triangle isn't in the plane;
it's on a curved surface. In fact, that's one of the
definitional differences you can take between planar and
curved surfaces: how much the angles of a triangle add up to. But our
ability to <i>visualize</i> and notice the generality in that process, and
see that you're going to be able to do the same thing using triangles
that stretch in all sorts of ways, or if it's a million times as
large, or if it's drawn in
different colors or whatever&mdash;none of that's going to make any
difference to the essence of that process. And that ability to see
the commonality in a spatial structure enables you to draw some
conclusions with complete certainty&mdash;subject to the possibility that
sometimes you make mistakes, but when you make mistakes, you can
discover them, as has happened in the history of geometrical theorem
proving. Imre Lakatos had a wonderful book called <a href="http://en.wikipedia.org/wiki/Proofs_and_Refutations"><i>Proofs and Refutations</i></a>&mdash;which I won't try to summarize&mdash;but he has
examples: mistakes were made; that was because people didn't always
realize there were subtle subcases which had slightly different
properties, and they didn't take account of that. But once they're
noticed, you rectify that.
</p>
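<p>[Editorial note: Pardoe's rotation argument is a spatial insight, not a computation, but its conclusion can be checked numerically for particular planar triangles. The sketch below is an illustration added here, not part of the interview. Note that this kind of single-case numeric verification is precisely what the interview distinguishes from seeing the generality:]</p>

```python
import math
import random

def interior_angles(p, q, r):
    """Interior angles (radians) of the planar triangle pqr,
    computed from dot products at each vertex."""
    def angle(at, a, b):
        v1 = (a[0] - at[0], a[1] - at[1])
        v2 = (b[0] - at[0], b[1] - at[1])
        cos = ((v1[0] * v2[0] + v1[1] * v2[1])
               / (math.hypot(*v1) * math.hypot(*v2)))
        return math.acos(max(-1.0, min(1.0, cos)))
    return angle(p, q, r), angle(q, p, r), angle(r, p, q)

# For a sample of random (almost surely non-degenerate) triangles,
# the interior angles sum to a half-turn, i.e. pi radians.
random.seed(0)
for _ in range(100):
    p, q, r = [(random.uniform(-1, 1), random.uniform(-1, 1))
               for _ in range(3)]
    assert abs(sum(interior_angles(p, q, r)) - math.pi) < 1e-7
```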
</div>
</div>
<div id="outline-container-7-2" class="outline-3">
<h3 id="sec-7-2"><span class="section-number-3">7.2</span> Geometric results are fundamentally different from experimental results in chemistry or physics.</h3>
<div class="outline-text-3" id="text-7-2">
<p>[43:28] But it's not the same as doing experiments in chemistry and
physics, where you can't be sure it'll be the same on [] or at a high
temperature, or in a very strong magnetic field&mdash;with geometric
reasoning, in some sense you've got the full information in front of
you, even if you don't always notice an important part of it. So that
kind of reasoning (as far as I know) is not implemented anywhere in a
computer. And most people who do research on trying to model
mathematical reasoning don't pay any attention to that, because
&hellip; they just don't think about it. They start from somewhere else,
maybe because of how they were educated. I was taught Euclidean
geometry at school. Were you?
</p>
<p>
(Adam Ford: Yeah.)
</p>
<p>
Many people are not now. Instead they're taught set theory, and
logic, and arithmetic, and [algebra], and so on. And so they don't use
that bit of their brains, without which we wouldn't have built any of
the cathedrals, and all sorts of things we now depend on.
</p>
</div>
</div>
</div>
<div id="outline-container-8" class="outline-2">
<h2 id="sec-8"><span class="section-number-2">8</span> Is near-term artificial general intelligence likely?</h2>
<div class="outline-text-2" id="text-8">
</div>
<div id="outline-container-8-1" class="outline-3">
<h3 id="sec-8-1"><span class="section-number-3">8.1</span> Two interpretations: a single mechanism for all problems, or many mechanisms unified in one program.</h3>
<div class="outline-text-3" id="text-8-1">
<p>
[44:35] Well, this relates to what's meant by "general". When I
first encountered the AGI community, I thought that what they all
meant by general intelligence was <i>uniform</i> intelligence&mdash;
intelligence based on some common, simple (maybe not so simple, but)
single powerful mechanism or principle of inference. And there are
some people in the community who are trying to produce things like
that, often in connection with algorithmic information theory and
computability of information, and so on. But there's another sense of
"general", which means that a system of general intelligence can do
lots of different things, like perceive things, understand language,
move around, make things, and so on&mdash;perhaps even enjoy a joke;
that's something that's nowhere near the horizon, as far as I
know. Enjoying a joke isn't the same as being able to make laughing
noises.
</p>
<p>
Given, then, that there are these two notions of general
intelligence&mdash;one that looks for a single uniform, possibly
simple, mechanism or collection of ideas and notations and algorithms
that will deal with any problem that's solvable, and the other
that's general in the sense that it can do lots of different things
that are combined into an integrated architecture (which raises lots
of questions about how you combine these things and make them work
together)&mdash;we humans, certainly, are of the second kind: we do all
sorts of different things. And other animals also seem to be of the
second kind, perhaps not as general as humans. Now, it may turn out
that at some time in the near future, who knows&mdash;decades, a few
decades&mdash;you'll be able to get machines that are capable of solving,
in a time that will depend on the nature of the problem, any
problem that is solvable, and they will be able to do it in some sort
of tractable time&mdash;of course, there are some problems that are
solvable but would require a larger universe and a longer history
than the history of the universe, but apart from that constraint,
these machines will be able to do anything []. But that's different
from being able to do some of the kinds of things that humans can do,
like the kinds of geometrical reasoning where you look at the shape
and you abstract away from the precise angles and sizes and shapes and
so on, and realize there's something general here, as must have
happened when our ancestors first made the discoveries that were
eventually put together in Euclidean geometry.
</p>
<p>
It may be that that requires mechanisms of a kind that we don't know
anything about at the moment. Maybe brains are using molecules and
rearranging molecules in some way that supports that kind of
reasoning. I'm not saying they are&mdash;I don't know; I just don't see
any obvious way to map that kind of reasoning capability
onto what we currently do on computers. There is&mdash;and I just
mentioned this briefly beforehand&mdash;there is a kind of thing that's
sometimes thought of as a major step in that direction, namely that you can
build a machine (or a software system) that can represent some
geometrical structure, and then be told about some change that's going
to happen to it, and it can predict in great detail what'll
happen. This happens for instance in game engines, where you say
we have all these blocks on the table and I'll drop one other block,
and then [the thing] uses Newton's laws and properties of rigidity of
the parts and the elasticity, and also stuff about geometry and space
and so on, to give you a very accurate representation of what'll
happen when this brick lands on this pile of things, [it'll bounce and
go off, and so on]. And with more memory and more CPU power,
you can increase the accuracy&mdash;but that's totally different from
looking at <i>one</i> example and working out what will happen in a whole
<i>range</i> of cases at a higher level of abstraction. The game
engine does it in great detail for <i>just</i> this case, with <i>just</i> those
precise things, and it won't even know what the generalizations are
that it's using that would apply to others []. So, in that sense, [we]
may get AGI&mdash;artificial general intelligence&mdash;pretty soon, but
it'll be limited in what it can do. And the other kind of general
intelligence, which combines all sorts of different things, including
human spatial geometrical reasoning, and maybe other things, like the
ability to find things funny, and to appreciate artistic features and
other things, may need forms of pattern-mechanism, and I have an open
mind about that.
</p>
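<p>[Editorial note: the contrast drawn above can be made concrete with a toy case, added here for illustration (the scenario and numbers are invented, not from the interview). A stepwise Newtonian simulation predicts when one particular block, dropped from rest, lands, while the closed form t = sqrt(2h/g) answers the whole range of cases at once:]</p>

```python
# Game-engine style: integrate Newton's laws step by step for one
# concrete case (a block dropped from rest at height h metres).
def time_to_land(h, g=9.81, dt=1e-4):
    t, y, v = 0.0, h, 0.0
    while y > 0.0:
        v += g * dt          # constant gravitational acceleration
        y -= v * dt
        t += dt
    return t

# The abstraction: one formula covering every height at once.
t_sim = time_to_land(2.0)
t_closed = (2 * 2.0 / 9.81) ** 0.5    # t = sqrt(2h/g)
assert abs(t_sim - t_closed) < 1e-2
```

The simulation answers only this case, with these numbers; rerunning it is the only way it can say anything about a different height, which is the interview's point about detail versus generality.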
</div>
</div>
</div>
<div id="outline-container-9" class="outline-2">
<h2 id="sec-9"><span class="section-number-2">9</span> Abstract General Intelligence impacts</h2>
<div class="outline-text-2" id="text-9">
<p>
[49:53] Well, as far as the first type's concerned, it could be useful
for all kinds of applications&mdash;and there are people who worry that,
where there's a system that has that type of intelligence, it might in
some sense take over control of the planet. Well, humans often do
stupid things, and they might do something stupid that would lead to
disaster, but I think it's more likely that there would be other
things [] lead to disaster&mdash;population problems, using up all the
resources, destroying ecosystems, and whatever. But certainly it would
go on being useful to have these calculating devices. Now, as for the
second kind, I don't know&mdash;if we succeeded at putting
together all the parts that we find in humans, we might just make an
artificial human, and then we might have some of them as our friends,
and some of them we might not like, and some of them might become
teachers or whatever, composers&mdash;but that raises a question: could
they, in some sense, be superior to us, in their learning
capabilities, their understanding of human nature, or maybe their
wickedness or whatever? These are all issues on which I expect the
best science fiction writers would give better answers than anything I
could do. But I did once fantasize, [back] in 1978, that perhaps
if we achieved that kind of thing, they would be wise, and gentle,
and kind, and would realize that humans are an inferior species that, you
know, has some good features, so they'd keep us in some kind of
secluded&hellip;restrictive kind of environment, keep us away from
dangerous weapons, and so on, and find ways of cohabiting with
us. But that's just fantasy.
</p>
<p>
Adam Ford: Awesome. Yeah, there's an interesting story <i>With Folded
Hands</i> where [the computers] want to take care of us and want to
reduce suffering and end up lobotomizing everybody [but] keeping them
alive so as to reduce the suffering.
</p>
<p>
Aaron Sloman: Not all that different from <i>Brave New World</i>, where it
was done with drugs and so on, but different humans are given
different roles in that system, yeah.
</p>
<p>
There's also <i>The Time Machine</i>, H.G. Wells, where the &hellip; in the
distant future, humans have split in two: the Eloi, I think they were
called, they lived underground, they were the [] ones, and then &mdash; no,
the Morlocks lived underground; the Eloi lived on the planet; they were
pleasant and pretty but not very bright, and so on, and they were fed
on by &hellip;
</p>
<p>
Adam Ford: [] in the future.
</p>
<p>
Aaron Sloman: As I was saying, if you ask science fiction writers,
you'll probably come up with a wide variety of interesting answers.
</p>
<p>
Adam Ford: I certainly have; I've spoken to [] of Birmingham, and
Sean Williams, &hellip; who else?
</p>
<p>
Aaron Sloman: Did you ever read a story by E.M. Forster called <i>The Machine Stops</i> &mdash; very short story, it's <a href="http://archive.ncsa.illinois.edu/prajlich/forster.html">on the Internet somewhere</a>
&mdash; it's about a time when people sitting &hellip; and this was written in
about [1909] so it's about&hellip;over a hundred years ago &hellip; people are
in their rooms, they sit in front of screens, and they type things,
and they communicate with one another that way, and they don't meet;
they have debates, and they give lectures to their audiences that way,
and then there's a woman whose son says &ldquo;I'd like to see
you&rdquo; and she says &ldquo;What's the point? You've got me at
this point&rdquo; but he wants to come and talk to her &mdash; I won't
tell you how it ends, but.
</p>
<p>
Adam Ford: Reminds me of the Internet.
</p>
<p>
Aaron Sloman: Well, yes; he invented &hellip; it was just extraordinary
that he was able to do that, before most of the components that we
need for it existed.
</p>
<p>
Adam Ford: [Another person who did that] was Vernor Vinge [] <i>True Names</i>.
</p>
<p>
Aaron Sloman: When was that written?
</p>
<p>
Adam Ford: The seventies.
</p>
<p>
Aaron Sloman: Okay, well a lot of the technology was already around
then. The original bits of the internet were working in about 1973; I was
sitting &hellip; 1974, I was sitting at Sussex University trying to
use&hellip;learn LOGO, the programming language, to decide whether it was
going to be useful for teaching AI, and I was sitting [] paper
teletype, there was paper coming out, transmitting ten characters a
second from Sussex to the UCL computer lab by telegraph cable, from there
to somewhere in Norway via another cable, from there by satellite to
California, to a computer at the Xerox [] research center where they had
a computer with a LOGO system implemented on it, with someone I had
met previously in Edinburgh, Danny Bobrow, and he allowed me to have
access to this system. So there I was typing. And furthermore, it was
duplex typing, so every character I typed didn't show up on my
terminal until it had gone all the way there and echoed back, so I
would type, and the characters would come back four seconds later.
</p>
<p>
[55:26] But that was the Internet, and I think Vernor Vinge was
writing after that kind of thing had already started, but I don't
know. Anyway.
</p>
<p>
[55:41] Another&hellip;I mentioned H.G. Wells, <i>The Time Machine</i>. I
recently discovered, because <a href="http://en.wikipedia.org/wiki/David_Lodge_(author)">David Lodge</a> had written a sort of
semi-novel about him, that he had invented Wikipedia, in advance &mdash; he
had this notion of an encyclopedia that was free to everybody, and
everybody could contribute and [collaborate on it]. So, go to the
science fiction writers to find out the future &mdash; well, a range of
possible futures.
</p>
<p>
Adam Ford: Well the thing is with science fiction writers, they have
to maintain some sort of interest for their readers; after all, the
science fiction which reaches us is the stuff that publishers want to
sell, and so there's a little bit of a &hellip; a bias towards making a
plot device there, and so the dramatic sort of appeals to our
amygdala, our lizard brain; we'll sort of stay there obviously to some
extent. But I think that they do come up with sort of amazing ideas; I
think it's worth trying to make these predictions; I think that we
should spend more time on strategic forecasting, I mean take that seriously.
</p>
<p>
Aaron Sloman: Well, I'm happy to leave that to others; I just want to
try to understand these problems that bother me about how things
work. And it may be that some would say that's irresponsible if I
don't think about what the implications will be. Well, understanding
how humans work <i>might</i> enable us to make [] humans &mdash; I suspect it
won't happen in this century; I think it's going to be too difficult.
</p></div>
</div>
</div>
<div id="postamble">
<p class="date">Date: 2013-10-04 18:49:53 UTC</p>
<p class="author">Author: Dylan Holmes</p>
<p class="creator">Org version 7.7 with Emacs version 23</p>
<a href="http://validator.w3.org/check?uri=referer">Validate XHTML 1.0</a>
</div>
</body>
</html>