
Transcript of Aaron Sloman - Artificial Intelligence - Psychology - Oxford Interview


Editor's note: This is a working draft transcript which I made of this nice interview of Aaron Sloman. Having just finished one iteration of transcription, I still need to go in and clean up the formatting and fix the parts that I misheard, so you can expect the text to improve significantly in the near future.

To the extent that this is my work, you have my permission to make copies of this transcript for your own purposes. Also, feel free to e-mail me with comments or corrections.

You can send mail to transcript@aurellem.org.

Cheers,

—Dylan

Table of Contents

1 Introduction
  1.1 Aaron Sloman evolves into a philosopher of AI
  1.2 AI is hard, in part because there are tempting non-problems.
2 What problems of intelligence did evolution solve?
  2.1 Intelligence consists of solutions to many evolutionary problems; no single development (e.g. communication) was key to human-level intelligence.
  2.2 Speculation about how communication might have evolved from internal languages.
3 How do language and internal states relate to AI?
  3.1 In AI, false assumptions can lead investigators astray.
  3.2 Example: Vision is not just about finding surfaces, but about finding affordances.
  3.3 Online and offline intelligence
  3.4 Example: Even toddlers use sophisticated geometric knowledge
4 Animal intelligence
  4.1 The priority is cataloguing what competences have evolved, not ranking them.
  4.2 AI can be used to test philosophical theories
5 Is abstract general intelligence feasible?
  5.1 It's misleading to compare the brain and its neurons to a computer made of transistors
  5.2 For example, brains may rely heavily on chemical information processing
  5.3 Brain algorithms may simply be optimized for certain kinds of information processing other than bit manipulations
  5.4 Example: find the shortest path by dangling strings
  5.5 In sum, we know surprisingly little about the kinds of problems that evolution solved, and the manner in which they were solved.
6 A singularity of cognitive catch-up
  6.1 What if it will take a lifetime to learn enough to make something new?
7 Spatial reasoning: a difficult problem
  7.1 Example: Spatial proof that the angles of any triangle add up to a half-circle
  7.2 Geometric results are fundamentally different than experimental results in chemistry or physics.
8 Is near-term artificial general intelligence likely?
  8.1 Two interpretations: a single mechanism for all problems, or many mechanisms unified in one program.
9 Abstract General Intelligence impacts

1 Introduction


1.1 Aaron Sloman evolves into a philosopher of AI


[0:09] My name is Aaron Sloman. My first degree many years ago at Cape Town University was in Physics and Mathematics, and I intended to go and be a mathematician. I came to Oxford and encountered philosophers — I had started reading philosophy and discussing philosophy before then, and then I found that there were philosophers who said things about mathematics that I thought were wrong, so I gradually got more and more involved in [philosophy] discussions and switched to doing a philosophy DPhil. Then I became a philosophy lecturer, and about six years later I was introduced to artificial intelligence when I was a lecturer at Sussex University in philosophy, and I very soon became convinced that the best way to make progress in those areas of philosophy (including the philosophy of mathematics, which I felt I hadn't dealt with adequately in my DPhil, the philosophy of mind, the philosophy of language and all those things) — the best way was to try to design and test working fragments of mind, and maybe eventually put them all together, but initially just working fragments that would do various things.

[1:12] And I learned to program and, with various other people including Margaret Boden whom you've interviewed, helped develop an undergraduate degree in AI and other things, and also began to do research in AI and so on, which I thought of as doing philosophy, primarily.

[1:29] And then I later moved to the University of Birmingham and I've been there since — I came in 1991 — and I've been retired for a while, but I'm not interested in golf or gardening, so I just go on doing full-time research; my department is happy to keep me on without paying me and to provide space and resources, and I keep meeting bright people at conferences and trying to learn and make progress if I can.

1.2 AI is hard, in part because there are tempting non-problems.


One of the things I learnt and understood more and more over the many years — forty years or so since I first encountered AI — is how hard the problems are, and in part that's because it's very often tempting to think the problem is something different from what it actually is, and then people design solutions to the non-problems. I think of most of my work now as just helping to clarify what the problems are: what is it that we're trying to explain — and maybe this is leading into what you wanted to talk about:

I now think that one of the ways of getting a deep understanding of that is to find out what were the problems that biological evolution solved, because we are a product of many solutions to many problems, and if we just try to go in and work out what the whole system is doing, we may get it all wrong, or badly wrong.

2 What problems of intelligence did evolution solve?


2.1 Intelligence consists of solutions to many evolutionary problems; no single development (e.g. communication) was key to human-level intelligence.


[2:57] Well, first I would challenge that we are the dominant species. I know it looks like that, but actually if you count biomass, if you count number of species, if you count number of individuals, the dominant species are microbes — maybe not one of them, but anyway they're the ones who dominate in that sense. And furthermore we are mostly — we are largely composed of microbes, without which we wouldn't survive.

[3:27] But there are things that make humans (you could say) best at those things, or worst at those things, but it's a combination. And I think it was a collection of developments, of which there wasn't any single one. [] There might be, some people say, human language, which changed everything. By human language, they mean human communication in words, but I think that was a later development from what must have started as the use of internal forms of representation — which are there in nest-building birds, in pre-verbal children, in hunting mammals — because you can't take in information about a complex structured environment in which things can change, and in which you may have to work out what's possible and what isn't possible, without having some way of representing the components of the environment, their relationships, the kinds of things they can and can't do, the kinds of things you might or might not be able to do. And that kind of capability needs internal languages, and I and colleagues [at Birmingham] have been referring to them as generalized languages, because some people object to referring…to using language to refer to something that isn't used for communication. But from that viewpoint, not only humans but many other animals developed abilities to do things to their environment to make it more friendly to themselves, which depended on being able to represent possible futures, possible actions, and work out what's the best thing to do.

[5:13] And nest-building in corvids, for instance — crows, magpies, [hawks], and so on — is way beyond what current robots can do, and in fact I think most humans would be challenged if they had to go and find a collection of twigs, one at a time, maybe bring them with just one hand — or with your mouth — and assemble them into a structure that, you know, is shaped like a nest, and is fairly rigid, and you could trust your eggs in it when the wind blows. But they're doing it, and so … they're not our evolutionary ancestors, but they're an indication — and that example is an indication — of what must have evolved in order to provide control over the environment in that species.

2.2 Speculation about how communication might have evolved from internal languages.


[5:56] And I think hunting mammals, fruit-picking mammals, mammals that can rearrange parts of the environment, provide shelters, needed to have … also needed to have ways of representing possible futures, not just what's there in the environment. I think at a later stage, that developed into a form of communication, or rather the internal forms of representation became usable as a basis for providing [context] to be communicated. And that happened, I think, initially through performing actions that expressed intentions, and probably led to situations where an action (for instance, moving some large object) was performed more easily, or more successfully, or more accurately if it was done collaboratively. So someone who had worked out what to do might start doing it, and then a conspecific might be able to work out what the intention is, because that person has the same forms of representation and can build theories about what's going on, and might then be able to help.

[7:11] You can imagine that if that started happening more (a lot of collaboration based on inferred intentions and plans), then sometimes the inferences might be obscure and difficult, so the actions might be enhanced to provide signals as to what the intention is, and what the best way is to help, and so on.

[7:35] So, this is all handwaving and wild speculation, but I think it's consistent with a large collection of facts which one can look at — and find if one looks for them, but one won't know if [some]one doesn't look for them — about the way children, for instance, who can't yet talk, communicate, and the things they'll do, like going to the mother and turning the face to point in the direction where the child wants it to look, and so on; that's an extreme version of action indicating intention.

[8:03] Anyway. That's a very long roundabout answer to one conjecture: that the use of communicative language is what gave humans their unique power to create and destroy and whatever. And I'm saying that if by that you mean communicative language, then there was something before that which was non-communicative language, and I suspect that non-communicative language continues to play a deep role in all human perception — in mathematical and scientific reasoning, in problem solving — and we don't understand very much about it.

[8:48] I'm sure there's a lot more to be said about the development of different kinds of senses, and the development of brain structures and mechanisms on top of all that, but perhaps I've droned on long enough on that question.

3 How do language and internal states relate to AI?


[9:09] Well, I think most of the human and animal capabilities that I've been referring to are not yet to be found in current robots or [computing] systems, and I think there are two reasons for that: one is that it's intrinsically very difficult; I think that in particular it may turn out that the forms of information processing that one can implement on digital computers as we currently know them may not be as well suited to performing some of these tasks as other kinds of computing about which we don't know so much — for example, I think there may be important special features about chemical computers which we might [talk about in a little bit? find out about].

3.1 In AI, false assumptions can lead investigators astray.


[9:57] So, one of the problems then is that the tasks are hard … but there's a deeper problem as to why AI hasn't made a great deal of progress on these problems that I'm talking about, and that is that most AI researchers assume things — and this is not just AI researchers, but [also] philosophers, and psychologists, and people studying animal behavior — they make assumptions about what it is that animals or humans do, for instance assumptions about what vision is for, or assumptions about what motivation is and how motivation works, or assumptions about how learning works, and then they try — the AI people try — to model [or] build systems that perform those assumed functions. So if you get the functions wrong, then even if you implement some of the functions that you're trying to implement, they won't necessarily perform the tasks that the initial objective was to imitate, for instance the tasks that humans, and nest-building birds, and monkeys and so on can perform.

3.2 Example: Vision is not just about finding surfaces, but about finding affordances.


[11:09] I'll give you a simple example — well, maybe not so simple, but — it's often assumed that the function of vision in humans (and in other animals with good eyesight and so on) is to take in the optical information that hits the retina, forming the (maybe changing — or, really, in our case definitely changing) patterns of illumination where there are sensory receptors that detect those patterns, and then somehow, from that information (plus maybe other information gained from head movement or from comparisons between two eyes), to work out what there was in the environment that produced those patterns. And that is often taken to mean “where were the surfaces off which the light bounced before it came to me?”. So you essentially think of the task of the visual system as being to reverse the image-formation process: the 3D structure's there, the lens causes the image to form on the retina, and then the brain goes back to a model of that 3D structure out there. That's a very plausible theory about vision, and it may be that that's a subset of what human vision does, but I think James Gibson pointed out that that kind of thing is not necessarily going to be very useful for an organism, and it's very unlikely that that's the main function of perception in general, namely to produce some physical description of what's out there.

[12:37] What does an animal need? It needs to know what it can do, what it can't do, what the consequences of its actions will be …. So he introduced the word affordance: from his point of view, the function of vision, of perception, is to inform the organism of what the affordances are for action — where that means what the animal, given its morphology (what it can do with its mouth, its limbs, and so on, and the ways it can move), can do, what its needs are, what the obstacles are, and how the environment supports or obstructs those possible actions.

[13:15] And that's a very different collection of information structures that you need from, say, “where are all the surfaces?”: if you've got all the surfaces, deriving the affordances would still be a major task. So, if you think of the perceptual systems of biological organisms as primarily being devices that provide information about affordances and so on, then the tasks look very different. And most of the people doing research on computer vision in robots, I think, haven't taken all that on board, so they're trying to get machines to do things which, even if they were successful, would not make the robots very intelligent (and in fact, even the ones they're trying to do are not really easy to do, and they don't succeed very well — although there's progress; I shouldn't disparage it too much).
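
To make the contrast concrete, here is a minimal sketch in Python (my illustration, not Gibson's or Sloman's formalism; all type names and fields are hypothetical) of the two kinds of output a visual system might compute. The point is only that a metric surface description and a set of agent-relative affordances are very different information structures, and deriving the second from the first is itself a major task.

    from dataclasses import dataclass

    # "Inverse optics" view: vision outputs a metric model of the scene.
    @dataclass
    class SurfacePatch:
        position: tuple       # (x, y, z) in metres, viewer-centred
        normal: tuple         # unit surface normal
        reflectance: float    # albedo estimate

    # Gibsonian view: vision outputs possibilities for action, relative
    # to this particular agent's morphology, skills, and current needs.
    @dataclass
    class Affordance:
        action: str           # e.g. "grasp", "walk-through", "sit-on"
        target: str           # which part of the scene affords it
        feasible: bool        # given the agent's body and abilities
        cost: float           # rough effort/risk estimate

    # The same scene, described two ways:
    surfaces = [SurfacePatch((0.4, 0.0, 1.2), (0.0, 1.0, 0.0), 0.3)]
    affordances = [Affordance("grasp", "leaf", True, 0.1),
                   Affordance("walk-through", "doorway", True, 1.0)]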


3.3 Online and offline intelligence


[14:10] It gets more complex as animals get more sophisticated. So, I like to make a distinction between online intelligence and offline intelligence. So, for example, if I want to pick something up — like this leaf <he plucks a leaf from the table> — I was able to select it from all the others in there, and while moving my hand towards it, I was able to guide its trajectory, making sure it was going roughly in the right direction — as opposed to going out there, which wouldn't have been able to pick it up — and these two fingers ended up with a portion of the leaf between them, so that I was able to tell when I was ready to do that <he clamps the leaf between two fingers>, and at that point, I clamped my fingers and then I could pick up the leaf.

[14:54] Whereas — and that's an example of online intelligence: during the performance of an action (both from the stage where it's initiated, and during the intermediate stages, and where it's completed) I'm taking in information relevant to controlling all those stages, and that relevant information keeps changing. That means I need stores of transient information which gets discarded almost immediately and replaced or something. That's online intelligence. And there are many forms; that's just one example, and Gibson discussed quite a lot of examples which I won't try to replicate now.

[15:30] But in offline intelligence, you're not necessarily actually performing the actions when you're using your intelligence; you're thinking about possible actions. So, for instance, I could think about how fast or by what route I would get back to the lecture room if I wanted to [get to the next talk] or something. And I know where the door is, roughly speaking, and I know roughly which route I would take — when I go out, whether I should go to the left or to the right — because I've stored information about where the spaces are, where the buildings are, where the door was that we came out of. But in using that information to think about that route, I'm not actually performing the action. I'm not even simulating it in detail: the precise details of direction and speed and when to clamp my fingers, or when to contract my leg muscles when walking, are all irrelevant to thinking about a good route, or thinking about the potential things that might happen on the way. Or what would be a good place to meet someone who I think [for an acquaintance in particular] — [barber] or something — I don't necessarily have to work out exactly where the person's going to stand, or from what angle I would recognize them, and so on.
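
A minimal sketch of the distinction in Python (entirely my illustration; the function names and the sensing/acting interface are hypothetical): online intelligence as a feedback loop that consumes transient readings and discards them as soon as they are used, and offline intelligence as search over stored information, producing a plan while performing no action at all.

    from collections import deque

    def online_grasp(sense_gap, adjust, clamp):
        """Online: transient information guides each stage of the act."""
        while True:
            gap = sense_gap()          # fresh reading, discarded immediately
            if abs(gap) < 0.001:       # fingers are around the leaf
                return clamp()
            adjust(-gap)               # correct the trajectory mid-flight

    def offline_route(stored_map, here, there):
        """Offline: reason over a stored map; produce a plan, not a movement.
        Muscle-level details are irrelevant at this level of abstraction."""
        frontier, seen = deque([[here]]), {here}
        while frontier:
            path = frontier.popleft()
            if path[-1] == there:
                return path            # a route to think about, not to walk
            for nxt in stored_map.get(path[-1], ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])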


[16:46] So, offline intelligence — which I think became not just a human competence; I think there are other animals that have aspects of it: squirrels are very impressive as you watch them. Gray squirrels, at any rate, as you watch them defeating squirrel-proof birdfeeders, seem to have a lot of that [offline intelligence], as well as the online intelligence when they eventually perform the action they've worked out [] that will get them to the nuts.

[17:16] And I think that what happened during our evolution is that there developed mechanisms for acquiring and processing and storing and manipulating information that is more and more remote from the performance of actions. An example is taking in information about where locations are that you might need to go to infrequently: there's a store of a particular type of material that's good for building on roofs of houses or something, out around there in some direction. There's a good place to get water somewhere in another direction. There are people that you'd like to go and visit in another place, and so on.

[17:59] So taking in information about an extended environment and building it into a structure that you can make use of for different purposes is another example of offline intelligence. And when we do that, we sometimes use only our brains, but in modern times, we also learned how to make maps on paper and walls and so on. And it's not clear whether the stuff inside our heads has the same structures as the maps we make on paper: the maps on paper have a different function; they may be used to communicate with others, or meant for looking at, whereas the stuff in your head you don't look at; you use it in some other way.

[18:46] So, what I'm getting at is that there's a great deal of human intelligence (and animal intelligence) which is involved in what's possible in the future, what exists in distant places, what might have happened in the past (sometimes you need to know why something is as it is, because that might be relevant to what you should or shouldn't do in the future, and so on), and I think there was something about human evolution that extended that offline intelligence way beyond that of animals. And I don't think it was just human language (though human language had something to do with it); I think there was something else that came earlier than language which involves the ability to use your offline intelligence to discover something that has a rich mathematical structure.

3.4 Example: Even toddlers use sophisticated geometric knowledge


[19:44] I'll give you a simple example: if you look through a gap, you can see something that's on the other side of the gap. Now, you might see what you want to see, or you might see only part of it. If you want to see more of it, which way would you move? Well, you could either move sideways, and see through the gap — and see roughly the same amount, but a different part of it [if it's a ????] — or you could move towards the gap, and then your view will widen as you approach the gap. Now, there's a bit of mathematics in there, insofar as you are implicitly assuming that information travels in straight lines, and as you go closer to a gap, the straight lines that you can draw from where you are through the gap widen as you approach that gap. Now, there's a kind of theorem of Euclidean geometry in there which I'm not going to try to state very precisely (and as far as I know, wasn't stated explicitly in Euclidean geometry), but it's something every toddler — every human toddler — learns. (Maybe other animals also know it; I don't know.) But there are many more things: actions to perform to get you more information about things, actions to perform to conceal information from other people, actions that will enable you to act on a rigid object in one place in order to produce an effect in another place. So, there's a lot of stuff that involves lines and rotations and angles and speeds and so on that I think humans (and maybe, to a lesser extent, other animals) develop the ability to think about in a generic way. That means that you can take the generalizations out of the particular contexts and then re-use them in new contexts, in ways that I think are not yet represented at all in AI or in theories of human learning in any [] way — although some people are trying to study the learning of mathematics.
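
The toddler's theorem can be made explicit with a little trigonometry (my formalization of the example, not something stated in the interview): if a gap has width w and you stand a distance d back from it, then, assuming information travels in straight lines, the angular width of the region visible through the gap is 2·arctan(w/2d), which grows as d shrinks, whereas moving sideways changes which region you see rather than how wide it is.

    import math

    def visible_angle(gap_width, distance):
        """Angular width of the region seen through a gap, assuming
        information travels in straight lines (the hidden premise)."""
        return 2 * math.atan(gap_width / (2 * distance))

    # Approaching the gap widens the view -- the theorem every toddler learns:
    for d in (4.0, 2.0, 1.0, 0.5):
        print(f"{d} m away -> {math.degrees(visible_angle(1.0, d)):5.1f} degrees")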


4 Animal intelligence


4.1 The priority is cataloguing what competences have evolved, not ranking them.


[22:03] I wasn't going to challenge the claim that humans can do more sophisticated forms of [tracking]; I just wanted to mention that there are some things that other animals can do which are in some ways comparable, and in some ways superior, to [things] that humans can do. In particular, there are species of birds and also, I think, some rodents — squirrels, or something; I don't know enough about the variety — that can hide nuts and remember where they've hidden them, and go back to them. And there have been tests which show that some birds are able to hide tens of nuts — you know, [eighteen] or something — and to remember which ones have been taken, which ones haven't, and so on. And I suspect most humans can't do that. I wouldn't want to say categorically that we couldn't, because humans are very [varied], and also [a few] people can develop particular competences through training. But it's certainly not something I can do.

4.2 AI can be used to test philosophical theories


[23:01] But I also would like to say that I am not myself particularly interested in trying to align animal intelligences along any kind of scale of superiority; I'm just trying to understand what it was that biological evolution produced, and how it works. And I'm interested in AI mainly because I think that when one comes up with theories about how these things work, one needs to have some way of testing the theory. And AI provides ways of implementing and testing theories that were not previously available: Immanuel Kant was trying to come up with theories about how minds work, but he didn't have any kind of a mechanism that he could build to test his theory about the nature of mathematical knowledge, for instance, or about how concepts are developed from babyhood onward. Whereas now, if we do develop a theory, we have a criterion of adequacy, namely that it should be precise enough and rich enough and detailed enough to enable a model to be built. And then we can see if it works.

[24:07] If it works, it doesn't mean we've proved that the theory is correct; it just shows it's a candidate. And if it doesn't work, then it's not a candidate as it stands; it would need to be modified in some way.

5 Is abstract general intelligence feasible?


5.1 It's misleading to compare the brain and its neurons to a computer made of transistors


[24:27] I think there's a lot of optimism based on false clues: for example, one of the false clues is to count the number of neurons in the brain, and then talk about the number of transistors you can fit into a computer or something, and then compare them. But the study of the way synapses work leads some people to say that a typical synapse [] in the human brain has computational power comparable to the Internet of a few years ago, because of the number of different molecules that are doing things, the variety of types of things that are being done in those molecular interactions, and the speed at which they happen: if you somehow count up the number of operations per second or something, you get these comparable figures.
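
For scale, here is a back-of-envelope version of the kind of comparison being criticized, using rough, commonly cited orders of magnitude (illustrative figures of mine, not numbers from the interview). The point is that the verdict swings by orders of magnitude depending on what you decide to count: neurons, synapses, or the molecular processes inside each synapse.

    # Rough, commonly cited orders of magnitude -- illustrative only.
    neurons     = 8.6e10   # neurons in a human brain
    synapses    = 1.0e14   # synapses (roughly 1,000+ per neuron)
    transistors = 1.0e10   # a large present-day processor die

    print(f"transistors per neuron:  {transistors / neurons:8.2f}")
    print(f"synapses per transistor: {synapses / transistors:8.0f}")
    # Counted in neurons, one chip looks roughly brain-sized; counted in
    # synapses (never mind molecules), it is short by a factor of ~10,000.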


5.2 For example, brains may rely heavily on chemical information processing


Now even if the details aren't right, there may just be a lot of information processing going on in brains at the molecular level, not the neural level. And if that's the case, the processing units will be orders of magnitude larger in number than the number of neurons. And it's certainly the case that all the original biological forms of information processing were chemical; there weren't brains around, and still aren't in most microbes. And even when humans grow their brains, the process of starting from a fertilized egg and producing this rich and complex structure is, for much of the time, under the control of chemical computations, chemical information processing — of course combined with physical sorts of materials and energy and so on as well.

[26:25] So it would seem very strange if all that capability was something thrown away once you've got a brain, given all the information processing, the [challenges that were handled in making a brain] … This is handwaving on my part; I'm just saying that we might learn that what brains do is not what we think they do, and that the problems of replicating them are not what we think they are, solely in terms of numerical estimates of time scales, the number of components, and so on.

5.3 Brain algorithms may simply be optimized for certain kinds of information processing other than bit manipulations


[26:56] But apart from that, the other basis of skepticism concerns how well we understand what the problems are. I think there are many people who try to formalize the problems of designing an intelligent system in terms of streams of information, thought of as bit streams or collections of bit streams, and they think of the problems of intelligence as being the construction or detection of patterns in those — and perhaps not just the detection of patterns, but the detection of patterns that are usable for sending out streams to control motors and so on in order to []. And that way of conceptualizing the problem may lead, on the one hand, to oversimplification, so that the things that would be achieved, if those goals were achieved, may be much simpler than, and in some ways inadequate for, the replication of human intelligence, or the matching of human intelligence — or for that matter, squirrel intelligence. But in another way, it may also make the problem harder: it may be that some of the kinds of things that biological evolution has achieved can't be done that way. And one way that might turn out to be the case is not because it's impossible in principle to do some of that information processing on artificial computers based on transistors and other bit-manipulating [], but because the computational complexity of solving problems, of running processes, or of finding solutions to complex problems, is much greater, and therefore you might need a much larger universe than we have available in order to do things.

5.4 Example: find the shortest path by dangling strings


[28:55] Then, if the underlying information-processing mechanisms were different, they might be better tailored to particular sorts of computation. There's a [] example, which is finding the shortest route if you've got a collection of roads, and they may be curved roads, with lots of tangled routes from A to B to C, and so on. And if you start at A and you want to get to Z — a place somewhere on that map — the process of finding the shortest route will involve searching through all these different possibilities and rejecting some that are longer than others, and so on. But suppose you make a model of that map out of string, where the strings are all laid out on the map and so have the lengths of the routes. Then if you hold the two knots in the string — it's a network of string — which correspond to the start point and end point, and pull, then the bits of string that you're left with in a straight line will give you the shortest route, and that process of pulling just gets you the solution very rapidly in a parallel computation, where all the others just hang by the wayside, so to speak.
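
A digital analogue of the string computation (my sketch, not something proposed in the interview) is parallel edge relaxation, Bellman-Ford style: every "string" (a, b, length) simultaneously enforces that the distances of its two ends can differ by at most its length, and repeated relaxation lets the network settle into shortest-path distances, much as the taut strings do when you pull.

    def pull_strings(strings, start):
        """Each string (a, b, length) constrains dist[b] <= dist[a] + length
        and vice versa. Relaxing until nothing changes is the digital
        counterpart of the whole network being pulled taut at once."""
        nodes = {n for a, b, _ in strings for n in (a, b)}
        dist = {n: float("inf") for n in nodes}
        dist[start] = 0.0
        changed = True
        while changed:                        # the network 'settles'
            changed = False
            for a, b, length in strings:
                for u, v in ((a, b), (b, a)):
                    if dist[u] + length < dist[v]:
                        dist[v] = dist[u] + length
                        changed = True
        return dist

    # A small tangle of routes from A to Z:
    strings = [("A", "B", 2.0), ("B", "Z", 2.0), ("A", "C", 1.0),
               ("C", "Z", 5.0), ("B", "C", 0.5)]
    print(pull_strings(strings, "A")["Z"])    # 3.5, via the taut path A-C-B-Z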


5.5 In sum, we know surprisingly little about the kinds of problems that evolution solved, and the manner in which they were solved.


[30:15] Now, I'm not saying brains can build networks of string and pull them or anything like that; that's just an illustration of how, if you have the right representation, correctly implemented — or suitably implemented — for a problem, then you can avoid very combinatorially complex searches, which will maybe grow exponentially with the number of components in your map; whereas with this thing, the time it takes won't depend on how many strings you've [got on the map]; you just pull, and it will depend only on the shortest route that exists in there, even if that shortest route wasn't obvious on the original map.

[30:59] So that's a rather long-winded, roundabout way of supporting the conjecture that there may be something about the way molecules perform computations, where they have the combination of continuous change (as things move through space and come together and move apart, and whatever) and also the ability to snap into states that then persist — so, [as you learn from] quantum mechanics, you can have stable molecular structures which are quite hard to separate (though in catalytic processes you can separate them, or with extreme temperatures, or strong forces), but which may nevertheless be able to move very rapidly in some conditions in order to perform computations.

[31:49] Now there may be things about that kind of structure that enable searching for solutions to certain classes of problems to be done much more efficiently (by the brain) than anything we could do with computers. It's just an open question.

[32:04] So it might turn out that we need new kinds of technology that aren't on the horizon in order to replicate the functions that animal brains perform — or it might not. I just don't know. I'm not claiming that there's strong evidence for that; I'm just saying that it might turn out that way, partly because I think we know less than many people think we know about what biological evolution achieved.

[32:28] There are some other possibilities: we may just find out that there are shortcuts no one ever thought of, and it will all happen much more quickly — I have an open mind; I'd be surprised, but it could turn out that way. There is something that worries me much more than the singularity that most people talk about, which is machines achieving human-level intelligence and perhaps taking over [the] planet or something. There's what I call the singularity of cognitive catch-up …

6 A singularity of cognitive catch-up


6.1 What if it will take a lifetime to learn enough to make something new?


… SCC, the singularity of cognitive catch-up, which I think we're close to, or maybe have already reached — I'll explain what I mean by that. One of the products of biological evolution — and this is one of the answers to your earlier questions which I didn't get on to — is that humans have not only the ability to make discoveries that none of their ancestors have ever made, but also the ability to shorten the time required for similar achievements to be reached by their offspring and their descendants. So once we have, for instance, worked out ways of doing complex computations, or ways of building houses, or ways of finding our way around, our children don't need to work it out for themselves by the same lengthy trial-and-error procedure; we can help them get there much faster.

Okay, well, what I've been referring to as the singularity of cognitive catch-up depends on the fact — fairly obvious, and it's often been commented on — that in the case of humans, it's not necessary for each generation to learn what previous generations learned in the same way. We can speed up learning: once something has been learned, [it is able to] be learned by new people. And that has meant that the social processes that support that kind of education of the young can enormously accelerate things: what would have taken perhaps thousands [or] millions of years for evolution to produce can happen in a much shorter time.

[34:54] But here's the catch: in order for a new advance to happen — so for something new to be discovered that wasn't there before, like Newtonian mechanics, or the theory of relativity, or Beethoven's music or [style] or whatever — the individuals have to have traversed a significant amount of what their ancestors have learned, even if they do it much faster than their ancestors, to get to the point where they can see the gaps, the possibilities for going further than their ancestors, or their parents or whatever, have done.

[35:27] Now in the case of knowledge of science, mathematics, philosophy, engineering and so on, there's been a lot of accumulated knowledge. And humans are living a bit longer than they used to, but they're still living for [whatever it is], a hundred years, or for most people, less than that. So you can imagine that there might come a time when, in a normal human lifespan, it's not possible for anyone to learn enough to understand the scope and limits of what's already been achieved, in order to see the potential for going beyond it and to build on what's already been done to make those future steps.

[36:10] So if we reach that stage, we will have reached the singularity of cognitive catch-up, because the process of education that enables individuals to learn faster than their ancestors did is the catching-up process, and it may just be that at some point we reach a point where catching up can only just happen within the lifetime of an individual, and after that they're dead and they can't go beyond. And I have some evidence that there's a lot of that around, because I see a lot of people coming up with what they think of as new ideas which they've struggled to come up with, but actually they just haven't taken in some of what was done [] by other people, in other places before them. And that's despite the availability of search engines, which make it easier for people to get the information — for instance, when I was a student, if I wanted to find out what other people had done in the field, it was a laborious process of going to the library and getting books, whereas now I can often do things in seconds that would have taken hours. So that means that if seconds [are needed] for that kind of work, my lifespan has been extended by a factor of ten or something. So maybe that delays the singularity, but it may not delay it enough. But that's an open question; I don't know. And it may just be that in some areas this is more of a problem than in others. For instance, it may be that in some kinds of engineering, we're handing over more and more of the work to machines anyway, and they can go on doing it. So for instance, most of the production of computers now is done by computer-controlled machines — although some of the design work is done by humans, a lot of the detail of the design is done by computers, and they produce the next generation, which then produces the next generation, and so on.

[37:57] I don't know whether humans can go on having major advances; it'll be kind of sad if we can't.

7 Spatial reasoning: a difficult problem


[38:15] Okay, well, there are different problems [in] mathematics, and they have to do with different properties. So for instance, a lot of mathematics can be expressed in terms of logical structures or algebraic structures, and those are pretty well suited for manipulation on computers, and if a problem can be specified using that logical/algebraic notation, and the solution method requires creating something in that sort of notation, then computers are pretty good, and there are lots of mathematical tools around — there are theorem provers and theorem checkers, and all kinds of things, which couldn't have existed fifty, sixty years ago, and they will continue getting better.

But there was something that I was alluding to earlier when I gave the example of how you can reason about what you will see by changing your position in relation to a door. What you are doing there is using your grasp of spatial structures and of how, as one spatial relationship changes — namely, you come closer to the door, or move sideways and parallel to the wall, or whatever — other spatial relationships change in parallel: the lines from your eyes through to the parts of the room on the other side of the doorway spread out more as you go towards the doorway, and as you move sideways, they don't spread out differently, but they access different parts of the room.

Now, those are examples of ways of thinking about relationships and changing relationships which are not the same as thinking about what happens if I replace this symbol with that symbol, or if I substitute this expression in that expression in a logical formula. And at the moment, I do not believe that there is anything in AI, amongst the mathematical reasoning community, the theorem-proving community, that can model the processes that go on when a young child starts learning to do Euclidean geometry and is taught things about — for instance, I can give you a proof that the angles of any triangle add up to a straight line, 180 degrees.

7.1 Example: Spatial proof that the angles of any triangle add up to a half-circle


There are standard proofs which involve starting with one triangle and then adding a line parallel to the base; but one of my former students, Mary Pardoe, came up with a proof which I will demonstrate with this <he holds up a pen> — can you see it? If I have a triangle here that's got three sides, and I put this thing on one side — let's say the bottom — I can rotate it until it lies along the second…another side, and then maybe move it up to the other end. Then I can rotate it again, until it lies on the third side, and move it back to the other end. And then I'll rotate it again and it'll eventually end up on the original side, but it will have changed the direction it's pointing in — and it won't have crossed over itself, so it will have gone through a half-circle, and that says that the three angles of a triangle add up to the rotations of half a circle, which is a beautiful kind of proof, and almost anyone can understand it. Some mathematicians don't like it, because they say it hides some of the assumptions, but nevertheless, as far as I'm concerned, it's an example of a human ability to do reasoning which, once you've understood it, you can see will apply to any triangle — it's got to be a planar triangle — not a triangle on a globe, because then the angles can add up to more than … you can have three right angles if you have a line on the equator, and a line going up to the north pole of the earth, and then you have a right angle, and then another line going down to the equator, and you have a right angle, right angle, right angle, and they add up to more than a straight line. But that's because the triangle isn't in the plane; it's on a curved surface. In fact, that's one of the definitional differences you can take between planar and curved surfaces: how much the angles of a triangle add up to. But our ability to visualize and notice the generality in that process, and to see that you're going to be able to do the same thing using triangles that stretch in all sorts of ways, or if it's a million times as large, or if it's drawn in different colors or whatever — none of that's going to make any difference to the essence of that process. And that ability to see the commonality in a spatial structure enables you to draw some conclusions with complete certainty — subject to the possibility that sometimes you make mistakes, but when you make mistakes, you can discover them, as has happened in the history of geometrical theorem proving. Imre Lakatos had a wonderful book called Proofs and Refutations — which I won't try to summarize — but he has examples: mistakes were made; that was because people didn't always realize there were subtle subcases which had slightly different properties, and they didn't take account of that. But once they're noticed, you rectify that.
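
The pen demonstration is easy to check numerically (a sketch of my own, under the planar assumption the interview makes explicit): compute the interior angles of an arbitrary planar triangle from its side vectors and confirm that they sum to a half-turn. Of course, a numeric check covers one triangle per run; the point of the demonstration is precisely that a human can see the generality without checking cases.

    import math, random

    def interior_angles(p, q, r):
        """Interior angles of the planar triangle pqr."""
        def angle_at(a, b, c):
            v1 = (b[0] - a[0], b[1] - a[1])
            v2 = (c[0] - a[0], c[1] - a[1])
            dot = v1[0] * v2[0] + v1[1] * v2[1]
            return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))
        return angle_at(p, q, r), angle_at(q, p, r), angle_at(r, p, q)

    # Stretch the triangle any way you like; the sum stays a half-turn:
    pts = [(random.uniform(-1e3, 1e3), random.uniform(-1e3, 1e3))
           for _ in range(3)]
    print(math.degrees(sum(interior_angles(*pts))))   # ~180.0 (planar only!)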


7.2 Geometric results are fundamentally different than experimental results in chemistry or physics.


[43:28] But it's not the same as doing experiments in chemistry and physics, where you can't be sure it'll be the same on [] or at a high temperature, or in a very strong magnetic field; with geometric reasoning, in some sense you've got the full information in front of you, even if you don't always notice an important part of it. So, that kind of reasoning (as far as I know) is not implemented anywhere in a computer. And most people who do research on trying to model mathematical reasoning don't pay any attention to that, because … they just don't think about it. They start from somewhere else, maybe because of how they were educated. I was taught Euclidean geometry at school. Were you?

(Adam Ford: Yeah)

Many people are not now. Instead they're taught set theory, and logic, and arithmetic, and [algebra], and so on. And so they don't use that bit of their brains, without which we wouldn't have built any of the cathedrals, and all sorts of things we now depend on.

8 Is near-term artificial general intelligence likely?


8.1 Two interpretations: a single mechanism for all problems, or many mechanisms unified in one program.


[44:35] Well, this relates to what's meant by general. And when I first encountered the AGI community, I thought that what they all meant by general intelligence was uniform intelligence — intelligence based on some common simple (maybe not so simple, but) single powerful mechanism or principle of inference. And there are some people in the community who are trying to produce things like that, often in connection with algorithmic information theory and computability of information, and so on. But there's another sense of general, which means that the system of general intelligence can do lots of different things, like perceive things, understand language, move around, make things, and so on — perhaps even enjoy a joke; that's something that's not nearly on the horizon, as far as I know. Enjoying a joke isn't the same as being able to make laughing noises.

Given, then, that there are these two notions of general intelligence — one that looks for a single uniform, possibly simple, mechanism or collection of ideas and notations and algorithms that will deal with any problem that's solvable, and another that's general in the sense that it can do lots of different things that are combined into an integrated architecture (which raises lots of questions about how you combine these things and make them work together) — we humans, certainly, are of the second kind: we do all sorts of different things, and other animals also seem to be of the second kind, though perhaps not as general as humans. Now, it may turn out that in some near future time — who knows, decades, a few decades — you'll be able to get machines that are capable of solving any problem that is solvable, in a time that will depend on the nature of the problem, and they will be able to do it in some sort of tractable time — of course, there are some solvable problems that would require a larger universe and a longer history than the history of the universe, but apart from that constraint, these machines will be able to do anything []. But to be able to do some of the kinds of things that humans can do — like the kinds of geometrical reasoning where you look at the shape and you abstract away from the precise angles and sizes and shapes and so on, and realize there's something general here, as must have happened when our ancestors first made the discoveries that were eventually put together in Euclidean geometry …

It may be that that requires mechanisms of a kind that we don't know anything about at the moment. Maybe brains are using molecules and rearranging molecules in some way that supports that kind of reasoning. I'm not saying they are — I don't know, I just don't see any simple…any obvious way to map that kind of reasoning capability onto what we currently do on computers. There is — and I just mentioned this briefly beforehand — there is a kind of thing that's sometimes thought of as a major step in that direction, namely that you can build a machine (or a software system) that can represent some geometrical structure, and then be told about some change that's going to happen to it, and it can predict in great detail what'll happen. And this happens for instance in game engines, where you say we have all these blocks on the table and I'll drop one other block, and then [the thing] uses Newton's laws and properties of rigidity of the parts and the elasticity, and also stuff about geometry and space and so on, to give you a very accurate representation of what'll happen when this brick lands on this pile of things [it'll bounce and go off, and so on]. And with more memory and more CPU power, you can increase the accuracy — but that's totally different from looking at one example and working out what will happen in a whole range of cases at a higher level of abstraction, whereas the game engine does it in great detail for just this case, with just those precise things, and it won't even know what the generalizations are that it's using that would apply to others []. So, in that sense, [we] may get AGI — artificial general intelligence — pretty soon, but it'll be limited in what it can do. And the other kind of general intelligence, which combines all sorts of different things, including human spatial geometrical reasoning, and maybe other things, like the ability to find things funny and to appreciate artistic features and other things, may need forms of pattern-mechanism, and I have an open mind about that.
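
The contrast can be made concrete (my illustration, not a real game-engine API): a numerical integrator predicts one fully specified case in as much detail as you like, while the abstraction is a single closed-form statement that covers every case at once, and the integrator never knows it.

    import math

    def simulate_drop(height, dt=1e-4, g=9.81):
        """Game-engine style: step one concrete case in great detail."""
        h, v, t = height, 0.0, 0.0
        while h > 0:
            v += g * dt                # integrate Newton's laws...
            h -= v * dt
            t += dt
        return t                       # ...and learn about this case only

    def drop_time(height, g=9.81):
        """The abstraction: one line that covers every height at once."""
        return math.sqrt(2 * height / g)

    print(simulate_drop(5.0), drop_time(5.0))   # both ~1.01 seconds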


9 Abstract General Intelligence impacts


[49:53] Well, as far as the first type's concerned, it could be useful for all kinds of applications — there are people who worry that where there's a system that has that type of intelligence, it might in some sense take over control of the planet. Well, humans often do stupid things, and they might do something stupid that would lead to disaster, but I think it's more likely that there would be other things [that] lead to disaster — population problems, using up all the resources, destroying ecosystems, and whatever. But certainly it would go on being useful to have these calculating devices. Now, as for the second kind, I don't know — if we succeeded at putting together all the parts that we find in humans, we might just make an artificial human, and then we might have some of them as our friends, and some of them we might not like, and some of them might become teachers or whatever, composers — but that raises a question: could they, in some sense, be superior to us, in their learning capabilities, their understanding of human nature, or maybe their wickedness or whatever? These are all issues on which I expect the best science fiction writers would give better answers than anything I could do. But I did once fantasize, back in 1978, that perhaps if we achieved that kind of thing, they would be wise, and gentle and kind, and would realize that humans are an inferior species that, you know, have some good features, so they'd keep us in some kind of secluded…restrictive kind of environment, keep us away from dangerous weapons, and so on, and find ways of cohabiting with us. But that's just fantasy.

Adam Ford: Awesome. Yeah, there's an interesting story, With Folded Hands, where [the computers] want to take care of us and want to reduce suffering, and end up lobotomizing everybody [but] keeping them alive so as to reduce the suffering.

Aaron Sloman: Not all that different from Brave New World, where it was done with drugs and so on, but different humans are given different roles in that system, yeah.

There's also The Time Machine, H.G. Wells, where … in the distant future, humans have split in two: the Eloi, I think they were called, they lived underground, they were the [] ones, and then — no, the Morlocks lived underground; the Eloi lived on the planet; they were pleasant and pretty but not very bright, and so on, and they were fed on by …

Adam Ford: [] in the future.

Aaron Sloman: As I was saying, if you ask science fiction writers, you'll probably come up with a wide variety of interesting answers.

Adam Ford: I certainly have; I've spoken to [] of Birmingham, and Sean Williams, … who else?

Aaron Sloman: Did you ever read a story by E.M. Forster called The Machine Stops — a very short story; it's on the Internet somewhere — it's about a time when people sit … and this was written in about [1909], so it's about…over a hundred years ago … people are in their rooms, they sit in front of screens, and they type things, and they communicate with one another that way, and they don't meet; they have debates, and they give lectures to their audiences that way, and then there's a woman whose son says “I'd like to see you” and she says “What's the point? You've got me at this point” but he wants to come and talk to her — I won't tell you how it ends, but.

Adam Ford: Reminds me of the Internet.

Aaron Sloman: Well, yes; he invented … it was just extraordinary that he was able to do that, before most of the components that we need for it existed.

Adam Ford: [Another person who did that] was Vernor Vinge [] True Names.

Aaron Sloman: When was that written?

Adam Ford: The seventies.

Aaron Sloman: Okay, well, a lot of the technology was already around then. The original bits of the Internet were working in about 1973 … in 1974, I was sitting at Sussex University trying to use…learn LOGO, the programming language, to decide whether it was going to be useful for teaching AI, and I was sitting [at a] paper teletype — there was paper coming out, transmitting ten characters a second from Sussex to the UCL computer lab by telegraph cable, from there to somewhere in Norway via another cable, and from there by satellite to California, to a Xerox [] research center where they had implemented a computer with a LOGO system on it, with someone I had met previously in Edinburgh, Danny Bobrow, and he allowed me to have access to this system. So there I was, typing. And furthermore, it was duplex typing, so every character I typed didn't show up on my terminal until it had gone all the way there and echoed back, so I would type, and the characters would come back four seconds later.

[55:26] But that was the Internet, and I think Vernor Vinge was writing after that kind of thing had already started, but I don't know. Anyway.

[55:41] Another … I mentioned H.G. Wells and The Time Machine. I recently discovered, because David Lodge had written a sort of semi-novel about him, that he had invented Wikipedia, in advance — he had this notion of an encyclopedia that was free to everybody, and everybody could contribute and [collaborate on it]. So, go to the science fiction writers to find out the future — well, a range of possible futures.

Adam Ford: Well, the thing is with science fiction writers, they have to maintain some sort of interest for their readers; after all, the science fiction which reaches us is the stuff that publishers want to sell, and so there's a little bit of a … a bias towards making a plot device there, and so the dramatic sort of appeals to our amygdala, our lizard brain; we'll sort of stay there obviously to some extent. But I think that they do come up with sort of amazing ideas; I think it's worth trying to make these predictions; I think that we should spend more time on strategic forecasting, I mean, take that seriously.

Aaron Sloman: Well, I'm happy to leave that to others; I just want to try to understand these problems that bother me about how things work. And it may be that some would say that's irresponsible, if I don't think about what the implications will be. Well, understanding how humans work might enable us to make [] humans — I suspect it won't happen in this century; I think it's going to be too difficult.

Date: 2013-10-04 18:49:53 UTC


Author: Dylan Holmes


Org version 7.7 with Emacs version 23
