changeset 57:a72ac82bb785

add dylan's sloman transcript.
author Robert McIntyre <rlm@mit.edu>
date Tue, 13 Aug 2013 00:47:01 -0400
parents 05e666949a4f
children 82cfd2b29db6
files css/sloman.css org/sloman.org
diffstat 2 files changed, 1047 insertions(+), 0 deletions(-)
line diff
     1.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     1.2 +++ b/css/sloman.css	Tue Aug 13 00:47:01 2013 -0400
     1.3 @@ -0,0 +1,111 @@
     1.4 +/*** RESETS ***/
     1.5 +html,body{margin:0;padding:0;color:#4f4030;}
     1.6 +h1,h2,h3,h4,h5,h6 {font-size:inherit;line-height:inherit;margin:0;padding:0;font-weight:inherit;}
     1.7 +a{color:inherit;}
     1.8 +
     1.9 +/*** CORE SETTINGS ***/
    1.10 +
    1.11 +body {
    1.12 +    font-size:16px;
    1.13 +    line-height:1.25em;
    1.18 +    /*width:36em;
    1.19 +    margin:0px auto;*/
    1.20 +    padding:2.5em 5em;
    1.21 +    background:#fff;
    1.22 +}
    1.26 +
    1.27 +.outline-3 a {
     1.28 +    color:#369;
    1.29 +}
    1.30 +/*** TITLES AND OTHER HEADERS ***/
    1.31 +h1.title {
    1.32 +    text-align:left;
    1.33 +    font-size:1.5em;
     1.34 +    line-height:1em;
    1.36 +    font-weight:bold;
    1.37 +}
    1.38 +
    1.39 +
    1.40 +h2 {
    1.41 +    font-weight:bold;
    1.42 +    margin-top:2.5em;
    1.43 +}
    1.44 +h3 {
    1.45 +    text-align:center;
    1.46 +    font-family:Cabin,Helvetica,Arial,sans-serif;
    1.47 +    line-height:1em;
    1.48 +    padding-top:1.5em;
    1.49 +    padding-bottom:1.5em;
    1.50 +    border-top:1px dotted #ccc;
     1.51 +    color:#7f674d;
    1.52 +}
    1.53 +h4 {
    1.54 +    font-weight:bold;
    1.55 +    font-size:1.25em;
    1.56 +    line-height:1em;
    1.57 +    margin-top:2em;
    1.58 +}
    1.59 +
    1.60 +.tag {
    1.61 +    background:inherit;
     1.62 +    color:#4d657f;
    1.64 +    font-family:Cabin,Helvetica,Arial,sans-serif;
    1.65 +    font-size:14px;
    1.66 +}
    1.67 +
    1.68 +/*** TABLE OF CONTENTS ***/
    1.69 +#text-table-of-contents {
    1.70 +    margin-bottom:4em;
    1.71 +}
    1.72 +#text-table-of-contents ul {
    1.73 +    list-style-type:none;
    1.74 +    padding:0;
    1.75 +    font-weight:bold;
    1.76 +    font-family:Cabin,Helvetica,Arial,sans-serif;
    1.77 +}
    1.78 +
    1.79 +#text-table-of-contents > ul li {
     1.80 +    line-height:2.5em;
    1.82 +    color:#7f674d;
    1.83 +} 
    1.84 +
    1.85 +
    1.86 +#text-table-of-contents ul ul li {
     1.87 +    line-height:1.25em;
    1.89 +    padding-left:1.4em;
    1.90 +    font-weight:normal;
    1.91 +    font-family:Georgia,Times,Palatino,serif;
    1.92 +    color:#4f4030;
    1.93 +}
    1.94 +#text-table-of-contents a {
    1.95 +    text-decoration:none;
    1.96 +}
    1.97 +#text-table-of-contents a:hover {
    1.98 +    text-decoration:underline;
    1.99 +}
   1.100 +
   1.101 +
   1.102 +
    1.103 +/*** AUXILIARY STUFF (quotes, verses) ***/
   1.104 +
    1.105 +.verse {
    1.106 +    font-size:1em;
    1.107 +    line-height:1.125em;
    1.108 +    color:inherit;
    1.109 +    border-left:0.25em solid #eee;
    1.110 +    padding-left:0.75em;
    1.111 +}
     2.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     2.2 +++ b/org/sloman.org	Tue Aug 13 00:47:01 2013 -0400
     2.3 @@ -0,0 +1,936 @@
     2.4 +#+TITLE:Transcript of Aaron Sloman - Artificial Intelligence - Psychology - Oxford Interview
     2.5 +#+AUTHOR:Dylan Holmes
     2.6 +#+EMAIL:
     2.7 +#+STYLE: <link rel="stylesheet" type="text/css" href="../css/sloman.css" /> 
     2.8 +
     2.9 +
    2.10 +#+BEGIN_QUOTE
    2.26 +*Editor's note:* This is a working draft transcript which I made of
    2.27 +[[http://www.youtube.com/watch?feature=player_detailpage&v=iuH8dC7Snno][this nice interview]] of Aaron Sloman. Having just finished one
    2.28 +iteration of transcription, I still need to go in and clean up the
    2.29 +formatting and fix the parts that I misheard, so you can expect the
    2.30 +text to improve significantly in the near future.
    2.31 +
    2.32 +To the extent that this is my work, you have my permission to make
    2.33 +copies of this transcript for your own purposes. Also, feel free to
    2.34 +e-mail me with comments or corrections.
    2.35 +
    2.36 +You can send mail to =transcript@aurellem.org=.
    2.37 +
    2.38 +Cheers,
    2.39 +
    2.40 +---Dylan
    2.41 +#+END_QUOTE
    2.42 +
    2.43 +
    2.44 +
    2.45 +* Introduction
    2.46 +
    2.47 +** Aaron Sloman evolves into a philosopher of AI
    2.48 +[0:09] My name is Aaron Sloman. My first degree many years ago in
     2.49 +Cape Town University was in Physics and Mathematics, and I intended to
    2.50 +go and be a mathematician. I came to Oxford and encountered
    2.51 +philosophers --- I had started reading philosophy and discussing
    2.52 +philosophy before then, and then I found that there were philosophers
    2.53 +who said things about mathematics that I thought were wrong, so
     2.54 +I gradually got more and more involved in [philosophy] discussions and
     2.55 +switched to doing a philosophy DPhil. Then I became a philosophy
     2.56 +lecturer and about six years later, I was introduced to artificial
     2.57 +intelligence when I was a lecturer at Sussex University in philosophy,
     2.58 +and I very soon became convinced that the best way to make progress in
     2.59 +both areas of philosophy (including the philosophy of mathematics, which I
     2.60 +felt I hadn't dealt with adequately in my DPhil), the philosophy
     2.61 +of mind, the philosophy of language and all
    2.62 +those things---the best way was to try to design and test working
    2.63 +fragments of mind and maybe eventually put them all together but
    2.64 +initially just working fragments that would do various things.
    2.65 +
     2.66 +[1:12] And I learned to program and, with various other people
     2.67 +including Margaret Boden whom you've interviewed, developed---helped
    2.68 +develop an undergraduate degree in AI and other things and also began
    2.69 +to do research in AI and so on which I thought of as doing philosophy,
    2.70 +primarily.
    2.71 +
    2.72 +[1:29] And then I later moved to the University of Birmingham and I
    2.73 +was there --- I came in 1991 --- and I've been retired for a while but
    2.74 +I'm not interested in golf or gardening so I just go on doing full
    2.75 +time research and my department is happy to keep me on without paying
     2.76 +me and provide space and resources, and I go on meeting bright people
     2.77 +at conferences and trying to learn and make progress if I can.
    2.78 +
    2.79 +** AI is hard, in part because there are tempting non-problems.
    2.80 +
    2.81 +One of the things I learnt and understood more and more over the many
    2.82 +years --- forty years or so since I first encountered AI --- is how
    2.83 +hard the problems are, and in part that's because it's very often
    2.84 +tempting to /think/ the problem is something different from what it
    2.85 +actually is, and then people design solutions to the non-problems, and
    2.86 +I think of most of my work now as just helping to clarify what the
    2.87 +problems are: what is it that we're trying to explain --- and maybe
    2.88 +this is leading into what you wanted to talk about:
    2.89 +
    2.90 +I now think that one of the ways of getting a deep understanding of
    2.91 +that is to find out what were the problems that biological evolution
    2.92 +solved, because we are a product of /many/ solutions to /many/
    2.93 +problems, and if we just try to go in and work out what the whole
    2.94 +system is doing, we may get it all wrong, or badly wrong.
    2.95 +
    2.96 +
    2.97 +* What problems of intelligence did evolution solve?
    2.98 +
    2.99 +** Intelligence consists of solutions to many evolutionary problems; no single development (e.g. communication) was key to human-level intelligence.
   2.100 +
   2.101 +[2:57] Well, first I would challenge that we are the dominant
   2.102 +species. I know it looks like that but actually if you count biomass,
   2.103 +if you count number of species, if you count number of individuals,
   2.104 +the dominant species are microbes --- maybe not one of them but anyway
   2.105 +they're the ones who dominate in that sense, and furthermore we are
   2.106 +mostly --- we are largely composed of microbes, without which we
   2.107 +wouldn't survive.
   2.108 +
   2.109 +
   2.110 +# ** Many nonlinguistic competences require sophisticated internal representations
   2.111 +[3:27] But there are things that make humans (you could say) best at
   2.112 +those things, or worst at those things, but it's a combination.  And I
   2.113 +think it was a collection of developments of which there isn't any
   2.114 +single one. [] there might be, some people say, human language which
    2.115 +changed everything. By human language, they mean human
   2.116 +communication in words, but I think that was a later development from
   2.117 +what must have started as the use of /internal/ forms of
   2.118 +representation --- which are there in nest-building birds, in
   2.119 +pre-verbal children, in hunting mammals --- because you can't take in
   2.120 +information about a complex structured environment in which things can
   2.121 +change and you may have to be able to work out what's possible and
   2.122 +what isn't possible, without having some way of representing the
   2.123 +components of the environment, their relationships, the kinds of
   2.124 +things they can and can't do, the kinds of things you might or might
   2.125 +not be able to do --- and /that/ kind of capability needs internal
   2.126 +languages, and I and colleagues [at Birmingham] have been referring to
   2.127 +them as generalized languages because some people object to
   2.128 +referring...to using language to refer to something that isn't used
   2.129 +for communication. But from that viewpoint, not only humans but many
   2.130 +other animals developed abilities to do things to their environment to
   2.131 +make them more friendly to themselves, which depended on being able to
   2.132 +represent possible futures, possible actions, and work out what's the
   2.133 +best thing to do.
   2.134 +
   2.135 +[5:13] And nest-building in corvids for instance---crows, magpies,
   2.136 + [hawks], and so on --- are way beyond what current robots can do, and
   2.137 + in fact I think most humans would be challenged if they had to go and
   2.138 + find a collection of twigs, one at a time, maybe bring them with just
   2.139 + one hand --- or with your mouth --- and assemble them into a
   2.140 + structure that, you know, is shaped like a nest, and is fairly rigid,
    2.141 + and you could trust your eggs in it when the wind blows. But they're
   2.142 + doing it, and so ... they're not our evolutionary ancestors, but
   2.143 + they're an indication --- and that example is an indication --- of
   2.144 + what must have evolved in order to provide control over the
   2.145 + environment in /that/ species.
   2.146 +
    2.147 +** Speculation about how communication might have evolved from internal languages.
   2.148 +[5:56] And I think hunting mammals, fruit-picking mammals, mammals
   2.149 +that can rearrange parts of the environment, provide shelters, needed
   2.150 +to have .... also needed to have ways of representing possible
   2.151 +futures, not just what's there in the environment. I think at a later
   2.152 +stage, that developed into a form of communication, or rather the
   2.153 +/internal/ forms of representation became usable as a basis for
   2.154 +providing [context] to be communicated. And that happened, I think,
   2.155 +initially through performing actions that expressed intentions, and
    2.156 +probably led to situations where an action (for instance, moving some
   2.157 +large object) was performed more easily, or more successfully, or more
   2.158 +accurately if it was done collaboratively. So someone who had worked
   2.159 +out what to do might start doing it, and then a conspecific might be
   2.160 +able to work out what the intention is, because that person has the
   2.161 +/same/ forms of representation and can build theories about what's
   2.162 +going on, and might then be able to help.
   2.163 +
   2.164 +[7:11] You can imagine that if that started happening more (a lot of
   2.165 +collaboration based on inferred intentions and plans) then sometimes
   2.166 +the inferences might be obscure and difficult, so the /actions/ might
   2.167 +be enhanced to provide signals as to what the intention is, and what
   2.168 +the best way is to help, and so on.
   2.169 +
   2.170 +[7:35] So, this is all handwaving and wild speculation, but I think
   2.171 +it's consistent with a large collection of facts which one can look at
   2.172 +--- and find if one looks for them, but one won't know if [some]one
   2.173 +doesn't look for them --- about the way children, for instance, who
   2.174 +can't yet talk, communicate, and the things they'll do, like going to
   2.175 +the mother and turning the face to point in the direction where the
   2.176 +child wants it to look and so on; that's an extreme version of action
   2.177 +indicating intention.
   2.178 +
   2.179 +[8:03] Anyway. That's a very long roundabout answer to one conjecture
   2.180 +that the use of communicative language is what gave humans their
   2.181 +unique power to create and destroy and whatever, and I'm saying that
   2.182 +if by that you mean /communicative/ language, then I'm saying there
   2.183 +was something before that which was /non/-communicative language, and I
   2.184 +suspect that noncommunicative language continues to play a deep role
   2.185 +in /all/ human perception ---in mathematical and scientific reasoning, in
   2.186 +problem solving --- and we don't understand very much about it.
   2.187 +
   2.188 +[8:48]
   2.189 +I'm sure there's a lot more to be said about the development of
    2.190 +different kinds of senses, the development of brain structures and
    2.191 +mechanisms [underlying] all that, but perhaps I've droned on long enough
   2.192 +on that question.
   2.193 +
   2.194 +
   2.195 +* How do language and internal states relate to AI?
   2.196 +
   2.197 +[9:09] Well, I think most of the human and animal capabilities that
   2.198 +I've been referring to are not yet to be found in current robots or
   2.199 +[computing] systems, and I think there are two reasons for that: one
   2.200 +is that it's intrinsically very difficult; I think that in particular
   2.201 +it may turn out that the forms of information processing that one can
   2.202 +implement on digital computers as we currently know them may not be as
   2.203 +well suited to performing some of these tasks as other kinds of
   2.204 +computing about which we don't know so much --- for example, I think
   2.205 +there may be important special features about /chemical/ computers
   2.206 +which we might [talk about in a little bit? find out about]. 
   2.207 +
   2.208 +** In AI, false assumptions can lead investigators astray.
   2.209 +[9:57] So, one of the problems then is that the tasks are hard ... but
   2.210 +there's a deeper problem as to why AI hasn't made a great deal of
   2.211 +progress on these problems that I'm talking about, and that is that
   2.212 +most AI researchers assume things---and this is not just AI
    2.213 +researchers, but [also] philosophers, and psychologists, and people
   2.214 +studying animal behavior---make assumptions about what it is that
   2.215 +animals or humans do, for instance make assumptions about what vision
   2.216 +is for, or assumptions about what motivation is and how motivation
   2.217 +works, or assumptions about how learning works, and then they try ---
   2.218 +the AI people try --- to model [or] build systems that perform those
   2.219 +assumed functions. So if you get the /functions/ wrong, then even if
   2.220 +you implement some of the functions that you're trying to implement,
   2.221 +they won't necessarily perform the tasks that the initial objective
   2.222 +was to imitate, for instance the tasks that humans, and nest-building
   2.223 +birds, and monkeys and so on can perform. 
   2.224 +
   2.225 +** Example: Vision is not just about finding surfaces, but about finding affordances.
   2.226 +[11:09] I'll give you a simple example --- well, maybe not so simple,
   2.227 +but --- It's often assumed that the function of vision in humans (and
   2.228 +in other animals with good eyesight and so on) is to take in optical
    2.229 +information that hits the retina and forms into (maybe changing
   2.230 +--- or, really, in our case definitely changing) patterns of
   2.231 +illumination where there are sensory receptors that detect those
   2.232 +patterns, and then somehow from that information (plus maybe other
   2.233 +information gained from head movement or from comparisons between two
   2.234 +eyes) to work out what there was in the environment that produced
   2.235 +those patterns, and that is often taken to mean \ldquo{}where were the
   2.236 +surfaces off which the light bounced before it came to me\rdquo{}. So
   2.237 +you essentially think of the task of the visual system as being to
   2.238 +reverse the image formation process: so the 3D structure's there, the
   2.239 +lens causes the image to form in the retina, and then the brain goes
   2.240 +back to a model of that 3D structure there. That's a very plausible
   2.241 +theory about vision, and it may be that that's a /subset/ of what
   2.242 +human vision does, but I think James Gibson pointed out that that kind
   2.243 +of thing is not necessarily going to be very useful for an organism,
   2.244 +and it's very unlikely that that's the main function of perception in
   2.245 +general, namely to produce some physical description of what's out
   2.246 +there.
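          +
          +*Editor's note:* The \ldquo{}reverse the image formation
          +process\rdquo{} view can be put in symbols (the symbols are mine,
          +not Sloman's). Under an ideal pinhole camera with focal length
          +\(f\), a scene point \((X, Y, Z)\) projects to the image point
          +
          +\begin{equation}
          +(x, y) = \left(\frac{fX}{Z},\ \frac{fY}{Z}\right),
          +\end{equation}
          +
          +and vision-as-inverse-optics is the attempt to recover
          +\((X, Y, Z)\) from \((x, y)\) --- an underdetermined problem,
          +since every scene point on the ray through \((x, y)\) produces the
          +same image point.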
   2.247 +
   2.248 +[12:37] What does an animal /need/? It needs to know what it can do,
   2.249 +what it can't do, what the consequences of its actions will be
   2.250 +.... so, he introduced the word /affordance/, so from his point of
    2.251 +view, the functions of vision, of perception, are to inform the organism
   2.252 +of what the /affordances/ are for action, where that would mean what
   2.253 +the animal, /given/ its morphology (what it can do with its mouth, its
   2.254 +limbs, and so on, and the ways it can move) what it can do, what its
   2.255 +needs are, what the obstacles are, and how the environment supports or
   2.256 +obstructs those possible actions.
   2.257 +
   2.258 +[13:15] And that's a very different collection of information
   2.259 +structures that you need from, say, \ldquo{}where are all the
   2.260 +surfaces?\rdquo{}: if you've got all the surfaces, /deriving/ the
   2.261 +affordances would still be a major task. So, if you think of the
   2.262 +perceptual system as primarily (for biological organisms) being
   2.263 +devices that provide information about affordances and so on, then the
   2.264 +tasks look very different. And most of the people working, doing
   2.265 +research on computer vision in robots, I think haven't taken all that
   2.266 +on board, so they're trying to get machines to do things which, even
   2.267 +if they were successful, would not make the robots very intelligent
   2.268 +(and in fact, even the ones they're trying to do are not really easy
   2.269 +to do, and they don't succeed very well--- although, there's progress;
   2.270 +I shouldn't disparage it too much.)
   2.271 +
   2.272 +** Online and offline intelligence
   2.273 +
   2.274 +[14:10] It gets more complex as animals get more sophisticated. So, I
   2.275 +like to make a distinction between online intelligence and offline
   2.276 +intelligence. So, for example, if I want to pick something up --- like
   2.277 +this leaf <he plucks a leaf from the table> --- I was able to select
   2.278 +it from all the others in there, and while moving my hand towards it,
   2.279 +I was able to guide its trajectory, making sure it was going roughly
   2.280 +in the right direction --- as opposed to going out there, which
    2.281 +wouldn't have enabled me to pick it up --- and these two fingers ended
   2.282 +up with a portion of the leaf between them, so that I was able to tell
   2.283 +when I'm ready to do that <he clamps the leaf between two fingers>
   2.284 +and at that point, I clamped my fingers and then I could pick up the
   2.285 +leaf. 
   2.286 +
    2.287 +[14:54] Whereas --- and that's an example of online intelligence:
   2.288 +during the performance of an action (both from the stage where it's
   2.289 +initiated, and during the intermediate stages, and where it's
   2.290 +completed) I'm taking in information relevant to controlling all those
   2.291 +stages, and that relevant information keeps changing. That means I
   2.292 +need stores of transient information which gets discarded almost
   2.293 +immediately and replaced or something. That's online intelligence. And
   2.294 +there are many forms; that's just one example, and Gibson discussed
   2.295 +quite a lot of examples which I won't try to replicate now.
   2.296 +
   2.297 +[15:30] But in offline intelligence, you're not necessarily actually
   2.298 +/performing/ the actions when you're using your intelligence; you're
   2.299 +thinking about /possible/ actions. So, for instance, I could think
   2.300 +about how fast or by what route I would get back to the lecture room
   2.301 +if I wanted to [get to the next talk] or something. And I know where
   2.302 +the door is, roughly speaking, and I know roughly which route I would
    2.303 +take --- when I go out, whether I should go to the left or to the right --- because
    2.304 +I've stored information about where the spaces are, where the
    2.305 +buildings are, where the door was that we came out of --- but in using
   2.306 +that information to think about that route, I'm not actually
   2.307 +performing the action. I'm not even /simulating/ it in detail: the
   2.308 +precise details of direction and speed and when to clamp my fingers,
   2.309 +or when to contract my leg muscles when walking, are all irrelevant to
   2.310 +thinking about a good route, or thinking about the potential things
   2.311 +that might happen on the way. Or what would be a good place to meet
   2.312 +someone who I think [for an acquaintance in particular] --- [barber]
   2.313 +or something --- I don't necessarily have to work out exactly /where/
   2.314 +the person's going to stand, or from what angle I would recognize
   2.315 +them, and so on.
   2.316 +
   2.317 +[16:46] So, offline intelligence --- which I think became not just a
   2.318 +human competence; I think there are other animals that have aspects of
   2.319 +it: Squirrels are very impressive as you watch them. Gray squirrels at
   2.320 +any rate, as you watch them defeating squirrel-proof birdfeeders, seem
   2.321 +to have a lot of that [offline intelligence], as well as the online
   2.322 +intelligence when they eventually perform the action they've worked
   2.323 +out [] that will get them to the nuts. 
   2.324 +
   2.325 +[17:16] And I think that what happened during our evolution is that
   2.326 +mechanisms for acquiring and processing and storing and manipulating
   2.327 +information that is more and more remote from the performance of
   2.328 +actions developed. An example is taking in information about where
   2.329 +locations are that you might need to go to infrequently: There's a
   2.330 +store of a particular type of material that's good for building on
   2.331 +roofs of houses or something out around there in some
   2.332 +direction. There's a good place to get water somewhere in another
   2.333 +direction. There are people that you'd like to go and visit in
   2.334 +another place, and so on. 
   2.335 +
   2.336 +[17:59] So taking in information about an extended environment and
   2.337 +building it into a structure that you can make use of for different
   2.338 +purposes is another example of offline intelligence. And when we do
   2.339 +that, we sometimes use only our brains, but in modern times, we also
   2.340 +learned how to make maps on paper and walls and so on. And it's not
   2.341 +clear whether the stuff inside our heads has the same structures as
   2.342 +the maps we make on paper: the maps on paper have a different
   2.343 +function; they may be used to communicate with others, or meant for
   2.344 +/looking/ at, whereas the stuff in your head you don't /look/ at; you
   2.345 +use it in some other way.
   2.346 +
   2.347 +[18:46] So, what I'm getting at is that there's a great deal of human
   2.348 +intelligence (and animal intelligence) which is involved in what's
   2.349 +possible in the future, what exists in distant places, what might have
   2.350 +happened in the past (sometimes you need to know why something is as
   2.351 +it is, because that might be relevant to what you should or shouldn't
   2.352 +do in the future, and so on), and I think there was something about
   2.353 +human evolution that extended that offline intelligence way beyond
   2.354 +that of animals. And I don't think it was /just/ human language, (but
   2.355 +human language had something to do with it) but I think there was
   2.356 +something else that came earlier than language which involves the
   2.357 +ability to use your offline intelligence to discover something that
   2.358 +has a rich mathematical structure. 
   2.359 +
   2.360 +** Example: Even toddlers use sophisticated geometric knowledge
    2.361 +<<example-gap>>
   2.362 +[19:44] I'll give you a simple example: if you look through a gap, you
   2.363 +can see something that's on the other side of the gap. Now, you
   2.364 +/might/ see what you want to see, or you might see only part of it. If
   2.365 +you want to see more of it, which way would you move? Well, you could
   2.366 +either move /sideways/, and see through the gap---and see it roughly
   2.367 +the same amount but a different part of it [if it's a ????], or you
   2.368 +could move /towards/ the gap and then your view will widen as you
   2.369 +approach the gap. Now, there's a bit of mathematics in there, insofar
   2.370 +as you are implicitly assuming that information travels in straight
   2.371 +lines, and as you go closer to a gap, the straight lines that you can
   2.372 +draw from where you are through the gap, widen as you approach that
   2.373 +gap. Now, there's a kind of theorem of Euclidean geometry in there
   2.374 +which I'm not going to try to state very precisely (and as far as I
   2.375 +know, wasn't stated explicitly in Euclidean geometry) but it's
   2.376 +something every toddler--- human toddler---learns. (Maybe other
   2.377 +animals also know it, I don't know.) But there are many more things,
   2.378 +actions to perform, to get you more information about things, actions
   2.379 +to perform to conceal information from other people, actions that will
   2.380 +enable you to operate, to act on a rigid object in one place in order
   2.381 +to produce an effect on another place. So, there's a lot of stuff that
   2.382 +involves lines and rotations and angles and speeds and so on that I
   2.383 +think humans (maybe, to a lesser extent, other animals) develop the
   2.384 +ability to think about in a generic way. That means that you could
   2.385 +take out the generalizations from the particular contexts and then
    2.386 +re-use them in new contexts in ways that I think are not yet
   2.387 +represented at all in AI and in theories of human learning in any []
   2.388 +way --- although some people are trying to study learning of mathematics.
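          +
          +*Editor's note:* The gap example can be stated as a small theorem
          +(the labels \(w\) and \(d\) are mine, not Sloman's): if the gap has
          +width \(w\) and you stand centered a distance \(d\) in front of it,
          +the straight lines through the gap span a visual angle
          +
          +\begin{equation}
          +\theta(d) = 2\arctan\!\left(\frac{w}{2d}\right),
          +\end{equation}
          +
          +which grows as \(d\) shrinks --- moving /towards/ the gap widens
          +the view, while moving /sideways/ only changes which part of the
          +scene you see.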
   2.389 +
   2.390 +* Animal intelligence
   2.391 +
   2.392 +** The priority is /cataloguing/ what competences have evolved, not ranking them.
   2.393 +[22:03] I wasn't going to challenge the claim that humans can do more
   2.394 +sophisticated forms of [tracking], just to mention that there are some
   2.395 +things that other animals can do which are in some ways comparable,
   2.396 +and some ways superior to [things] that humans can do. In particular,
   2.397 +there are species of birds and also, I think, some rodents ---
   2.398 +squirrels, or something --- I don't know enough about the variety ---
   2.399 +that can hide nuts and remember where they've hidden them, and go back
   2.400 +to them. And there have been tests which show that some birds are able
   2.401 +to hide tens --- you know, [eighteen] or something nuts --- and to
   2.402 +remember which ones have been taken, which ones haven't, and so
   2.403 +on. And I suspect most humans can't do that. I wouldn't want to say
   2.404 +categorically that maybe we couldn't, because humans are very
   2.405 +[varied], and also [a few] people can develop particular competences
   2.406 +through training. But it's certainly not something I can do.
   2.407 +
   2.408 +
   2.409 +** AI can be used to test philosophical theories
   2.410 +[23:01] But I also would like to say that I am not myself particularly
   2.411 +interested in trying to align animal intelligences according to any
   2.412 +kind of scale of superiority; I'm just trying to understand what it
   2.413 +was that biological evolution produced, and how it works, and I'm
   2.414 +interested in AI /mainly/ because I think that when one comes up with
   2.415 +theories about how these things work, one needs to have some way of
   2.416 +testing the theory. And AI provides ways of implementing and testing
   2.417 +theories that were not previously available: Immanuel Kant was trying
   2.418 +to come up with theories about how minds work, but he didn't have any
   2.419 +kind of a mechanism that he could build to test his theory about the
   2.420 +nature of mathematical knowledge, for instance, or how concepts were
   2.421 +developed from babyhood onward. Whereas now, if we do develop a
   2.422 +theory, we have a criterion of adequacy, namely it should be precise
    2.423 +enough and rich enough and detailed enough to enable a model to be
   2.424 +built. And then we can see if it works. 
   2.425 +
   2.426 +[24:07] If it works, it doesn't mean we've proved that the theory is
   2.427 +correct; it just shows it's a candidate. And if it doesn't work, then
   2.428 +it's not a candidate as it stands; it would need to be modified in
   2.429 +some way.
   2.430 +
   2.431 +* Is abstract general intelligence feasible?
   2.432 +
   2.433 +** It's misleading to compare the brain and its neurons to a computer made of transistors
   2.434 +[24:27] I think there's a lot of optimism based on false clues:
   2.435 +the...for example, one of the false clues is to count the number of
   2.436 +neurons in the brain, and then talk about the number of transistors
   2.437 +you can fit into a computer or something, and then compare them. It
    2.438 +might turn out that that comparison is misleading: the study of the
    2.439 +way synapses work leads some people to say that a typical synapse []
    2.440 +in the human brain has computational power comparable to the Internet
    2.441 +of a few years ago, because of the number of different molecules that
    2.442 +are doing things, the variety of types of things that are being done
    2.443 +in those molecular interactions, and the speed at which they happen
    2.444 +--- if you somehow count up the number of operations per second or
    2.445 +something, then you get these comparable figures.
   2.446 +
   2.447 +** For example, brains may rely heavily on chemical information processing
   2.448 +Now even if the details aren't right, there may just be a lot of
   2.449 +information processing that...going on in brains at the /molecular/
   2.450 +level, not the neural level. Then, if that's the case, the processing
   2.451 +units will be orders of magnitude larger in number than the number of
   2.452 +neurons. And it's certainly the case that all the original biological
   2.453 +forms of information processing were chemical; there weren't brains
   2.454 +around, and still aren't in most microbes. And even when humans grow
   2.455 +their brains, the process of starting from a fertilized egg and
   2.456 +producing this rich and complex structure is, for much of the time,
   2.457 +under the control of chemical computations, chemical information
   2.458 +processing---of course combined with physical sorts of materials and
   2.459 +energy and so on as well.
   2.460 +
   2.461 +[26:25] So it would seem very strange if all that capability was
   2.462 +something thrown away when you've got a brain and all the information
   2.463 +processing, the [challenges that were handled in making a brain],
   2.464 +... This is handwaving on my part; I'm just saying that we /might/
   2.465 +learn that what brains do is not what we think they do, and that
   2.466 +problems of replicating them are not what we think they are, solely in
    2.467 +terms of numerical estimates of time scales, the number of components,
   2.468 +and so on.
   2.469 +
   2.470 +** Brain algorithms may simply be optimized for certain kinds of information processing other than bit manipulations
   2.471 +[26:56] But apart from that, the other basis of skepticism concerns
   2.472 +how well we understand what the problems are. I think there are many
   2.473 +people who try to formalize the problems of designing an intelligent
   2.474 +system in terms of streams of information thought of as bit streams or
    2.475 +collections of bit streams, and they think of the problems of
   2.476 +intelligence as being the construction or detection of patterns in
   2.477 +those, and perhaps not just detection of patterns, but detection of
   2.478 +patterns that are useable for sending /out/ streams to control motors
   2.479 +and so on in order to []. And that way of conceptualizing the problem
    2.480 +may lead on the one hand to oversimplification, so that the things
    2.481 +that /would/ be achieved, if those goals were achieved, may be much
    2.482 +simpler --- in some ways inadequate for the replication of human
    2.483 +intelligence, or the matching of human intelligence---or for that
    2.484 +matter, squirrel intelligence. But in another way, it may also make
   2.485 +the problem harder: it may be that some of the kinds of things that
    2.486 +biological evolution has achieved can't be done that way. And one of
    2.487 +the ways that might turn out to be the case is not that it's
    2.488 +impossible in principle to do some of the information processing on
    2.489 +artificial computers-based-on-transistors and other bit-manipulating
    2.490 +[]---but it may just be that the computational complexity of the
    2.491 +processes of finding solutions to complex problems is
    2.492 +much greater, and therefore you might need a much larger universe than
    2.493 +we have available in order to do things.
   2.494 +
   2.495 +** Example: find the shortest path by dangling strings
   2.496 +[28:55] Then if the underlying mechanisms were different, the
   2.497 +information processing mechanisms, they might be better tailored to
   2.498 +particular sorts of computation. There's a [] example, which is
   2.499 +finding the shortest route if you've got a collection of roads, and
   2.500 +they may be curved roads, and lots of tangled routes from A to B to C,
   2.501 +and so on. And if you start at A and you want to get to Z --- a place
   2.502 +somewhere on that map --- the process of finding the shortest route
   2.503 +will involve searching through all these different possibilities and
   2.504 +rejecting some that are longer than others and so on. But if you make
   2.505 +a model of that map out of string, where these strings are all laid
    2.506 +out on the map and so have the lengths of the routes, then if you
    2.507 +hold the two knots in the string --- it's a network of string --- which
    2.508 +correspond to the start point and end point, and /pull/, then the
   2.509 +bits of string that you're left with in a straight line will give you
   2.510 +the shortest route, and that process of pulling just gets you the
   2.511 +solution very rapidly in a parallel computation, where all the others
   2.512 +just hang by the wayside, so to speak.
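          +
          +*Editor's note:* For contrast, here is the kind of sequential
          +search that the string computer sidesteps --- a minimal sketch of
          +Dijkstra's algorithm in Python, with a made-up road network (the
          +place names and road lengths are mine, for illustration only). The
          +program must consider routes one at a time and reject the longer
          +ones; pulling the strings "evaluates" every route at once.
          +
          +#+BEGIN_SRC python
          +import heapq
          +
          +def shortest_path_length(graph, start, goal):
          +    """Dijkstra's algorithm: grow outward from `start`, always
          +    expanding the closest unsettled node, discarding longer
          +    alternatives as they are found."""
          +    queue = [(0.0, start)]        # (distance so far, node)
          +    best = {start: 0.0}
          +    while queue:
          +        dist, node = heapq.heappop(queue)
          +        if node == goal:
          +            return dist
          +        if dist > best.get(node, float("inf")):
          +            continue              # stale entry; a shorter route won
          +        for neighbor, length in graph[node]:
          +            new_dist = dist + length
          +            if new_dist < best.get(neighbor, float("inf")):
          +                best[neighbor] = new_dist
          +                heapq.heappush(queue, (new_dist, neighbor))
          +    return float("inf")           # no route exists
          +
          +# A small tangle of roads (invented for illustration);
          +# the numbers are road lengths.
          +roads = {
          +    "A": [("B", 1.0), ("C", 4.0)],
          +    "B": [("A", 1.0), ("C", 1.5), ("Z", 5.0)],
          +    "C": [("A", 4.0), ("B", 1.5), ("Z", 1.0)],
          +    "Z": [("B", 5.0), ("C", 1.0)],
          +}
          +print(shortest_path_length(roads, "A", "Z"))  # 3.5, via B and C
          +#+END_SRC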
   2.513 +
   2.514 +** In sum, we know surprisingly little about the kinds of problems that evolution solved, and the manner in which they were solved.
   2.515 +[30:15] Now, I'm not saying brains can build networks of string and
   2.516 +pull them or anything like that; that's just an illustration of how if
   2.517 +you have the right representation, correctly implemented---or suitably
   2.518 +implemented---for a problem, then you can avoid very combinatorially
   2.519 +complex searches, which will maybe grow exponentially with the number
   2.520 +of components in your map, whereas with this thing, the time it takes
   2.521 +won't depend on how many strings you've [got on the map]; you just
   2.522 +pull, and it will depend only on the shortest route that exists in
   2.523 +there. Even if that shortest route wasn't obvious on the original map.
   2.524 +
   2.525 +
   2.526 +[30:59] So that's a rather long-winded way of formulating the
   2.527 +conjecture which---of supporting, a roundabout way of supporting the
   2.528 +conjecture that there may be something about the way molecules perform
   2.529 +computations where they have the combination of continuous change as
   2.530 +things move through space and come together and move apart, and
   2.531 +whatever --- and also snap into states that then persist, so [as you
   2.532 +learn from] quantum mechanics, you can have stable molecular
   2.533 +structures which are quite hard to separate, and then in catalytic
   2.534 +processes you can separate them, or extreme temperatures, or strong
   2.535 +forces, but they may nevertheless be able to move very rapidly in some
   2.536 +conditions in order to perform computations.
   2.537 +
   2.538 +[31:49] Now there may be things about that kind of structure that
   2.539 +enable searching for solutions to /certain/ classes of problems to be
    2.540 +done much more efficiently (by brains) than anything we could do with
   2.541 +computers. It's just an open question.
   2.542 +
   2.543 +[32:04] So it /might/ turn out that we need new kinds of technology
   2.544 +that aren't on the horizon in order to replicate the functions that
   2.545 +animal brains perform ---or, it might not. I just don't know. I'm not
   2.546 +claiming that there's strong evidence for that; I'm just saying that
   2.547 +it might turn out that way, partly because I think we know less than
   2.548 +many people think we know about what biological evolution achieved.
   2.549 +
   2.550 +[32:28] There are some other possibilities: we may just find out that
   2.551 +there are shortcuts no one ever thought of, and it will all happen
   2.552 +much more quickly---I have an open mind; I'd be surprised, but it
   2.553 +could turn up. There /is/ something that worries me much more than the
   2.554 +singularity that most people talk about, which is machines achieving
   2.555 +human-level intelligence and perhaps taking over [the] planet or
   2.556 +something. There's what I call the /singularity of cognitive
   2.557 +catch-up/ ...
   2.558 +
   2.559 +* A singularity of cognitive catch-up
   2.560 +
   2.561 +** What if it will take a lifetime to learn enough to make something new?
   2.562 +... SCC, singularity of cognitive catch-up, which I think we're close
   2.563 +to, or maybe have already reached---I'll explain what I mean by
   2.564 +that. One of the products of biological evolution---and this is one of
   2.565 +the answers to your earlier questions which I didn't get on to---is
   2.566 +that humans have not only the ability to make discoveries that none of
   2.567 +their ancestors have ever made, but to shorten the time required for
   2.568 +similar achievements to be reached by their offspring and their
    2.569 +descendants. So once we, for instance, worked out ways of doing complex
   2.570 +computations, or ways of building houses, or ways of finding our way
   2.571 +around, we don't need...our children don't need to work it out for
   2.572 +themselves by the same lengthy trial and error procedure; we can help
   2.573 +them get there much faster.
   2.574 +
   2.575 +Okay, well, what I've been referring to as the singularity of
   2.576 +cognitive catch-up depends on the fact that---fairly obvious, and it's
    2.577 +often been commented on---that in the case of humans, it's not necessary
    2.578 +for each generation to learn what previous generations learned /in the
    2.579 +same way/. And we can speed up learning: once something has been
    2.580 +learned, [it is able to] be learned by new people. And that has meant
    2.581 +that the social processes that support that kind of education of the
    2.582 +young can enormously accelerate things: what would have taken perhaps
    2.583 +thousands [or] millions of years for evolution to produce can happen in
    2.584 +a much shorter time.
   2.585 +
   2.586 +
   2.587 +[34:54] But here's the catch: in order for a new advance to happen ---
   2.588 +so for something new to be discovered that wasn't there before, like
   2.589 +Newtonian mechanics, or the theory of relativity, or Beethoven's music
   2.590 +or [style] or whatever --- the individuals have to have traversed a
   2.591 +significant amount of what their ancestors have learned, even if they
   2.592 +do it much faster than their ancestors, to get to the point where they
   2.593 +can see the gaps, the possibilities for going further than their
   2.594 +ancestors, or their parents or whatever, have done.
   2.595 +
   2.596 +[35:27] Now in the case of knowledge of science, mathematics,
   2.597 +philosophy, engineering and so on, there's been a lot of accumulated
   2.598 +knowledge. And humans are living a /bit/ longer than they used to, but
   2.599 +they're still living for [whatever it is], a hundred years, or for
   2.600 +most people, less than that. So you can imagine that there might come
   2.601 +a time when in a normal human lifespan, it's not possible for anyone
   2.602 +to learn enough to understand the scope and limits of what's already
   2.603 +been achieved in order to see the potential for going beyond it and to
   2.604 +build on what's already been done to make that...those future steps.
   2.605 +
   2.606 +[36:10] So if we reach that stage, we will have reached the
   2.607 +singularity of cognitive catch-up because the process of education
   2.608 +that enables individuals to learn faster than their ancestors did is
   2.609 +the catching-up process, and it may just be that we at some point
   2.610 +reach a point where catching up can only happen within a lifetime of
   2.611 +an individual, and after that they're dead and they can't go
   2.612 +beyond. And I have some evidence that there's a lot of that around
   2.613 +because I see a lot of people coming up with what /they/ think of as
   2.614 +new ideas which they've struggled to come up with, but actually they
   2.615 +just haven't taken in some of what was...some of what was done [] by
   2.616 +other people, in other places before them. And I think that despite
   2.617 +the availability of search engines which make it /easier/ for people
   2.618 +to get the information---for instance, when I was a student, if I
   2.619 +wanted to find out what other people had done in the field, it was a
    2.620 +laborious process---going to the library, getting books, and so on
    2.621 +---whereas now, I can often do things in seconds that would have taken
   2.622 +hours. So that means that if seconds [are needed] for that kind of
   2.623 +work, my lifespan has been extended by a factor of ten or
   2.624 +something. So maybe that /delays/ the singularity, but it may not
   2.625 +delay it enough. But that's an open question; I don't know. And it may
   2.626 +just be that in some areas, this is more of a problem than others. For
   2.627 +instance, it may be that in some kinds of engineering, we're handing
    2.628 +over more and more of the work to machines anyway and they can go on
   2.629 +doing it.  So for instance, most of the production of computers now is
    2.630 +done by computer-controlled machines---although some of the design
    2.631 +work is done by humans---a lot of the /detail/ of the design is done by
   2.632 +computers, and they produce the next generation, which then produces
   2.633 +the next generation, and so on.
   2.634 +
    2.635 +[37:57] I don't know if humans can go on having major advances ---
    2.636 +it'll be kind of sad if we can't.
   2.637 +
   2.638 +* Spatial reasoning: a difficult problem
   2.639 +
   2.640 +[38:15] Okay, well, there are different problems [ ] mathematics, and
    2.641 +they have to do with properties. So for instance, a lot of mathematics
    2.642 +can be expressed in terms of logical structures or algebraic
    2.643 +structures, and those are pretty well suited for manipulation and...on
   2.644 +computers, and if a problem can be specified using the
   2.645 +logical/algebraic notation, and the solution method requires creating
   2.646 +something in that sort of notation, then computers are pretty good,
   2.647 +and there are lots of mathematical tools around---there are theorem
   2.648 +provers and theorem checkers, and all kinds of things, which couldn't
   2.649 +have existed fifty, sixty years ago, and they will continue getting
   2.650 +better.
   2.651 +
   2.652 +
   2.653 +But there was something that I was [[example-gap][alluding to earlier]] when I gave the
   2.654 +example of how you can reason about what you will see by changing your
   2.655 +position in relation to a door, where what you are doing is using your
    2.656 +grasp of spatial structures and how, as one spatial relationship
    2.657 +changes --- namely, you come closer to the door or move sideways,
    2.658 +parallel to the wall, or whatever --- other spatial relationships change
    2.659 +in parallel, so the lines from your eyes through to other parts of
    2.660 +the...parts of the room on the other side of the doorway change,
    2.661 +spread out more as you go towards the doorway, and as you move
   2.662 +sideways, they don't spread out differently, but focus on different
   2.663 +parts of the internal ... that they access different parts of the
   2.664 +... of the room.
   2.665 +
   2.666 +Now, those are examples of ways of thinking about relationships and
   2.667 +changing relationships which are not the same as thinking about what
   2.668 +happens if I replace this symbol with that symbol, or if I substitute
   2.669 +this expression in that expression in a logical formula.  And at the
   2.670 +moment, I do not believe that there is anything in AI amongst the
   2.671 +mathematical reasoning community, the theorem-proving community, that
   2.672 +can model the processes that go on when a young child starts learning
   2.673 +to do Euclidean geometry and is taught things about---for instance, I
   2.674 +can give you a proof that the angles of any triangle add up to a
   2.675 +straight line, 180 degrees. 
   2.676 +
   2.677 +** Example: Spatial proof that the angles of any triangle add up to a half-circle
    2.678 +There are standard proofs which involve starting with one triangle,
    2.679 +then adding a line parallel to the base. One of my former students,
    2.680 +Mary Pardoe, came up with [another proof], which I will demonstrate with
    2.681 +this <he holds up a pen> --- can you see it? If I have a triangle here that's got
   2.682 +three sides, if I put this thing on it, on one side --- let's say the
   2.683 +bottom---I can rotate it until it lies along the second...another
    2.684 +side, and then maybe move it up to the other end. Then I can rotate
   2.685 +it again, until it lies on the third side, and move it back to the
   2.686 +other end. And then I'll rotate it again and it'll eventually end up
   2.687 +on the original side, but it will have changed the direction it's
   2.688 +pointing in --- and it won't have crossed over itself so it will have
   2.689 +gone through a half-circle, and that says that the three angles of a
   2.690 +triangle add up to the rotations of half a circle, which is a
   2.691 +beautiful kind of proof and almost anyone can understand it. Some
   2.692 +mathematicians don't like it, because they say it hides some of the
   2.693 +assumptions, but nevertheless, as far as I'm concerned, it's an
   2.694 +example of a human ability to do reasoning which, once you've
   2.695 +understood it, you can see will apply to any triangle --- it's got to
   2.696 +be a planar triangle --- not a triangle on a globe, because then the
   2.697 +angles can add up to more than ... you can have three /right/ angles
    2.698 +if you have an equator...a line on the equator, and a line going up
    2.699 +to the north pole of the earth, and then you have a right angle and
   2.700 +then another line going down to the equator, and you have a right
   2.701 +angle, right angle, right angle, and they add up to more than a
   2.702 +straight line. But that's because the triangle isn't in the plane,
   2.703 +it's on a curved surface. In fact, that's one of the
   2.704 +differences...definitional differences you can take between planar and
   2.705 +curved surfaces: how much the angles of a triangle add up to. But our
   2.706 +ability to /visualize/ and notice the generality in that process, and
   2.707 +see that you're going to be able to do the same thing using triangles
   2.708 +that stretch in all sorts of ways, or if it's a million times as
   2.709 +large, or if it's made...you know, written on, on...if it's drawn in
   2.710 +different colors or whatever --- none of that's going to make any
   2.711 +difference to the essence of that process. And that ability to see
   2.712 +the commonality in a spatial structure which enables you to draw some
   2.713 +conclusions with complete certainty---subject to the possibility that
   2.714 +sometimes you make mistakes, but when you make mistakes, you can
   2.715 +discover them, as has happened in the history of geometrical theorem
   2.716 +proving. Imre Lakatos had a wonderful book called [[http://en.wikipedia.org/wiki/Proofs_and_Refutations][/Proofs and
   2.717 +Refutations/]] --- which I won't try to summarize --- but he has
   2.718 +examples: mistakes were made; that was because people didn't always
   2.719 +realize there were subtle subcases which had slightly different
   2.720 +properties, and they didn't take account of that. But once they're
   2.721 +noticed, you rectify that. 
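          +
          +*Editor's note:* In symbols (the symbols are mine): the pen turns
          +once through each interior angle, never crossing itself, and ends
          +up reversed, so for a planar triangle with angles \(\alpha, \beta,
          +\gamma\)
          +
          +\begin{equation}
          +\alpha + \beta + \gamma = \pi .
          +\end{equation}
          +
          +On a sphere of radius \(R\) the argument fails, and Girard's
          +theorem gives instead
          +
          +\begin{equation}
          +\alpha + \beta + \gamma = \pi + \frac{A}{R^{2}},
          +\end{equation}
          +
          +where \(A\) is the triangle's area --- Sloman's equator-and-pole
          +triangle covers one eighth of the sphere, \(A = \pi R^{2}/2\),
          +giving an angle sum of \(3\pi/2\): three right angles.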
   2.722 +
    2.723 +** Geometric results are fundamentally different from experimental results in chemistry or physics.
   2.724 +[43:28] But it's not the same as doing experiments in chemistry and
   2.725 +physics, where you can't be sure it'll be the same on [] or at a high
   2.726 +temperature, or in a very strong magnetic field --- with geometric
   2.727 +reasoning, in some sense you've got the full information in front of
   2.728 +you; even if you don't always notice an important part of it. So, that
   2.729 +kind of reasoning (as far as I know) is not implemented anywhere in a
   2.730 +computer. And most people who do research on trying to model
   2.731 +mathematical reasoning, don't pay any attention to that, because of
   2.732 +... they just don't think about it. They start from somewhere else,
   2.733 +maybe because of how they were educated. I was taught Euclidean
   2.734 +geometry at school. Were you?
   2.735 +
    2.736 +(Adam Ford: Yeah)
   2.737 +
   2.738 +Many people are not now. Instead they're taught set theory, and
   2.739 +logic, and arithmetic, and [algebra], and so on. And so they don't use
   2.740 +that bit of their brains, without which we wouldn't have built any of
   2.741 +the cathedrals, and all sorts of things we now depend on.
   2.742 +
   2.743 +* Is near-term artificial general intelligence likely? 
   2.744 +
   2.745 +** Two interpretations: a single mechanism for all problems, or many mechanisms unified in one program.
   2.746 +
   2.747 +[44:35] Well, this relates to what's meant by general. And when I
   2.748 +first encountered the AGI community, I thought that what they all
   2.749 +meant by general intelligence was /uniform/ intelligence ---
   2.750 +intelligence based on some common simple (maybe not so simple, but)
   2.751 +single powerful mechanism or principle of inference. And there are
   2.752 +some people in the community who are trying to produce things like
   2.753 +that, often in connection with algorithmic information theory and
   2.754 +computability of information, and so on. But there's another sense of
   2.755 +general which means that the system of general intelligence can do
   2.756 +lots of different things, like perceive things, understand language,
   2.757 +move around, make things, and so on --- perhaps even enjoy a joke;
   2.758 +that's something that's not nearly on the horizon, as far as I
   2.759 +know. Enjoying a joke isn't the same as being able to make laughing
   2.760 +noises. 
   2.761 +
   2.762 +Given, then, that there are these two notions of general
   2.763 +intelligence---there's one that looks for one uniform, possibly
   2.764 +simple, mechanism or collection of ideas and notations and algorithms,
   2.765 +that will deal with any problem that's solvable --- and the other
   2.766 +that's general in the sense that it can do lots of different things
   2.767 +that are combined into an integrated architecture (which raises lots
   2.768 +of questions about how you combine these things and make them work
   2.769 +together) and we humans, certainly, are of the second kind: we do all
   2.770 +sorts of different things, and other animals also seem to be of the
   2.771 +second kind, perhaps not as general as humans. Now, it may turn out
   2.772 +that in some near future time, who knows---decades, a few
    2.773 +decades---you'll be able to get machines that are capable of solving,
    2.774 +in a time that will depend on the nature of the problem, any
    2.775 +problem that is solvable, and they will be able to do it in some sort
   2.776 +of tractable time --- of course, there are some problems that are
   2.777 +solvable that would require a larger universe and a longer history
   2.778 +than the history of the universe, but apart from that constraint,
   2.779 +these machines will be able to do anything [].  But to be able to do
   2.780 +some of the kinds of things that humans can do, like the kinds of
   2.781 +geometrical reasoning where you look at the shape and you abstract
   2.782 +away from the precise angles and sizes and shapes and so on, and
   2.783 +realize there's something general here, as must have happened when our
    2.784 +ancestors first made the discoveries that were eventually put together
    2.785 +in Euclidean geometry.
   2.786 +
   2.787 +It may be that that requires mechanisms of a kind that we don't know
   2.788 +anything about at the moment. Maybe brains are using molecules and
   2.789 +rearranging molecules in some way that supports that kind of
   2.790 +reasoning. I'm not saying they are --- I don't know, I just don't see
   2.791 +any simple...any obvious way to map that kind of reasoning capability
   2.792 +onto what we currently do on computers. There is---and I just
   2.793 +mentioned this briefly beforehand---there is a kind of thing that's
   2.794 +sometimes thought of as a major step in that direction, namely you can
   2.795 +build a machine (or a software system) that can represent some
   2.796 +geometrical structure, and then be told about some change that's going
   2.797 +to happen to it, and it can predict in great detail what'll
   2.798 +happen. And this happens for instance in game engines, where you say
   2.799 +we have all these blocks on the table and I'll drop one other block,
   2.800 +and then [the thing] uses Newton's laws and properties of rigidity of
   2.801 +the parts and the elasticity and also stuff about geometries and space
   2.802 +and so on, to give you a very accurate representation of what'll
   2.803 +happen when this brick lands on this pile of things, [it'll bounce and
   2.804 +go off, and so on]. And, with more memory and more CPU power, you
   2.805 +can increase the accuracy --- but that's totally different from
   2.806 +looking at /one/ example, and working out what will happen in a whole
   2.807 +/range/ of cases at a higher level of abstraction, whereas the game
   2.808 +engine does it in great detail for /just/ this case, with /just/ those
   2.809 +precise things, and it won't even know what the generalizations are
   2.810 +that it's using that would apply to others []. So, in that sense, [we]
   2.811 +may get AGI --- artificial general intelligence --- pretty soon, but
   2.812 +it'll be limited in what it can do. And the other kind of general
   2.813 +intelligence, which combines all sorts of different things (including
   2.814 +human spatial and geometrical reasoning, and maybe other things, like
   2.815 +the ability to find things funny, and to appreciate artistic
   2.816 +features), may need forms of pattern-mechanism, and I have an open
   2.817 +mind about that.
   2.818 +
   2.819 +* Artificial General Intelligence impacts
   2.820 +
   2.821 +[49:53] Well, as far as the first type's concerned, it could be
   2.822 +useful for all kinds of applications --- there are people who worry
   2.823 +that where there's a system that has that type of intelligence, it
   2.824 +might in some sense take over control of the planet. Well, humans
   2.825 +often do stupid things, and they might do something stupid that
   2.826 +would lead to disaster, but I think it's more likely that there
   2.827 +would be other things [that] lead to disaster --- population
   2.828 +problems, using up all the resources, destroying ecosystems, and
   2.829 +whatever. But certainly it would go on being useful to have these
   2.830 +calculating devices. Now, as for the second kind, I don't know---if
   2.831 +we succeeded at putting together all the parts that we find in
   2.832 +humans, we might just make an artificial human, and then we might
   2.833 +have some of them as our friends, and some of them we might not
   2.834 +like, and some of them might become teachers or whatever, composers
   2.835 +--- but that raises a question: could they, in some sense, be
   2.836 +superior to us, in their learning capabilities, their understanding
   2.837 +of human nature, or maybe their wickedness or whatever? These are
   2.838 +all issues on which I expect the best science fiction writers would
   2.839 +give better answers than anything I could, but I did once fantasize,
   2.840 +[back] in 1978, that perhaps if we achieved that kind of thing, they
   2.841 +would be wise, and gentle, and kind, and realize that humans are an
   2.842 +inferior species that, you know, have some good features, so they'd
   2.843 +keep us in some kind of secluded...restrictive kind of environment,
   2.844 +keep us away from dangerous weapons, and so on, and find ways of
   2.845 +cohabiting with us. But that's just fantasy.
   2.846 +
   2.847 +Adam Ford: Awesome. Yeah, there's an interesting story /With Folded
   2.848 +Hands/ where [the computers] want to take care of us and want to
   2.849 +reduce suffering and end up lobotomizing everybody [but] keeping them
   2.850 +alive so as to reduce the suffering. 
   2.851 +
   2.852 +Aaron Sloman: Not all that different from /Brave New World/, where it
   2.853 +was done with drugs and so on, but different humans are given
   2.854 +different roles in that system, yeah.
   2.855 +
   2.856 +There's also /The Time Machine/, H.G. Wells, where ... in the
   2.857 +distant future, humans have split in two: the Eloi, I think they
   2.858 +were called, they lived underground, they were the [] ones, and
   2.859 +then---no, the Morlocks lived underground; the Eloi lived on the
   2.860 +surface; they were pleasant and pretty but not very bright, and so
   2.861 +on, and they were fed on by ...
   2.862 +
   2.863 +Adam Ford: [] in the future.
   2.864 +
   2.865 +Aaron Sloman: As I was saying, if you ask science fiction writers,
   2.866 +you'll probably come up with a wide variety of interesting answers. 
   2.867 +
   2.868 +Adam Ford: I certainly have; I've spoken to [] of Birmingham, and
   2.869 +Sean Williams, ... who else? 
   2.870 +
   2.871 +Aaron Sloman: Did you ever read a story by E.M. Forster called /The
   2.872 +Machine Stops/ --- very short story, it's [[http://archive.ncsa.illinois.edu/prajlich/forster.html][on the Internet somewhere]]
   2.873 +--- it's about a time when ... and this was written in about [1909],
   2.874 +so it's about ... over a hundred years ago ... people are in their
   2.875 +rooms, they sit in front of screens, and they type things, and they
   2.876 +communicate with one another that way, and they don't meet; they
   2.877 +have debates, and they give lectures to their audiences that way,
   2.878 +and then there's a woman whose son says \ldquo{}I'd like to see
   2.879 +you\rdquo{} and she says \ldquo{}What's the point? You've got me at
   2.880 +this point\rdquo{}, but he wants to come and talk to her --- I won't
   2.881 +tell you how it ends, but ...
   2.882 +
   2.883 +Adam Ford: Reminds me of the Internet.
   2.884 +
   2.885 +Aaron Sloman: Well, yes; he invented ... it was just extraordinary
   2.886 +that he was able to do that, before most of the components that we
   2.887 +need for it existed.
   2.888 +
   2.889 +Adam Ford: [Another person who did that] was Vernor Vinge [] /True
   2.890 +Names/. 
   2.891 +
   2.892 +Aaron Sloman: When was that written?
   2.893 +
   2.894 +Adam Ford: The seventies.
   2.895 +
   2.896 +Aaron Sloman: Okay, well, a lot of the technology was already around
   2.897 +then. The original bits of the Internet were working in about 1973.
   2.898 +In 1974, I was sitting at Sussex University trying to learn LOGO,
   2.899 +the programming language, to decide whether it was going to be
   2.900 +useful for teaching AI, and I was sitting [at a] paper teletype,
   2.901 +there was paper coming out, transmitting ten characters a second
   2.902 +from Sussex to the UCL computer lab by telegraph cable, from there
   2.903 +to somewhere in Norway via another cable, and from there by
   2.904 +satellite to California, to a computer at the Xerox [PARC] research
   2.905 +center where they had a LOGO system implemented, with someone I had
   2.906 +met previously in Edinburgh, Danny Bobrow, and he allowed me to have
   2.907 +access to this system. So there I was typing. And furthermore, it
   2.908 +was duplex typing, so every character I typed didn't show up on my
   2.909 +terminal until it had gone all the way there and echoed back, so I
   2.910 +would type, and the characters would come back four seconds later.
   2.911 +
   2.912 +[55:26] But that was the Internet, and I think Vernor Vinge was
   2.913 +writing after that kind of thing had already started, but I don't
   2.914 +know. Anyway.
   2.915 +
   2.916 +[55:41] Another...I mentioned H.G. Wells, /The Time Machine/. I
   2.917 +recently discovered, because [[http://en.wikipedia.org/wiki/David_Lodge_(author)][David Lodge]] had written a sort of
   2.918 +semi-novel about him, that he had invented Wikipedia, in advance --- he
   2.919 +had this notion of an encyclopedia that was free to everybody, and
   2.920 +everybody could contribute and [collaborate on it]. So, go to the
   2.921 +science fiction writers to find out the future --- well, a range of
   2.922 +possible futures.
   2.923 +
   2.924 +Adam Ford: Well, the thing is with science fiction writers, they
   2.925 +have to maintain some sort of interest for their readers; after all,
   2.926 +the science fiction which reaches us is the stuff that publishers
   2.927 +want to sell, and so there's a little bit of a bias towards making a
   2.928 +plot device there, and so the drama sort of appeals to our amygdala,
   2.929 +our lizard brain; we'll sort of stay there obviously to some
   2.930 +extent. But I think that they do come up with sort of amazing ideas;
   2.931 +I think it's worth trying to make these predictions; I think that we
   2.932 +should spend more time on strategic forecasting, I mean, take it seriously.
   2.933 +
   2.934 +Aaron Sloman: Well, I'm happy to leave that to others; I just want to
   2.935 +try to understand these problems that bother me about how things
   2.936 +work. And it may be that some would say that's irresponsible if I
   2.937 +don't think about what the implications will be. Well, understanding
   2.938 +how humans work /might/ enable us to make [artificial] humans --- I
   2.939 +suspect it won't happen in this century; I think it's going to be too difficult.