diff org/sloman-old.html @ 109:414a10d51d9f

stuff from dylan?
author Robert McIntyre <rlm@mit.edu>
date Tue, 03 Jun 2014 13:24:58 -0400
parents
children
line wrap: on
line diff
     1.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     1.2 +++ b/org/sloman-old.html	Tue Jun 03 13:24:58 2014 -0400
     1.3 @@ -0,0 +1,1348 @@
     1.4 +<?xml version="1.0" encoding="utf-8"?>
     1.5 +<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
     1.6 +               "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
     1.7 +<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
     1.8 +<head>
     1.9 +<title>Transcript of Aaron Sloman - Artificial Intelligence - Psychology - Oxford Interview</title>
    1.10 +<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
    1.11 +<meta name="title" content="Transcript of Aaron Sloman - Artificial Intelligence - Psychology - Oxford Interview"/>
    1.12 +<meta name="generator" content="Org-mode"/>
    1.13 +<meta name="generated" content="2013-10-04 18:49:53 UTC"/>
    1.14 +<meta name="author" content="Dylan Holmes"/>
    1.15 +<meta name="description" content=""/>
    1.16 +<meta name="keywords" content=""/>
    1.17 +<style type="text/css">
    1.18 + <!--/*--><![CDATA[/*><!--*/
    1.19 +  html { font-family: Times, serif; font-size: 12pt; }
    1.20 +  .title  { text-align: center; }
    1.21 +  .todo   { color: red; }
    1.22 +  .done   { color: green; }
    1.23 +  .tag    { background-color: #add8e6; font-weight:normal }
    1.24 +  .target { }
    1.25 +  .timestamp { color: #bebebe; }
    1.26 +  .timestamp-kwd { color: #5f9ea0; }
    1.27 +  .right  {margin-left:auto; margin-right:0px;  text-align:right;}
    1.28 +  .left   {margin-left:0px;  margin-right:auto; text-align:left;}
    1.29 +  .center {margin-left:auto; margin-right:auto; text-align:center;}
    1.30 +  p.verse { margin-left: 3% }
    1.31 +  pre {
    1.32 +	border: 1pt solid #AEBDCC;
    1.33 +	background-color: #F3F5F7;
    1.34 +	padding: 5pt;
    1.35 +	font-family: courier, monospace;
    1.36 +        font-size: 90%;
    1.37 +        overflow:auto;
    1.38 +  }
    1.39 +  table { border-collapse: collapse; }
    1.40 +  td, th { vertical-align: top;  }
    1.41 +  th.right  { text-align:center;  }
    1.42 +  th.left   { text-align:center;   }
    1.43 +  th.center { text-align:center; }
    1.44 +  td.right  { text-align:right;  }
    1.45 +  td.left   { text-align:left;   }
    1.46 +  td.center { text-align:center; }
    1.47 +  dt { font-weight: bold; }
    1.48 +  div.figure { padding: 0.5em; }
    1.49 +  div.figure p { text-align: center; }
    1.50 +  div.inlinetask {
    1.51 +    padding:10px;
    1.52 +    border:2px solid gray;
    1.53 +    margin:10px;
    1.54 +    background: #ffffcc;
    1.55 +  }
    1.56 +  textarea { overflow-x: auto; }
    1.57 +  .linenr { font-size:smaller }
    1.58 +  .code-highlighted {background-color:#ffff00;}
    1.59 +  .org-info-js_info-navigation { border-style:none; }
    1.60 +  #org-info-js_console-label { font-size:10px; font-weight:bold;
    1.61 +                               white-space:nowrap; }
    1.62 +  .org-info-js_search-highlight {background-color:#ffff00; color:#000000;
    1.63 +                                 font-weight:bold; }
    1.64 +  /*]]>*/-->
    1.65 +</style>
    1.66 +<link rel="stylesheet" type="text/css" href="../css/sloman.css" /> 
    1.67 +<script type="text/javascript">
    1.68 +<!--/*--><![CDATA[/*><!--*/
    1.69 + function CodeHighlightOn(elem, id)
    1.70 + {
    1.71 +   var target = document.getElementById(id);
    1.72 +   if(null != target) {
    1.73 +     elem.cacheClassElem = elem.className;
    1.74 +     elem.cacheClassTarget = target.className;
    1.75 +     target.className = "code-highlighted";
    1.76 +     elem.className   = "code-highlighted";
    1.77 +   }
    1.78 + }
    1.79 + function CodeHighlightOff(elem, id)
    1.80 + {
    1.81 +   var target = document.getElementById(id);
    1.82 +   if(elem.cacheClassElem)
    1.83 +     elem.className = elem.cacheClassElem;
    1.84 +   if(elem.cacheClassTarget)
    1.85 +     target.className = elem.cacheClassTarget;
    1.86 + }
    1.87 +/*]]>*///-->
    1.88 +</script>
    1.89 +
    1.90 +</head>
    1.91 +<body>
    1.92 +
    1.93 +
    1.94 +<div id="content">
    1.95 +<h1 class="title">Transcript of Aaron Sloman - Artificial Intelligence - Psychology - Oxford Interview</h1>
    1.96 +
    1.97 +
    1.98 +<blockquote>
    1.99 +
   1.100 +
   1.101 +
   1.102 +
   1.103 +
   1.104 +
   1.105 +
   1.106 +
   1.107 +
   1.108 +
   1.109 +
   1.110 +
   1.111 +
   1.112 +
   1.113 +
   1.114 +
   1.115 +<p>
   1.116 +<b>Editor's note:</b> This is a working draft transcript which I made of
   1.117 +<a href="http://www.youtube.com/watch?feature=player_detailpage&amp;v=iuH8dC7Snno">this nice interview</a> of Aaron Sloman. Having just finished one
   1.118 +iteration of transcription, I still need to go in and clean up the
   1.119 +formatting and fix the parts that I misheard, so you can expect the
   1.120 +text to improve significantly in the near future.
   1.121 +</p>
   1.122 +<p>
   1.123 +To the extent that this is my work, you have my permission to make
   1.124 +copies of this transcript for your own purposes. Also, feel free to
   1.125 +e-mail me with comments or corrections.
   1.126 +</p>
   1.127 +<p>
   1.128 +You can send mail to <code>transcript@aurellem.org</code>.
   1.129 +</p>
   1.130 +<p>
   1.131 +Cheers,
   1.132 +</p>
   1.133 +<p>
   1.134 +&mdash;Dylan
   1.135 +</p>
   1.136 +</blockquote>
   1.137 +
   1.138 +
   1.139 +
   1.140 +
   1.141 +
   1.142 +<div id="table-of-contents">
   1.143 +<h2>Table of Contents</h2>
   1.144 +<div id="text-table-of-contents">
   1.145 +<ul>
   1.146 +<li><a href="#sec-1">1 Introduction</a>
   1.147 +<ul>
   1.148 +<li><a href="#sec-1-1">1.1 Aaron Sloman evolves into a philosopher of AI</a></li>
   1.149 +<li><a href="#sec-1-2">1.2 AI is hard, in part because there are tempting non-problems.</a></li>
   1.150 +</ul>
   1.151 +</li>
   1.152 +<li><a href="#sec-2">2 What problems of intelligence did evolution solve?</a>
   1.153 +<ul>
   1.154 +<li><a href="#sec-2-1">2.1 Intelligence consists of solutions to many evolutionary problems; no single development (e.g. communication) was key to human-level intelligence.</a></li>
   1.155 +<li><a href="#sec-2-2">2.2 Speculation about how communication might have evolved from internal lanagues.</a></li>
   1.156 +</ul>
   1.157 +</li>
   1.158 +<li><a href="#sec-3">3 How do language and internal states relate to AI?</a>
   1.159 +<ul>
   1.160 +<li><a href="#sec-3-1">3.1 In AI, false assumptions can lead investigators astray.</a></li>
   1.161 +<li><a href="#sec-3-2">3.2 Example: Vision is not just about finding surfaces, but about finding affordances.</a></li>
   1.162 +<li><a href="#sec-3-3">3.3 Online and offline intelligence</a></li>
   1.163 +<li><a href="#sec-3-4">3.4 Example: Even toddlers use sophisticated geometric knowledge</a></li>
   1.164 +</ul>
   1.165 +</li>
   1.166 +<li><a href="#sec-4">4 Animal intelligence</a>
   1.167 +<ul>
   1.168 +<li><a href="#sec-4-1">4.1 The priority is <i>cataloguing</i> what competences have evolved, not ranking them.</a></li>
   1.169 +<li><a href="#sec-4-2">4.2 AI can be used to test philosophical theories</a></li>
   1.170 +</ul>
   1.171 +</li>
   1.172 +<li><a href="#sec-5">5 Is abstract general intelligence feasible?</a>
   1.173 +<ul>
   1.174 +<li><a href="#sec-5-1">5.1 It's misleading to compare the brain and its neurons to a computer made of transistors</a></li>
   1.175 +<li><a href="#sec-5-2">5.2 For example, brains may rely heavily on chemical information processing</a></li>
   1.176 +<li><a href="#sec-5-3">5.3 Brain algorithms may simply be optimized for certain kinds of information processing other than bit manipulations</a></li>
   1.177 +<li><a href="#sec-5-4">5.4 Example: find the shortest path by dangling strings</a></li>
   1.178 +<li><a href="#sec-5-5">5.5 In sum, we know surprisingly little about the kinds of problems that evolution solved, and the manner in which they were solved.</a></li>
   1.179 +</ul>
   1.180 +</li>
   1.181 +<li><a href="#sec-6">6 A singularity of cognitive catch-up</a>
   1.182 +<ul>
   1.183 +<li><a href="#sec-6-1">6.1 What if it will take a lifetime to learn enough to make something new?</a></li>
   1.184 +</ul>
   1.185 +</li>
   1.186 +<li><a href="#sec-7">7 Spatial reasoning: a difficult problem</a>
   1.187 +<ul>
   1.188 +<li><a href="#sec-7-1">7.1 Example: Spatial proof that the angles of any triangle add up to a half-circle</a></li>
   1.189 +<li><a href="#sec-7-2">7.2 Geometric results are fundamentally different than experimental results in chemistry or physics.</a></li>
   1.190 +</ul>
   1.191 +</li>
   1.192 +<li><a href="#sec-8">8 Is near-term artificial general intelligence likely?</a>
   1.193 +<ul>
   1.194 +<li><a href="#sec-8-1">8.1 Two interpretations: a single mechanism for all problems, or many mechanisms unified in one program.</a></li>
   1.195 +</ul>
   1.196 +</li>
   1.197 +<li><a href="#sec-9">9 Abstract General Intelligence impacts</a></li>
   1.198 +</ul>
   1.199 +</div>
   1.200 +</div>
   1.201 +
   1.202 +<div id="outline-container-1" class="outline-2">
   1.203 +<h2 id="sec-1"><span class="section-number-2">1</span> Introduction</h2>
   1.204 +<div class="outline-text-2" id="text-1">
   1.205 +
   1.206 +
   1.207 +
   1.208 +</div>
   1.209 +
   1.210 +<div id="outline-container-1-1" class="outline-3">
   1.211 +<h3 id="sec-1-1"><span class="section-number-3">1.1</span> Aaron Sloman evolves into a philosopher of AI</h3>
   1.212 +<div class="outline-text-3" id="text-1-1">
   1.213 +
   1.214 +<p>[0:09] My name is Aaron Sloman. My first degree many years ago in
    1.215 +Cape Town University was in Physics and Mathematics, and I intended to
   1.216 +go and be a mathematician. I came to Oxford and encountered
   1.217 +philosophers &mdash; I had started reading philosophy and discussing
   1.218 +philosophy before then, and then I found that there were philosophers
   1.219 +who said things about mathematics that I thought were wrong, so
   1.220 +gradually got more and more involved in [philosophy] discussions and
   1.221 +switched to doing philosophy DPhil. Then I became a philosophy
   1.222 +lecturer and about six years later, I was introduced to artificial
   1.223 +intelligence when I was a lecturer at Sussex University in philosophy
   1.224 +and I very soon became convinced that the best way to make progress in
   1.225 +both areas of philosophy (including philosophy of mathematics which I
    1.226 +felt I hadn't dealt with adequately in my DPhil) about the philosophy
    1.227 +of mathematics, philosophy of mind, philosophy of language and all
   1.228 +those things&mdash;the best way was to try to design and test working
   1.229 +fragments of mind and maybe eventually put them all together but
   1.230 +initially just working fragments that would do various things.
   1.231 +</p>
   1.232 +<p>
   1.233 +[1:12] And I learned to program and ~ with various other people
   1.234 +including ~Margaret Boden whom you've interviewed, developed&mdash;helped
   1.235 +develop an undergraduate degree in AI and other things and also began
   1.236 +to do research in AI and so on which I thought of as doing philosophy,
   1.237 +primarily.
   1.238 +</p>
   1.239 +<p>
   1.240 +[1:29] And then I later moved to the University of Birmingham and I
   1.241 +was there &mdash; I came in 1991 &mdash; and I've been retired for a while but
   1.242 +I'm not interested in golf or gardening so I just go on doing full
   1.243 +time research and my department is happy to keep me on without paying
   1.244 +me and provide space and resources and I come, meeting bright people
   1.245 +at conferences and try to learn and make progress if I can.
   1.246 +</p>
   1.247 +</div>
   1.248 +
   1.249 +</div>
   1.250 +
   1.251 +<div id="outline-container-1-2" class="outline-3">
   1.252 +<h3 id="sec-1-2"><span class="section-number-3">1.2</span> AI is hard, in part because there are tempting non-problems.</h3>
   1.253 +<div class="outline-text-3" id="text-1-2">
   1.254 +
   1.255 +
   1.256 +<p>
   1.257 +One of the things I learnt and understood more and more over the many
   1.258 +years &mdash; forty years or so since I first encountered AI &mdash; is how
   1.259 +hard the problems are, and in part that's because it's very often
   1.260 +tempting to <i>think</i> the problem is something different from what it
   1.261 +actually is, and then people design solutions to the non-problems, and
   1.262 +I think of most of my work now as just helping to clarify what the
   1.263 +problems are: what is it that we're trying to explain &mdash; and maybe
   1.264 +this is leading into what you wanted to talk about:
   1.265 +</p>
   1.266 +<p>
   1.267 +I now think that one of the ways of getting a deep understanding of
   1.268 +that is to find out what were the problems that biological evolution
   1.269 +solved, because we are a product of <i>many</i> solutions to <i>many</i>
   1.270 +problems, and if we just try to go in and work out what the whole
   1.271 +system is doing, we may get it all wrong, or badly wrong.
   1.272 +</p>
   1.273 +
   1.274 +</div>
   1.275 +</div>
   1.276 +
   1.277 +</div>
   1.278 +
   1.279 +<div id="outline-container-2" class="outline-2">
   1.280 +<h2 id="sec-2"><span class="section-number-2">2</span> What problems of intelligence did evolution solve?</h2>
   1.281 +<div class="outline-text-2" id="text-2">
   1.282 +
   1.283 +
   1.284 +
   1.285 +</div>
   1.286 +
   1.287 +<div id="outline-container-2-1" class="outline-3">
   1.288 +<h3 id="sec-2-1"><span class="section-number-3">2.1</span> Intelligence consists of solutions to many evolutionary problems; no single development (e.g. communication) was key to human-level intelligence.</h3>
   1.289 +<div class="outline-text-3" id="text-2-1">
   1.290 +
   1.291 +
   1.292 +<p>
   1.293 +[2:57] Well, first I would challenge that we are the dominant
   1.294 +species. I know it looks like that but actually if you count biomass,
   1.295 +if you count number of species, if you count number of individuals,
   1.296 +the dominant species are microbes &mdash; maybe not one of them but anyway
   1.297 +they're the ones who dominate in that sense, and furthermore we are
   1.298 +mostly &mdash; we are largely composed of microbes, without which we
   1.299 +wouldn't survive.
   1.300 +</p>
   1.301 +
   1.302 +<p>
   1.303 +[3:27] But there are things that make humans (you could say) best at
   1.304 +those things, or worst at those things, but it's a combination.  And I
   1.305 +think it was a collection of developments of which there isn't any
   1.306 +single one. [] there might be, some people say, human language which
   1.307 +changed everything. By our human language, they mean human
   1.308 +communication in words, but I think that was a later development from
   1.309 +what must have started as the use of <i>internal</i> forms of
   1.310 +representation &mdash; which are there in nest-building birds, in
   1.311 +pre-verbal children, in hunting mammals &mdash; because you can't take in
   1.312 +information about a complex structured environment in which things can
   1.313 +change and you may have to be able to work out what's possible and
   1.314 +what isn't possible, without having some way of representing the
   1.315 +components of the environment, their relationships, the kinds of
   1.316 +things they can and can't do, the kinds of things you might or might
   1.317 +not be able to do &mdash; and <i>that</i> kind of capability needs internal
   1.318 +languages, and I and colleagues [at Birmingham] have been referring to
   1.319 +them as generalized languages because some people object to
   1.320 +referring&hellip;to using language to refer to something that isn't used
   1.321 +for communication. But from that viewpoint, not only humans but many
   1.322 +other animals developed abilities to do things to their environment to
   1.323 +make them more friendly to themselves, which depended on being able to
   1.324 +represent possible futures, possible actions, and work out what's the
   1.325 +best thing to do.
   1.326 +</p>
   1.327 +<p>
   1.328 +[5:13] And nest-building in corvids for instance&mdash;crows, magpies,
   1.329 + [hawks], and so on &mdash; are way beyond what current robots can do, and
   1.330 + in fact I think most humans would be challenged if they had to go and
   1.331 + find a collection of twigs, one at a time, maybe bring them with just
   1.332 + one hand &mdash; or with your mouth &mdash; and assemble them into a
   1.333 + structure that, you know, is shaped like a nest, and is fairly rigid,
   1.334 + and you could trust your eggs in them when wind blows. But they're
   1.335 + doing it, and so &hellip; they're not our evolutionary ancestors, but
   1.336 + they're an indication &mdash; and that example is an indication &mdash; of
   1.337 + what must have evolved in order to provide control over the
   1.338 + environment in <i>that</i> species.
   1.339 +</p>
   1.340 +</div>
   1.341 +
   1.342 +</div>
   1.343 +
   1.344 +<div id="outline-container-2-2" class="outline-3">
   1.345 +<h3 id="sec-2-2"><span class="section-number-3">2.2</span> Speculation about how communication might have evolved from internal lanagues.</h3>
   1.346 +<div class="outline-text-3" id="text-2-2">
   1.347 +
   1.348 +<p>[5:56] And I think hunting mammals, fruit-picking mammals, mammals
   1.349 +that can rearrange parts of the environment, provide shelters, needed
   1.350 +to have &hellip;. also needed to have ways of representing possible
   1.351 +futures, not just what's there in the environment. I think at a later
   1.352 +stage, that developed into a form of communication, or rather the
   1.353 +<i>internal</i> forms of representation became usable as a basis for
   1.354 +providing [context] to be communicated. And that happened, I think,
   1.355 +initially through performing actions that expressed intentions, and
    1.356 +probably led to situations where an action (for instance, moving some
   1.357 +large object) was performed more easily, or more successfully, or more
   1.358 +accurately if it was done collaboratively. So someone who had worked
   1.359 +out what to do might start doing it, and then a conspecific might be
   1.360 +able to work out what the intention is, because that person has the
   1.361 +<i>same</i> forms of representation and can build theories about what's
   1.362 +going on, and might then be able to help.
   1.363 +</p>
   1.364 +<p>
   1.365 +[7:11] You can imagine that if that started happening more (a lot of
   1.366 +collaboration based on inferred intentions and plans) then sometimes
   1.367 +the inferences might be obscure and difficult, so the <i>actions</i> might
   1.368 +be enhanced to provide signals as to what the intention is, and what
   1.369 +the best way is to help, and so on.
   1.370 +</p>
   1.371 +<p>
   1.372 +[7:35] So, this is all handwaving and wild speculation, but I think
   1.373 +it's consistent with a large collection of facts which one can look at
   1.374 +&mdash; and find if one looks for them, but one won't know if [some]one
   1.375 +doesn't look for them &mdash; about the way children, for instance, who
   1.376 +can't yet talk, communicate, and the things they'll do, like going to
   1.377 +the mother and turning the face to point in the direction where the
   1.378 +child wants it to look and so on; that's an extreme version of action
   1.379 +indicating intention.
   1.380 +</p>
   1.381 +<p>
   1.382 +[8:03] Anyway. That's a very long roundabout answer to one conjecture
   1.383 +that the use of communicative language is what gave humans their
   1.384 +unique power to create and destroy and whatever, and I'm saying that
   1.385 +if by that you mean <i>communicative</i> language, then I'm saying there
   1.386 +was something before that which was <i>non</i>-communicative language, and I
   1.387 +suspect that noncommunicative language continues to play a deep role
   1.388 +in <i>all</i> human perception &mdash;in mathematical and scientific reasoning, in
   1.389 +problem solving &mdash; and we don't understand very much about it.
   1.390 +</p>
   1.391 +<p>
   1.392 +[8:48]
   1.393 +I'm sure there's a lot more to be said about the development of
   1.394 +different kinds of senses, the development of brain structures and
   1.395 +mechanisms is above all that, but perhaps I've droned on long enough
   1.396 +on that question.
   1.397 +</p>
   1.398 +
   1.399 +</div>
   1.400 +</div>
   1.401 +
   1.402 +</div>
   1.403 +
   1.404 +<div id="outline-container-3" class="outline-2">
   1.405 +<h2 id="sec-3"><span class="section-number-2">3</span> How do language and internal states relate to AI?</h2>
   1.406 +<div class="outline-text-2" id="text-3">
   1.407 +
   1.408 +
   1.409 +<p>
   1.410 +[9:09] Well, I think most of the human and animal capabilities that
   1.411 +I've been referring to are not yet to be found in current robots or
   1.412 +[computing] systems, and I think there are two reasons for that: one
   1.413 +is that it's intrinsically very difficult; I think that in particular
   1.414 +it may turn out that the forms of information processing that one can
   1.415 +implement on digital computers as we currently know them may not be as
   1.416 +well suited to performing some of these tasks as other kinds of
   1.417 +computing about which we don't know so much &mdash; for example, I think
   1.418 +there may be important special features about <i>chemical</i> computers
   1.419 +which we might [talk about in a little bit? find out about]. 
   1.420 +</p>
   1.421 +
   1.422 +</div>
   1.423 +
   1.424 +<div id="outline-container-3-1" class="outline-3">
   1.425 +<h3 id="sec-3-1"><span class="section-number-3">3.1</span> In AI, false assumptions can lead investigators astray.</h3>
   1.426 +<div class="outline-text-3" id="text-3-1">
   1.427 +
   1.428 +<p>[9:57] So, one of the problems then is that the tasks are hard &hellip; but
   1.429 +there's a deeper problem as to why AI hasn't made a great deal of
   1.430 +progress on these problems that I'm talking about, and that is that
   1.431 +most AI researchers assume things&mdash;and this is not just AI
    1.432 +researchers, but [also] philosophers, and psychologists, and people
   1.433 +studying animal behavior&mdash;make assumptions about what it is that
   1.434 +animals or humans do, for instance make assumptions about what vision
   1.435 +is for, or assumptions about what motivation is and how motivation
    1.436 +works, or assumptions about how learning works, and then they try &mdash;
   1.437 +the AI people try &mdash; to model [or] build systems that perform those
   1.438 +assumed functions. So if you get the <i>functions</i> wrong, then even if
   1.439 +you implement some of the functions that you're trying to implement,
   1.440 +they won't necessarily perform the tasks that the initial objective
   1.441 +was to imitate, for instance the tasks that humans, and nest-building
   1.442 +birds, and monkeys and so on can perform. 
   1.443 +</p>
   1.444 +</div>
   1.445 +
   1.446 +</div>
   1.447 +
   1.448 +<div id="outline-container-3-2" class="outline-3">
   1.449 +<h3 id="sec-3-2"><span class="section-number-3">3.2</span> Example: Vision is not just about finding surfaces, but about finding affordances.</h3>
   1.450 +<div class="outline-text-3" id="text-3-2">
   1.451 +
   1.452 +<p>[11:09] I'll give you a simple example &mdash; well, maybe not so simple,
   1.453 +but &mdash; It's often assumed that the function of vision in humans (and
   1.454 +in other animals with good eyesight and so on) is to take in optical
   1.455 +information that hits the retina, and form into the (maybe changing
   1.456 +&mdash; or, really, in our case definitely changing) patterns of
   1.457 +illumination where there are sensory receptors that detect those
   1.458 +patterns, and then somehow from that information (plus maybe other
   1.459 +information gained from head movement or from comparisons between two
   1.460 +eyes) to work out what there was in the environment that produced
   1.461 +those patterns, and that is often taken to mean &ldquo;where were the
   1.462 +surfaces off which the light bounced before it came to me&rdquo;. So
   1.463 +you essentially think of the task of the visual system as being to
   1.464 +reverse the image formation process: so the 3D structure's there, the
   1.465 +lens causes the image to form in the retina, and then the brain goes
   1.466 +back to a model of that 3D structure there. That's a very plausible
   1.467 +theory about vision, and it may be that that's a <i>subset</i> of what
   1.468 +human vision does, but I think James Gibson pointed out that that kind
   1.469 +of thing is not necessarily going to be very useful for an organism,
   1.470 +and it's very unlikely that that's the main function of perception in
   1.471 +general, namely to produce some physical description of what's out
   1.472 +there.
   1.473 +</p>
   1.474 +<p>
   1.475 +[12:37] What does an animal <i>need</i>? It needs to know what it can do,
   1.476 +what it can't do, what the consequences of its actions will be
   1.477 +&hellip;. so, he introduced the word <i>affordance</i>, so from his point of
   1.478 +view, the function of vision, perception, are to inform the organism
   1.479 +of what the <i>affordances</i> are for action, where that would mean what
   1.480 +the animal, <i>given</i> its morphology (what it can do with its mouth, its
   1.481 +limbs, and so on, and the ways it can move) what it can do, what its
   1.482 +needs are, what the obstacles are, and how the environment supports or
   1.483 +obstructs those possible actions.
   1.484 +</p>
   1.485 +<p>
   1.486 +[13:15] And that's a very different collection of information
   1.487 +structures that you need from, say, &ldquo;where are all the
   1.488 +surfaces?&rdquo;: if you've got all the surfaces, <i>deriving</i> the
   1.489 +affordances would still be a major task. So, if you think of the
   1.490 +perceptual system as primarily (for biological organisms) being
   1.491 +devices that provide information about affordances and so on, then the
   1.492 +tasks look very different. And most of the people working, doing
   1.493 +research on computer vision in robots, I think haven't taken all that
   1.494 +on board, so they're trying to get machines to do things which, even
   1.495 +if they were successful, would not make the robots very intelligent
   1.496 +(and in fact, even the ones they're trying to do are not really easy
   1.497 +to do, and they don't succeed very well&mdash; although, there's progress;
   1.498 +I shouldn't disparage it too much.)
   1.499 +</p>
   1.500 +</div>
   1.501 +
   1.502 +</div>
   1.503 +
   1.504 +<div id="outline-container-3-3" class="outline-3">
   1.505 +<h3 id="sec-3-3"><span class="section-number-3">3.3</span> Online and offline intelligence</h3>
   1.506 +<div class="outline-text-3" id="text-3-3">
   1.507 +
   1.508 +
   1.509 +<p>
   1.510 +[14:10] It gets more complex as animals get more sophisticated. So, I
   1.511 +like to make a distinction between online intelligence and offline
   1.512 +intelligence. So, for example, if I want to pick something up &mdash; like
   1.513 +this leaf &lt;he plucks a leaf from the table&gt; &mdash; I was able to select
   1.514 +it from all the others in there, and while moving my hand towards it,
   1.515 +I was able to guide its trajectory, making sure it was going roughly
   1.516 +in the right direction &mdash; as opposed to going out there, which
   1.517 +wouldn't have been able to pick it up &mdash; and these two fingers ended
   1.518 +up with a portion of the leaf between them, so that I was able to tell
   1.519 +when I'm ready to do that &lt;he clamps the leaf between two fingers&gt;
   1.520 +and at that point, I clamped my fingers and then I could pick up the
   1.521 +leaf. 
   1.522 +</p>
   1.523 +<p>
   1.524 +[14:54] Whereas, &mdash; and that's an example of online intelligence:
   1.525 +during the performance of an action (both from the stage where it's
   1.526 +initiated, and during the intermediate stages, and where it's
   1.527 +completed) I'm taking in information relevant to controlling all those
   1.528 +stages, and that relevant information keeps changing. That means I
   1.529 +need stores of transient information which gets discarded almost
   1.530 +immediately and replaced or something. That's online intelligence. And
   1.531 +there are many forms; that's just one example, and Gibson discussed
   1.532 +quite a lot of examples which I won't try to replicate now.
   1.533 +</p>
   1.534 +<p>
   1.535 +[15:30] But in offline intelligence, you're not necessarily actually
   1.536 +<i>performing</i> the actions when you're using your intelligence; you're
   1.537 +thinking about <i>possible</i> actions. So, for instance, I could think
   1.538 +about how fast or by what route I would get back to the lecture room
   1.539 +if I wanted to [get to the next talk] or something. And I know where
   1.540 +the door is, roughly speaking, and I know roughly which route I would
   1.541 +take, when I go out, I should go to the left or to the right, because
   1.542 +I've stored information about where the spaces are, where the
   1.543 +buildings are, where the door was that we came out &mdash; but in using
   1.544 +that information to think about that route, I'm not actually
   1.545 +performing the action. I'm not even <i>simulating</i> it in detail: the
   1.546 +precise details of direction and speed and when to clamp my fingers,
   1.547 +or when to contract my leg muscles when walking, are all irrelevant to
   1.548 +thinking about a good route, or thinking about the potential things
   1.549 +that might happen on the way. Or what would be a good place to meet
   1.550 +someone who I think [for an acquaintance in particular] &mdash; [barber]
   1.551 +or something &mdash; I don't necessarily have to work out exactly <i>where</i>
   1.552 +the person's going to stand, or from what angle I would recognize
   1.553 +them, and so on.
   1.554 +</p>
   1.555 +<p>
   1.556 +[16:46] So, offline intelligence &mdash; which I think became not just a
   1.557 +human competence; I think there are other animals that have aspects of
   1.558 +it: Squirrels are very impressive as you watch them. Gray squirrels at
   1.559 +any rate, as you watch them defeating squirrel-proof birdfeeders, seem
   1.560 +to have a lot of that [offline intelligence], as well as the online
   1.561 +intelligence when they eventually perform the action they've worked
   1.562 +out [] that will get them to the nuts. 
   1.563 +</p>
   1.564 +<p>
   1.565 +[17:16] And I think that what happened during our evolution is that
   1.566 +mechanisms for acquiring and processing and storing and manipulating
   1.567 +information that is more and more remote from the performance of
   1.568 +actions developed. An example is taking in information about where
   1.569 +locations are that you might need to go to infrequently: There's a
   1.570 +store of a particular type of material that's good for building on
   1.571 +roofs of houses or something out around there in some
   1.572 +direction. There's a good place to get water somewhere in another
   1.573 +direction. There are people that you'd like to go and visit in
   1.574 +another place, and so on. 
   1.575 +</p>
   1.576 +<p>
   1.577 +[17:59] So taking in information about an extended environment and
   1.578 +building it into a structure that you can make use of for different
   1.579 +purposes is another example of offline intelligence. And when we do
   1.580 +that, we sometimes use only our brains, but in modern times, we also
   1.581 +learned how to make maps on paper and walls and so on. And it's not
   1.582 +clear whether the stuff inside our heads has the same structures as
   1.583 +the maps we make on paper: the maps on paper have a different
   1.584 +function; they may be used to communicate with others, or meant for
   1.585 +<i>looking</i> at, whereas the stuff in your head you don't <i>look</i> at; you
   1.586 +use it in some other way.
   1.587 +</p>
   1.588 +<p>
   1.589 +[18:46] So, what I'm getting at is that there's a great deal of human
   1.590 +intelligence (and animal intelligence) which is involved in what's
   1.591 +possible in the future, what exists in distant places, what might have
   1.592 +happened in the past (sometimes you need to know why something is as
   1.593 +it is, because that might be relevant to what you should or shouldn't
   1.594 +do in the future, and so on), and I think there was something about
   1.595 +human evolution that extended that offline intelligence way beyond
   1.596 +that of animals. And I don't think it was <i>just</i> human language, (but
   1.597 +human language had something to do with it) but I think there was
   1.598 +something else that came earlier than language which involves the
   1.599 +ability to use your offline intelligence to discover something that
   1.600 +has a rich mathematical structure. 
   1.601 +</p>
   1.602 +</div>
   1.603 +
   1.604 +</div>
   1.605 +
   1.606 +<div id="outline-container-3-4" class="outline-3">
   1.607 +<h3 id="sec-3-4"><a name="example-gap" id="example-gap"></a><span class="section-number-3">3.4</span> Example: Even toddlers use sophisticated geometric knowledge</h3>
   1.608 +<div class="outline-text-3" id="text-3-4">
   1.609 +
   1.610 +<p>[19:44] I'll give you a simple example: if you look through a gap, you
   1.611 +can see something that's on the other side of the gap. Now, you
   1.612 +<i>might</i> see what you want to see, or you might see only part of it. If
   1.613 +you want to see more of it, which way would you move? Well, you could
   1.614 +either move <i>sideways</i>, and see through the gap&mdash;and see it roughly
   1.615 +the same amount but a different part of it [if it's a ????], or you
   1.616 +could move <i>towards</i> the gap and then your view will widen as you
   1.617 +approach the gap. Now, there's a bit of mathematics in there, insofar
   1.618 +as you are implicitly assuming that information travels in straight
   1.619 +lines, and as you go closer to a gap, the straight lines that you can
   1.620 +draw from where you are through the gap, widen as you approach that
   1.621 +gap. Now, there's a kind of theorem of Euclidean geometry in there
   1.622 +which I'm not going to try to state very precisely (and as far as I
   1.623 +know, wasn't stated explicitly in Euclidean geometry) but it's
   1.624 +something every toddler&mdash; human toddler&mdash;learns. (Maybe other
   1.625 +animals also know it, I don't know.) But there are many more things,
   1.626 +actions to perform, to get you more information about things, actions
   1.627 +to perform to conceal information from other people, actions that will
   1.628 +enable you to operate, to act on a rigid object in one place in order
   1.629 +to produce an effect on another place. So, there's a lot of stuff that
   1.630 +involves lines and rotations and angles and speeds and so on that I
   1.631 +think humans (maybe, to a lesser extent, other animals) develop the
   1.632 +ability to think about in a generic way. That means that you could
   1.633 +take out the generalizations from the particular contexts and then
   1.634 +re-use them in a new contexts in ways that I think are not yet
   1.635 +represented at all in AI and in theories of human learning in any []
   1.636 +way &mdash; although some people are trying to study learning of mathematics.
   1.637 +</p>
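          +
          +<p>
          +<b>Editor's note:</b> One way to make the geometry in this example
          +precise (the symbols are my labels, not Sloman's): a viewer standing a
          +distance <i>d</i> directly in front of a gap of width <i>w</i> can see
          +through it over an angle &theta; = 2&middot;arctan(<i>w</i>/2<i>d</i>),
          +which grows as <i>d</i> shrinks. So moving toward the gap widens the
          +view, while moving sideways only shifts which part of the far side the
          +lines of sight reach &mdash; the toddler's theorem, stated as a formula.
          +</p>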
   1.638 +</div>
   1.639 +</div>
   1.640 +
   1.641 +</div>
   1.642 +
   1.643 +<div id="outline-container-4" class="outline-2">
   1.644 +<h2 id="sec-4"><span class="section-number-2">4</span> Animal intelligence</h2>
   1.645 +<div class="outline-text-2" id="text-4">
   1.646 +
   1.647 +
   1.648 +
   1.649 +</div>
   1.650 +
   1.651 +<div id="outline-container-4-1" class="outline-3">
   1.652 +<h3 id="sec-4-1"><span class="section-number-3">4.1</span> The priority is <i>cataloguing</i> what competences have evolved, not ranking them.</h3>
   1.653 +<div class="outline-text-3" id="text-4-1">
   1.654 +
   1.655 +<p>[22:03] I wasn't going to challenge the claim that humans can do more
   1.656 +sophisticated forms of [tracking], just to mention that there are some
   1.657 +things that other animals can do which are in some ways comparable,
   1.658 +and some ways superior to [things] that humans can do. In particular,
    1.659 +there are species of birds and also, I think, some rodents &mdash;
    1.660 +squirrels, or something &mdash; I don't know enough about the variety &mdash;
   1.661 +that can hide nuts and remember where they've hidden them, and go back
   1.662 +to them. And there have been tests which show that some birds are able
   1.663 +to hide tens &mdash; you know, [eighteen] or something nuts &mdash; and to
   1.664 +remember which ones have been taken, which ones haven't, and so
   1.665 +on. And I suspect most humans can't do that. I wouldn't want to say
   1.666 +categorically that maybe we couldn't, because humans are very
   1.667 +[varied], and also [a few] people can develop particular competences
   1.668 +through training. But it's certainly not something I can do.
   1.669 +</p>
   1.670 +
   1.671 +</div>
   1.672 +
   1.673 +</div>
   1.674 +
   1.675 +<div id="outline-container-4-2" class="outline-3">
   1.676 +<h3 id="sec-4-2"><span class="section-number-3">4.2</span> AI can be used to test philosophical theories</h3>
   1.677 +<div class="outline-text-3" id="text-4-2">
   1.678 +
   1.679 +<p>[23:01] But I also would like to say that I am not myself particularly
   1.680 +interested in trying to align animal intelligences according to any
   1.681 +kind of scale of superiority; I'm just trying to understand what it
   1.682 +was that biological evolution produced, and how it works, and I'm
   1.683 +interested in AI <i>mainly</i> because I think that when one comes up with
   1.684 +theories about how these things work, one needs to have some way of
   1.685 +testing the theory. And AI provides ways of implementing and testing
   1.686 +theories that were not previously available: Immanuel Kant was trying
   1.687 +to come up with theories about how minds work, but he didn't have any
   1.688 +kind of a mechanism that he could build to test his theory about the
   1.689 +nature of mathematical knowledge, for instance, or how concepts were
   1.690 +developed from babyhood onward. Whereas now, if we do develop a
   1.691 +theory, we have a criterion of adequacy, namely it should be precise
   1.692 +enough and rich enough and detailed to enable a model to be
   1.693 +built. And then we can see if it works. 
   1.694 +</p>
   1.695 +<p>
   1.696 +[24:07] If it works, it doesn't mean we've proved that the theory is
   1.697 +correct; it just shows it's a candidate. And if it doesn't work, then
   1.698 +it's not a candidate as it stands; it would need to be modified in
   1.699 +some way.
   1.700 +</p>
   1.701 +</div>
   1.702 +</div>
   1.703 +
   1.704 +</div>
   1.705 +
   1.706 +<div id="outline-container-5" class="outline-2">
   1.707 +<h2 id="sec-5"><span class="section-number-2">5</span> Is abstract general intelligence feasible?</h2>
   1.708 +<div class="outline-text-2" id="text-5">
   1.709 +
   1.710 +
   1.711 +
   1.712 +</div>
   1.713 +
   1.714 +<div id="outline-container-5-1" class="outline-3">
   1.715 +<h3 id="sec-5-1"><span class="section-number-3">5.1</span> It's misleading to compare the brain and its neurons to a computer made of transistors</h3>
   1.716 +<div class="outline-text-3" id="text-5-1">
   1.717 +
   1.718 +<p>[24:27] I think there's a lot of optimism based on false clues:
   1.719 +the&hellip;for example, one of the false clues is to count the number of
   1.720 +neurons in the brain, and then talk about the number of transistors
   1.721 +you can fit into a computer or something, and then compare them. It
   1.722 +might turn out that the study of the way synapses work (which leads
   1.723 +some people to say that a typical synapse [] in the human brain has
   1.724 +computational power comparable to the Internet a few years ago,
   1.725 +because of the number of different molecules that are doing things,
   1.726 +the variety of types of things that are being done in those molecular
   1.727 +interactions, and the speed at which they happen, if you somehow count
   1.728 +up the number of operations per second or something, then you get
   1.729 +these comparable figures).
   1.730 +</p>
   1.731 +</div>
   1.732 +
   1.733 +</div>
   1.734 +
   1.735 +<div id="outline-container-5-2" class="outline-3">
   1.736 +<h3 id="sec-5-2"><span class="section-number-3">5.2</span> For example, brains may rely heavily on chemical information processing</h3>
   1.737 +<div class="outline-text-3" id="text-5-2">
   1.738 +
   1.739 +<p>Now even if the details aren't right, there may just be a lot of
   1.740 +information processing that&hellip;going on in brains at the <i>molecular</i>
   1.741 +level, not the neural level. Then, if that's the case, the processing
   1.742 +units will be orders of magnitude larger in number than the number of
   1.743 +neurons. And it's certainly the case that all the original biological
   1.744 +forms of information processing were chemical; there weren't brains
   1.745 +around, and still aren't in most microbes. And even when humans grow
   1.746 +their brains, the process of starting from a fertilized egg and
   1.747 +producing this rich and complex structure is, for much of the time,
   1.748 +under the control of chemical computations, chemical information
   1.749 +processing&mdash;of course combined with physical sorts of materials and
   1.750 +energy and so on as well.
   1.751 +</p>
   1.752 +<p>
   1.753 +[26:25] So it would seem very strange if all that capability was
   1.754 +something thrown away when you've got a brain and all the information
   1.755 +processing, the [challenges that were handled in making a brain],
   1.756 +&hellip; This is handwaving on my part; I'm just saying that we <i>might</i>
   1.757 +learn that what brains do is not what we think they do, and that
   1.758 +problems of replicating them are not what we think they are, solely in
   1.759 +terms of numerical estimate of time scales, the number of components,
   1.760 +and so on.
   1.761 +</p>
   1.762 +</div>
   1.763 +
   1.764 +</div>
   1.765 +
   1.766 +<div id="outline-container-5-3" class="outline-3">
   1.767 +<h3 id="sec-5-3"><span class="section-number-3">5.3</span> Brain algorithms may simply be optimized for certain kinds of information processing other than bit manipulations</h3>
   1.768 +<div class="outline-text-3" id="text-5-3">
   1.769 +
   1.770 +<p>[26:56] But apart from that, the other basis of skepticism concerns
   1.771 +how well we understand what the problems are. I think there are many
   1.772 +people who try to formalize the problems of designing an intelligent
   1.773 +system in terms of streams of information thought of as bit streams or
    1.774 +collections of bit streams, and they think of the problems of
   1.775 +intelligence as being the construction or detection of patterns in
   1.776 +those, and perhaps not just detection of patterns, but detection of
   1.777 +patterns that are useable for sending <i>out</i> streams to control motors
   1.778 +and so on in order to []. And that way of conceptualizing the problem
   1.779 +may lead on the one hand to oversimplification, so that the things
    1.780 +that <i>would</i> be achieved, if those goals were achieved, may be much
   1.781 +simpler, in some ways inadequate. Or the replication of human
   1.782 +intelligence, or the matching of human intelligence&mdash;or for that
   1.783 +matter, squirrel intelligence&mdash;but in another way, it may also make
   1.784 +the problem harder: it may be that some of the kinds of things that
   1.785 +biological evolution has achieved can't be done that way. And one of
   1.786 +the ways that might turn out to be the case is not because it's not
   1.787 +impossible in principle to do some of the information processing on
   1.788 +artificial computers-based-on-transistors and other bit-manipulating
   1.789 +[]&mdash;but it may just be that the computational complexity of solving
   1.790 +problems, processes, or finding solutions to complex problems, are
   1.791 +much greater and therefore you might need a much larger universe than
   1.792 +we have available in order to do things.
   1.793 +</p>
   1.794 +</div>
   1.795 +
   1.796 +</div>
   1.797 +
   1.798 +<div id="outline-container-5-4" class="outline-3">
   1.799 +<h3 id="sec-5-4"><span class="section-number-3">5.4</span> Example: find the shortest path by dangling strings</h3>
   1.800 +<div class="outline-text-3" id="text-5-4">
   1.801 +
   1.802 +<p>[28:55] Then if the underlying mechanisms were different, the
   1.803 +information processing mechanisms, they might be better tailored to
   1.804 +particular sorts of computation. There's a [] example, which is
   1.805 +finding the shortest route if you've got a collection of roads, and
   1.806 +they may be curved roads, and lots of tangled routes from A to B to C,
   1.807 +and so on. And if you start at A and you want to get to Z &mdash; a place
   1.808 +somewhere on that map &mdash; the process of finding the shortest route
   1.809 +will involve searching through all these different possibilities and
   1.810 +rejecting some that are longer than others and so on. But if you make
   1.811 +a model of that map out of string, where these strings are all laid
    1.812 +out on the map and so have the lengths of the routes, then if you
    1.813 +hold the two knots in the string &mdash; it's a network of string &mdash; which
   1.814 +correspond to the start point and end point, then <i>pull</i>, then the
   1.815 +bits of string that you're left with in a straight line will give you
   1.816 +the shortest route, and that process of pulling just gets you the
   1.817 +solution very rapidly in a parallel computation, where all the others
   1.818 +just hang by the wayside, so to speak.
   1.819 +</p>
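          +
          +<p>
          +<b>Editor's note:</b> For contrast with the string computation, here is
          +a minimal sketch in Python of the sequential search described just
          +above &mdash; Dijkstra's shortest-path algorithm. The road map, node
          +names, and distances are illustrative assumptions, not from the
          +interview.
          +</p>
          +<pre>
          +import heapq
          +
          +def shortest_route(roads, start, goal):
          +    # Explore routes in order of length, discarding longer ways of
          +    # reaching a node -- the combinatorial search the strings avoid.
          +    best = {start: 0.0}
          +    frontier = [(0.0, start)]
          +    while frontier:
          +        dist, node = heapq.heappop(frontier)
          +        if node == goal:
          +            return dist
          +        if dist &gt; best.get(node, float("inf")):
          +            continue  # a shorter way here was already found
          +        for neighbour, length in roads.get(node, []):
          +            candidate = dist + length
          +            if candidate &lt; best.get(neighbour, float("inf")):
          +                best[neighbour] = candidate
          +                heapq.heappush(frontier, (candidate, neighbour))
          +    return float("inf")
          +
          +# An illustrative tangle of roads from A to Z.
          +roads = {"A": [("B", 5.0), ("C", 1.0)],
          +         "B": [("Z", 5.0)],
          +         "C": [("Z", 2.0)]}
          +print(shortest_route(roads, "A", "Z"))  # 3.0 -- the taut string's length
          +</pre>
          +<p>
          +Pulling the string network performs, in a single parallel physical
          +step, the many relaxation steps this loop carries out one at a time.
          +</p>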
   1.820 +</div>
   1.821 +
   1.822 +</div>
   1.823 +
   1.824 +<div id="outline-container-5-5" class="outline-3">
   1.825 +<h3 id="sec-5-5"><span class="section-number-3">5.5</span> In sum, we know surprisingly little about the kinds of problems that evolution solved, and the manner in which they were solved.</h3>
   1.826 +<div class="outline-text-3" id="text-5-5">
   1.827 +
   1.828 +<p>[30:15] Now, I'm not saying brains can build networks of string and
   1.829 +pull them or anything like that; that's just an illustration of how if
   1.830 +you have the right representation, correctly implemented&mdash;or suitably
   1.831 +implemented&mdash;for a problem, then you can avoid very combinatorially
   1.832 +complex searches, which will maybe grow exponentially with the number
   1.833 +of components in your map, whereas with this thing, the time it takes
   1.834 +won't depend on how many strings you've [got on the map]; you just
   1.835 +pull, and it will depend only on the shortest route that exists in
   1.836 +there. Even if that shortest route wasn't obvious on the original map.
   1.837 +</p>
   1.838 +
   1.839 +<p>
   1.840 +[30:59] So that's a rather long-winded way of formulating the
   1.841 +conjecture which&mdash;of supporting, a roundabout way of supporting the
   1.842 +conjecture that there may be something about the way molecules perform
   1.843 +computations where they have the combination of continuous change as
   1.844 +things move through space and come together and move apart, and
   1.845 +whatever &mdash; and also snap into states that then persist, so [as you
   1.846 +learn from] quantum mechanics, you can have stable molecular
   1.847 +structures which are quite hard to separate, and then in catalytic
   1.848 +processes you can separate them, or extreme temperatures, or strong
   1.849 +forces, but they may nevertheless be able to move very rapidly in some
   1.850 +conditions in order to perform computations.
   1.851 +</p>
   1.852 +<p>
   1.853 +[31:49] Now there may be things about that kind of structure that
   1.854 +enable searching for solutions to <i>certain</i> classes of problems to be
   1.855 +done much more efficiently (by brain) than anything we could do with
   1.856 +computers. It's just an open question.
   1.857 +</p>
   1.858 +<p>
   1.859 +[32:04] So it <i>might</i> turn out that we need new kinds of technology
   1.860 +that aren't on the horizon in order to replicate the functions that
   1.861 +animal brains perform &mdash;or, it might not. I just don't know. I'm not
   1.862 +claiming that there's strong evidence for that; I'm just saying that
   1.863 +it might turn out that way, partly because I think we know less than
   1.864 +many people think we know about what biological evolution achieved.
   1.865 +</p>
   1.866 +<p>
   1.867 +[32:28] There are some other possibilities: we may just find out that
   1.868 +there are shortcuts no one ever thought of, and it will all happen
   1.869 +much more quickly&mdash;I have an open mind; I'd be surprised, but it
   1.870 +could turn up. There <i>is</i> something that worries me much more than the
   1.871 +singularity that most people talk about, which is machines achieving
   1.872 +human-level intelligence and perhaps taking over [the] planet or
   1.873 +something. There's what I call the <i>singularity of cognitive catch-up</i> &hellip;
   1.874 +</p>
   1.875 +</div>
   1.876 +</div>
   1.877 +
   1.878 +</div>
   1.879 +
   1.880 +<div id="outline-container-6" class="outline-2">
   1.881 +<h2 id="sec-6"><span class="section-number-2">6</span> A singularity of cognitive catch-up</h2>
   1.882 +<div class="outline-text-2" id="text-6">
   1.883 +
   1.884 +
   1.885 +
   1.886 +</div>
   1.887 +
   1.888 +<div id="outline-container-6-1" class="outline-3">
   1.889 +<h3 id="sec-6-1"><span class="section-number-3">6.1</span> What if it will take a lifetime to learn enough to make something new?</h3>
   1.890 +<div class="outline-text-3" id="text-6-1">
   1.891 +
   1.892 +<p>&hellip; SCC, singularity of cognitive catch-up, which I think we're close
   1.893 +to, or maybe have already reached&mdash;I'll explain what I mean by
   1.894 +that. One of the products of biological evolution&mdash;and this is one of
   1.895 +the answers to your earlier questions which I didn't get on to&mdash;is
   1.896 +that humans have not only the ability to make discoveries that none of
   1.897 +their ancestors have ever made, but to shorten the time required for
   1.898 +similar achievements to be reached by their offspring and their
   1.899 +descendants. So once we, for instance, worked out ways of complex
   1.900 +computations, or ways of building houses, or ways of finding our way
   1.901 +around, we don't need&hellip;our children don't need to work it out for
   1.902 +themselves by the same lengthy trial and error procedure; we can help
   1.903 +them get there much faster.
   1.904 +</p>
   1.905 +<p>
   1.906 +Okay, well, what I've been referring to as the singularity of
   1.907 +cognitive catch-up depends on the fact that&mdash;fairly obvious, and it's
   1.908 +often been commented on&mdash;that in case of humans, it's not necessary
   1.909 +for each generation to learn what previous generations learned <i>in the same way</i>. And we can speed up learning once something has been
   1.910 +learned, [it is able to] be learned by new people. And that has meant
   1.911 +that the social processes that support that kind of education of the
   1.912 +young can enormously accelerate what would have taken&hellip;perhaps
   1.913 +thousands [or] millions of years for evolution to produce, can happen in
   1.914 +a much shorter time. 
   1.915 +</p>
   1.916 +
   1.917 +<p>
    1.918 +[34:54] But here's the catch: in order for a new advance to happen &mdash;
   1.919 +so for something new to be discovered that wasn't there before, like
   1.920 +Newtonian mechanics, or the theory of relativity, or Beethoven's music
   1.921 +or [style] or whatever &mdash; the individuals have to have traversed a
   1.922 +significant amount of what their ancestors have learned, even if they
   1.923 +do it much faster than their ancestors, to get to the point where they
   1.924 +can see the gaps, the possibilities for going further than their
   1.925 +ancestors, or their parents or whatever, have done.
   1.926 +</p>
   1.927 +<p>
   1.928 +[35:27] Now in the case of knowledge of science, mathematics,
   1.929 +philosophy, engineering and so on, there's been a lot of accumulated
   1.930 +knowledge. And humans are living a <i>bit</i> longer than they used to, but
   1.931 +they're still living for [whatever it is], a hundred years, or for
   1.932 +most people, less than that. So you can imagine that there might come
   1.933 +a time when in a normal human lifespan, it's not possible for anyone
   1.934 +to learn enough to understand the scope and limits of what's already
   1.935 +been achieved in order to see the potential for going beyond it and to
   1.936 +build on what's already been done to make that&hellip;those future steps.
   1.937 +</p>
   1.938 +<p>
   1.939 +[36:10] So if we reach that stage, we will have reached the
   1.940 +singularity of cognitive catch-up because the process of education
   1.941 +that enables individuals to learn faster than their ancestors did is
   1.942 +the catching-up process, and it may just be that we at some point
   1.943 +reach a point where catching up can only happen within a lifetime of
   1.944 +an individual, and after that they're dead and they can't go
   1.945 +beyond. And I have some evidence that there's a lot of that around
   1.946 +because I see a lot of people coming up with what <i>they</i> think of as
   1.947 +new ideas which they've struggled to come up with, but actually they
   1.948 +just haven't taken in some of what was&hellip;some of what was done [] by
   1.949 +other people, in other places before them. And I think that despite
   1.950 +the availability of search engines which make it <i>easier</i> for people
   1.951 +to get the information&mdash;for instance, when I was a student, if I
   1.952 +wanted to find out what other people had done in the field, it was a
   1.953 +laborious process&mdash;going to the library, getting books, and
   1.954 +&mdash;whereas now, I can often do things in seconds that would have taken
   1.955 +hours. So that means that if seconds [are needed] for that kind of
   1.956 +work, my lifespan has been extended by a factor of ten or
   1.957 +something. So maybe that <i>delays</i> the singularity, but it may not
   1.958 +delay it enough. But that's an open question; I don't know. And it may
   1.959 +just be that in some areas, this is more of a problem than others. For
   1.960 +instance, it may be that in some kinds of engineering, we're handing
   1.961 +over more and more of the work to machines anyways and they can go on
   1.962 +doing it.  So for instance, most of the production of computers now is
   1.963 +done by a computer-controlled machine&mdash;although some of the design
   1.964 +work is done by humans&mdash; a lot of <i>detail</i> of the design is done by
   1.965 +computers, and they produce the next generation, which then produces
   1.966 +the next generation, and so on.
   1.967 +</p>
   1.968 +<p>
   1.969 +[37:57] I don't know if humans can go on having major advances, so
   1.970 +it'll be kind of sad if we can't.
   1.971 +</p>
   1.972 +</div>
   1.973 +</div>
   1.974 +
   1.975 +</div>
   1.976 +
   1.977 +<div id="outline-container-7" class="outline-2">
   1.978 +<h2 id="sec-7"><span class="section-number-2">7</span> Spatial reasoning: a difficult problem</h2>
   1.979 +<div class="outline-text-2" id="text-7">
   1.980 +
   1.981 +
   1.982 +<p>
    1.983 +[38:15] Okay, well, there are different problems [ ] mathematics, and
    1.984 +they have to do with properties. So for instance, a lot of
    1.985 +mathematics can be expressed in terms of logical structures or
    1.986 +algebraic structures, and those are pretty well suited for
    1.987 +manipulation on computers; and if a problem can be specified using
    1.988 +the logical/algebraic notation, and the solution method requires
    1.989 +creating something in that sort of notation, then computers are
    1.990 +pretty good, and there are lots of mathematical tools around&mdash;
    1.991 +there are theorem provers and theorem checkers, and all kinds of
    1.992 +things, which couldn't have existed fifty, sixty years ago, and they
    1.993 +will continue getting better.
   1.994 +</p>
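          +
          +<p>
          +To make that concrete: checking or transforming an algebraic formula
          +is exactly the kind of symbol manipulation such tools automate. A
          +minimal sketch of that kind of manipulation&mdash;an illustration
          +only, assuming Python with the <i>sympy</i> library:
          +</p>
          +<pre>
          +import sympy as sp
          +
          +a, b = sp.symbols('a b')
          +
          +# Verifying an algebraic identity is pure symbol manipulation:
          +assert sp.simplify((a + b)**2 - (a**2 + 2*a*b + b**2)) == 0
          +
          +# Substituting one expression into another, as in a logical formula:
          +print((a**2 + 1).subs(a, b + 1))   # -> (b + 1)**2 + 1
          +</pre>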
   1.995 +
   1.996 +<p>
    1.997 +But there was something that I was <a href="#sec-3-4">alluding to earlier</a> when I gave the
    1.998 +example of how you can reason about what you will see by changing
    1.999 +your position in relation to a door. What you are doing there is
   1.1000 +using your grasp of spatial structures, and of how, as one spatial
   1.1001 +relationship changes&mdash;namely, you come closer to the door, or
   1.1002 +you move sideways, parallel to the wall, or whatever&mdash;other
   1.1003 +spatial relationships change in parallel: the lines from your eyes
   1.1004 +through to parts of the room on the other side of the doorway spread
   1.1005 +out more as you go towards the doorway, and as you move sideways
   1.1006 +they don't spread out differently, but they access different parts
   1.1007 +of the room.
  1.1009 +</p>
  1.1010 +<p>
  1.1011 +Now, those are examples of ways of thinking about relationships and
  1.1012 +changing relationships which are not the same as thinking about what
  1.1013 +happens if I replace this symbol with that symbol, or if I substitute
  1.1014 +this expression in that expression in a logical formula.  And at the
  1.1015 +moment, I do not believe that there is anything in AI amongst the
  1.1016 +mathematical reasoning community, the theorem-proving community, that
  1.1017 +can model the processes that go on when a young child starts learning
   1.1018 +to do Euclidean geometry and is taught things like this&mdash;for
   1.1019 +instance, I can give you a proof that the angles of any triangle add
   1.1020 +up to a straight line, 180 degrees.
  1.1021 +</p>
  1.1022 +
  1.1023 +</div>
  1.1024 +
  1.1025 +<div id="outline-container-7-1" class="outline-3">
  1.1026 +<h3 id="sec-7-1"><span class="section-number-3">7.1</span> Example: Spatial proof that the angles of any triangle add up to a half-circle</h3>
  1.1027 +<div class="outline-text-3" id="text-7-1">
  1.1028 +
   1.1029 +<p>There are standard proofs, which involve starting with one triangle and
   1.1030 +then adding a line parallel to the base. But one of my former students,
   1.1031 +Mary Pardoe, came up with a different proof, which I will demonstrate
   1.1032 +with this &lt;he holds up a pen&gt; &mdash; can you see it? If I have a
   1.1033 +triangle here that's got three sides, and I put this thing on it, on one
   1.1034 +side &mdash; let's say the bottom &mdash; I can rotate it until it lies
   1.1035 +along another side, and then move it up to the other end. Then I can
   1.1036 +rotate it again, until it lies on the third side, and move it back to the
  1.1037 +other end. And then I'll rotate it again and it'll eventually end up
  1.1038 +on the original side, but it will have changed the direction it's
  1.1039 +pointing in &mdash; and it won't have crossed over itself so it will have
  1.1040 +gone through a half-circle, and that says that the three angles of a
   1.1041 +triangle add up to the rotation of half a circle, which is a
  1.1042 +beautiful kind of proof and almost anyone can understand it. Some
  1.1043 +mathematicians don't like it, because they say it hides some of the
  1.1044 +assumptions, but nevertheless, as far as I'm concerned, it's an
  1.1045 +example of a human ability to do reasoning which, once you've
  1.1046 +understood it, you can see will apply to any triangle &mdash; it's got to
   1.1047 +be a planar triangle &mdash; not a triangle on a globe, because then the
   1.1048 +angles can add up to more than that &hellip; you can have three <i>right</i>
   1.1049 +angles if you have a line on the equator, and a line going up to the
   1.1050 +north pole of the earth, and then you have a right angle and
  1.1051 +then another line going down to the equator, and you have a right
  1.1052 +angle, right angle, right angle, and they add up to more than a
  1.1053 +straight line. But that's because the triangle isn't in the plane,
   1.1054 +it's on a curved surface. In fact, that's one of the definitional
   1.1055 +differences you can take between planar and curved surfaces: how much
   1.1056 +the angles of a triangle add up to. But our
  1.1057 +ability to <i>visualize</i> and notice the generality in that process, and
  1.1058 +see that you're going to be able to do the same thing using triangles
   1.1059 +that stretch in all sorts of ways, or if it's a million times as
   1.1060 +large, or if it's drawn in different colors or whatever &mdash; none of
   1.1061 +that's going to make any difference to the essence of that process.
   1.1062 +And that ability to see the commonality in a spatial structure
   1.1063 +enables you to draw some conclusions with complete
   1.1064 +certainty&mdash;subject to the possibility that sometimes you make
   1.1065 +mistakes; but when you make mistakes, you can discover them, as has
   1.1066 +happened in the history of geometrical theorem
  1.1067 +proving. Imre Lakatos had a wonderful book called <a href="http://en.wikipedia.org/wiki/Proofs_and_Refutations"><i>Proofs and Refutations</i></a> &mdash; which I won't try to summarize &mdash; but he has
  1.1068 +examples: mistakes were made; that was because people didn't always
  1.1069 +realize there were subtle subcases which had slightly different
   1.1070 +properties, and they didn't take account of that. But once such
   1.1071 +cases are noticed, you rectify the proof.
  1.1072 +</p>
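          +
          +<p>
          +The rotation argument, and the spherical counterexample, can both be
          +checked numerically. A minimal sketch, assuming Python with numpy (an
          +illustration of the two claims, not of how humans reason about them):
          +for any planar triangle the interior angles sum to &pi;, while the
          +triangle with one vertex at the north pole and two on the equator has
          +three right angles, summing to 3&pi;/2.
          +</p>
          +<pre>
          +import numpy as np
          +
          +def interior_angles(p, q, r):
          +    """Interior angles (radians) of the planar triangle p, q, r."""
          +    out = []
          +    for a, b, c in [(p, q, r), (q, r, p), (r, p, q)]:
          +        u, v = b - a, c - a
          +        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
          +        out.append(np.arccos(np.clip(cos, -1.0, 1.0)))
          +    return out
          +
          +# Whatever planar triangle you pick, the angles sum to pi (half a turn).
          +tri = np.random.default_rng(0).uniform(-5, 5, size=(3, 2))
          +print(sum(interior_angles(*tri)))   # ~3.14159
          +
          +# Spherical triangle: north pole plus two equator points 90 degrees
          +# apart.  The angle at a vertex is the angle between the tangent
          +# directions of the two great-circle arcs leaving it.
          +def tangent(at, towards):
          +    t = towards - np.dot(at, towards) * at   # drop the radial component
          +    return t / np.linalg.norm(t)
          +
          +P = [np.array([0.0, 0.0, 1.0]),   # north pole
          +     np.array([1.0, 0.0, 0.0]),   # on the equator
          +     np.array([0.0, 1.0, 0.0])]   # on the equator, 90 degrees away
          +total = 0.0
          +for i in range(3):
          +    a, b, c = P[i], P[(i + 1) % 3], P[(i + 2) % 3]
          +    total += np.arccos(np.clip(np.dot(tangent(a, b), tangent(a, c)), -1, 1))
          +print(total)   # 3*pi/2, about 4.712: three right angles
          +</pre>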
  1.1073 +</div>
  1.1074 +
  1.1075 +</div>
  1.1076 +
  1.1077 +<div id="outline-container-7-2" class="outline-3">
  1.1078 +<h3 id="sec-7-2"><span class="section-number-3">7.2</span> Geometric results are fundamentally different than experimental results in chemistry or physics.</h3>
  1.1079 +<div class="outline-text-3" id="text-7-2">
  1.1080 +
  1.1081 +<p>[43:28] But it's not the same as doing experiments in chemistry and
  1.1082 +physics, where you can't be sure it'll be the same on [] or at a high
  1.1083 +temperature, or in a very strong magnetic field &mdash; with geometric
   1.1084 +reasoning, in some sense you've got the full information in front of
   1.1085 +you, even if you don't always notice an important part of it. So that
   1.1086 +kind of reasoning (as far as I know) is not implemented anywhere in a
   1.1087 +computer. And most people who do research on trying to model
   1.1088 +mathematical reasoning don't pay any attention to it&mdash;they just
   1.1089 +don't think about it. They start from somewhere else,
  1.1090 +maybe because of how they were educated. I was taught Euclidean
  1.1091 +geometry at school. Were you?
  1.1092 +</p>
  1.1093 +<p>
   1.1094 +(Adam Ford: Yeah)
  1.1095 +</p>
  1.1096 +<p>
   1.1097 +Many people are not taught it now. Instead they're taught set theory, and
  1.1098 +logic, and arithmetic, and [algebra], and so on. And so they don't use
  1.1099 +that bit of their brains, without which we wouldn't have built any of
  1.1100 +the cathedrals, and all sorts of things we now depend on.
  1.1101 +</p>
  1.1102 +</div>
  1.1103 +</div>
  1.1104 +
  1.1105 +</div>
  1.1106 +
  1.1107 +<div id="outline-container-8" class="outline-2">
  1.1108 +<h2 id="sec-8"><span class="section-number-2">8</span> Is near-term artificial general intelligence likely?</h2>
  1.1109 +<div class="outline-text-2" id="text-8">
  1.1110 +
  1.1111 +
  1.1112 +
  1.1113 +</div>
  1.1114 +
  1.1115 +<div id="outline-container-8-1" class="outline-3">
  1.1116 +<h3 id="sec-8-1"><span class="section-number-3">8.1</span> Two interpretations: a single mechanism for all problems, or many mechanisms unified in one program.</h3>
  1.1117 +<div class="outline-text-3" id="text-8-1">
  1.1118 +
  1.1119 +
  1.1120 +<p>
  1.1121 +[44:35] Well, this relates to what's meant by general. And when I
  1.1122 +first encountered the AGI community, I thought that what they all
   1.1123 +meant by general intelligence was <i>uniform</i> intelligence&mdash;
  1.1124 +intelligence based on some common simple (maybe not so simple, but)
  1.1125 +single powerful mechanism or principle of inference. And there are
  1.1126 +some people in the community who are trying to produce things like
  1.1127 +that, often in connection with algorithmic information theory and
   1.1128 +computability of information, and so on. But there's another sense of
   1.1129 +general, which means that a system of general intelligence can do
   1.1130 +lots of different things, like perceive things, understand language,
  1.1131 +move around, make things, and so on &mdash; perhaps even enjoy a joke;
   1.1132 +that's something that's nowhere near on the horizon, as far as I
  1.1133 +know. Enjoying a joke isn't the same as being able to make laughing
  1.1134 +noises. 
  1.1135 +</p>
  1.1136 +<p>
   1.1137 +Given, then, that there are these two notions of general
   1.1138 +intelligence&mdash;one that looks for a single uniform, possibly
   1.1139 +simple, mechanism or collection of ideas and notations and
   1.1140 +algorithms that will deal with any problem that's solvable, and the
   1.1141 +other that's general in the sense that it can do lots of different
   1.1142 +things that are combined into an integrated architecture (which
   1.1143 +raises lots of questions about how you combine these things and make
   1.1144 +them work together)&mdash;we humans, certainly, are of the second
   1.1145 +kind: we do all sorts of different things, and other animals also
   1.1146 +seem to be of the second kind, perhaps not as general as humans.
   1.1147 +Now, it may turn out that at some near future time&mdash;who knows,
   1.1148 +decades, a few decades&mdash;you'll be able to get machines of the
   1.1149 +first kind, capable of solving any problem that is solvable, in a
   1.1150 +time that depends on the nature of the problem but is in some sort
   1.1151 +of tractable range&mdash;of course, there are some solvable problems
   1.1152 +that would require a larger universe and a longer history than the
   1.1153 +history of the universe, but apart from that constraint, these
   1.1154 +machines will be able to do anything []. But being able to do some
   1.1155 +of the kinds of things that humans can do&mdash;like the kinds of
   1.1156 +geometrical reasoning where you look at the shape and you abstract
   1.1157 +away from the precise angles and sizes and shapes and so on, and
   1.1158 +realize there's something general here, as must have happened when
   1.1159 +our ancestors first made the discoveries that were eventually put
   1.1160 +together in Euclidean geometry&mdash;is another matter.
  1.1161 +</p>
  1.1162 +<p>
  1.1163 +It may be that that requires mechanisms of a kind that we don't know
  1.1164 +anything about at the moment. Maybe brains are using molecules and
  1.1165 +rearranging molecules in some way that supports that kind of
  1.1166 +reasoning. I'm not saying they are &mdash; I don't know, I just don't see
   1.1167 +any simple or obvious way to map that kind of reasoning capability
   1.1168 +onto what we currently do on computers. There is&mdash;and I just
   1.1169 +mentioned this briefly beforehand&mdash;a kind of thing that's
  1.1170 +sometimes thought of as a major step in that direction, namely you can
  1.1171 +build a machine (or a software system) that can represent some
  1.1172 +geometrical structure, and then be told about some change that's going
  1.1173 +to happen to it, and it can predict in great detail what'll
  1.1174 +happen. And this happens for instance in game engines, where you say
  1.1175 +we have all these blocks on the table and I'll drop one other block,
  1.1176 +and then [the thing] uses Newton's laws and properties of rigidity of
  1.1177 +the parts and the elasticity and also stuff about geometries and space
  1.1178 +and so on, to give you a very accurate representation of what'll
  1.1179 +happen when this brick lands on this pile of things, [it'll bounce and
   1.1180 +go off, and so on]. And, with more memory and more CPU power, you can
   1.1181 +increase the accuracy&mdash;but that's totally different from
  1.1182 +looking at <i>one</i> example, and working out what will happen in a whole
  1.1183 +<i>range</i> of cases at a higher level of abstraction, whereas the game
  1.1184 +engine does it in great detail for <i>just</i> this case, with <i>just</i> those
  1.1185 +precise things, and it won't even know what the generalizations are
  1.1186 +that it's using that would apply to others []. So, in that sense, [we]
  1.1187 +may get AGI &mdash; artificial general intelligence &mdash; pretty soon, but
   1.1188 +it'll be limited in what it can do. And the other kind of general
   1.1189 +intelligence&mdash;which combines all sorts of different things,
   1.1190 +including human spatial geometrical reasoning, and maybe other
   1.1191 +things, like the ability to find things funny and to appreciate
   1.1192 +artistic features&mdash;may need forms of pattern-mechanism, and I
   1.1193 +have an open mind about that.
  1.1194 +</p>
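          +
          +<p>
          +The contrast can be made concrete. A game-engine-style predictor
          +steps Newton's laws forward numerically for one precise initial
          +state; to learn about a different block you must re-run the whole
          +simulation, and nothing in it represents the general fact that any
          +dropped block will fall and bounce. A minimal sketch, assuming
          +Python (a toy one-dimensional simulation, not any real engine):
          +</p>
          +<pre>
          +G = 9.81            # gravitational acceleration, m/s^2
          +DT = 0.001          # integration time step, s
          +RESTITUTION = 0.5   # fraction of speed surviving a bounce (elasticity)
          +
          +def drop(height, velocity=0.0, t_end=2.0):
          +    """Predict one block's height and velocity after t_end seconds."""
          +    h, v, t = height, velocity, 0.0
          +    while t &lt; t_end:
          +        v -= G * DT                   # Newton: dv/dt = -g
          +        h += v * DT                   # dh/dt = v
          +        if h &lt;= 0.0:                  # the block hits the table
          +            h, v = 0.0, -v * RESTITUTION
          +        t += DT
          +    return h, v
          +
          +# One precise case, predicted in great detail; a smaller DT buys more
          +# accuracy, but there is no abstraction over the range of cases.
          +print(drop(1.0))
          +</pre>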
  1.1195 +</div>
  1.1196 +</div>
  1.1197 +
  1.1198 +</div>
  1.1199 +
  1.1200 +<div id="outline-container-9" class="outline-2">
  1.1201 +<h2 id="sec-9"><span class="section-number-2">9</span> Abstract General Intelligence impacts</h2>
  1.1202 +<div class="outline-text-2" id="text-9">
  1.1203 +
  1.1204 +
  1.1205 +<p>
   1.1206 +[49:53] Well, as far as the first type's concerned, it could be useful
   1.1207 +for all kinds of applications &mdash; there are people who worry that,
   1.1208 +where there's a system that has that type of intelligence, it might in
   1.1209 +some sense take over control of the planet. Well, humans often do
  1.1210 +stupid things, and they might do something stupid that would lead to
   1.1211 +disaster, but I think it's more likely that there would be other
   1.1212 +things [that] lead to disaster&mdash;population problems, using up all the
  1.1213 +resources, destroying ecosystems, and whatever. But certainly it would
   1.1214 +go on being useful to have these calculating devices. Now, as for the
   1.1215 +second kind, I don't know&mdash;if we succeeded at putting together
   1.1216 +all the parts that we find in humans, we might just make an
   1.1217 +artificial human, and then we might have some of them as our friends,
   1.1218 +and some of them we might not like, and some of them might become
  1.1219 +teachers or whatever, composers &mdash; but that raises a question: could
  1.1220 +they, in some sense, be superior to us, in their learning
  1.1221 +capabilities, their understanding of human nature, or maybe their
  1.1222 +wickedness or whatever &mdash; these are all issues in which I expect the
  1.1223 +best science fiction writers would give better answers than anything I
   1.1224 +could do, but I did once fantasize, back in 1978, that perhaps if we
   1.1225 +achieved that kind of thing, they would be wise, and gentle and kind,
   1.1226 +and realize that humans are an inferior species that, you know, has
   1.1227 +some good features, so they'd keep us in some kind of secluded,
   1.1228 +restrictive kind of environment, keep us away from dangerous
   1.1229 +weapons, and so on, and find ways of cohabiting with us. But that's
   1.1230 +just fantasy.
  1.1231 +</p>
  1.1232 +<p>
   1.1233 +Adam Ford: Awesome. Yeah, there's an interesting story, <i>With Folded Hands</i>, where [the computers] want to take care of us and to
   1.1234 +reduce suffering, and end up lobotomizing everybody [but] keeping them
   1.1235 +alive so as to reduce the suffering.
  1.1236 +</p>
  1.1237 +<p>
  1.1238 +Aaron Sloman: Not all that different from <i>Brave New World</i>, where it
  1.1239 +was done with drugs and so on, but different humans are given
  1.1240 +different roles in that system, yeah.
  1.1241 +</p>
  1.1242 +<p>
   1.1243 +There's also <i>The Time Machine</i>, by H.G. Wells, where in the
   1.1244 +distant future humans have split in two: the Eloi, I think they were
   1.1245 +called, lived underground; they were the [] ones&mdash;no, the
   1.1246 +Morlocks lived underground; the Eloi lived on the surface; they were
   1.1247 +pleasant and pretty but not very bright, and so on, and they were fed
   1.1248 +on by &hellip;
  1.1249 +</p>
  1.1250 +<p>
  1.1251 +Adam Ford: [] in the future.
  1.1252 +</p>
  1.1253 +<p>
  1.1254 +Aaron Sloman: As I was saying, if you ask science fiction writers,
  1.1255 +you'll probably come up with a wide variety of interesting answers. 
  1.1256 +</p>
  1.1257 +<p>
  1.1258 +Adam Ford: I certainly have; I've spoken to [] of Birmingham, and
  1.1259 +Sean Williams, &hellip; who else? 
  1.1260 +</p>
  1.1261 +<p>
   1.1262 +Aaron Sloman: Did you ever read a story by E.M. Forster called <i>The Machine Stops</i> &mdash; a very short story; it's <a href="http://archive.ncsa.illinois.edu/prajlich/forster.html">on the Internet somewhere</a>.
   1.1263 +It was written in about [1909], so about a hundred years ago. People
   1.1264 +are in their rooms, they sit in front of screens, and they type
   1.1265 +things, and they communicate with one another that way, and they
   1.1266 +don't meet; they have debates, and they give lectures to their
   1.1267 +audiences that way, and then there's a woman whose son says
   1.1268 +&ldquo;I'd like to see you&rdquo; and she says &ldquo;What's the
   1.1269 +point? You've got me at this point&rdquo;&mdash;but he wants to come
   1.1270 +and talk to her. I won't tell you how it ends.
  1.1272 +</p>
  1.1273 +<p>
  1.1274 +Adam Ford: Reminds me of the Internet.
  1.1275 +</p>
  1.1276 +<p>
  1.1277 +Aaron Sloman: Well, yes; he invented &hellip; it was just extraordinary
  1.1278 +that he was able to do that, before most of the components that we
  1.1279 +need for it existed.
  1.1280 +</p>
  1.1281 +<p>
  1.1282 +Adam Ford: [Another person who did that] was Vernor Vinge [] <i>True Names</i>. 
  1.1283 +</p>
  1.1284 +<p>
  1.1285 +Aaron Sloman: When was that written?
  1.1286 +</p>
  1.1287 +<p>
  1.1288 +Adam Ford: The seventies.
  1.1289 +</p>
  1.1290 +<p>
   1.1291 +Aaron Sloman: Okay, well, a lot of the technology was already around
   1.1292 +then. The original bits of the Internet were working in about 1973. In
   1.1293 +1974, I was sitting at Sussex University trying to learn LOGO, the
   1.1294 +programming language, to decide whether it was going to be useful for
   1.1295 +teaching AI, and I was sitting at [a] paper teletype&mdash;there was
   1.1296 +paper coming out&mdash;transmitting ten characters a second from Sussex
   1.1297 +to the UCL computer lab by telegraph cable, from there to somewhere in
   1.1298 +Norway via another cable, and from there by satellite to California, to
   1.1299 +a computer at the Xerox [] research center, where they had implemented
   1.1300 +a LOGO system, with someone I had met previously in Edinburgh, Danny
   1.1301 +Bobrow, and he allowed me to have access to this system. So there I
   1.1302 +was, typing. And furthermore, it was duplex typing, so every character
   1.1303 +I typed didn't show up on my terminal until it had gone all the way
   1.1304 +there and echoed back, so I would type, and the characters would come
   1.1305 +back four seconds later.
  1.1306 +</p>
  1.1307 +<p>
  1.1308 +[55:26] But that was the Internet, and I think Vernor Vinge was
  1.1309 +writing after that kind of thing had already started, but I don't
  1.1310 +know. Anyway.
  1.1311 +</p>
  1.1312 +<p>
  1.1313 +[55:41] Another&hellip;I mentioned H.G. Wells, <i>The Time Machine</i>. I
  1.1314 +recently discovered, because <a href="http://en.wikipedia.org/wiki/David_Lodge_(author)">David Lodge</a> had written a sort of
   1.1315 +semi-novel about him, that he had invented Wikipedia in advance&mdash;he
   1.1316 +had this notion of an encyclopedia that was free to everybody, and
  1.1317 +everybody could contribute and [collaborate on it]. So, go to the
  1.1318 +science fiction writers to find out the future &mdash; well, a range of
  1.1319 +possible futures.
  1.1320 +</p>
  1.1321 +<p>
   1.1322 +Adam Ford: Well, the thing is with science fiction writers, they have
   1.1323 +to maintain some sort of interest for their readers; after all, the
   1.1324 +science fiction which reaches us is the stuff that publishers want to
   1.1325 +sell, and so there's a little bit of a bias towards making a plot
   1.1326 +device there, and the dramatic sort of appeals to our amygdala, our
   1.1327 +lizard brain; we'll sort of stay there, obviously, to some extent. But
   1.1328 +I think that they do come up with sort of amazing ideas; I think it's
   1.1329 +worth trying to make these predictions; I think that we should spend
   1.1330 +more time on strategic forecasting&mdash;I mean, take that seriously.
  1.1331 +</p>
  1.1332 +<p>
  1.1333 +Aaron Sloman: Well, I'm happy to leave that to others; I just want to
  1.1334 +try to understand these problems that bother me about how things
  1.1335 +work. And it may be that some would say that's irresponsible if I
   1.1336 +don't think about what the implications will be. Well, understanding
   1.1337 +how humans work <i>might</i> enable us to make [] humans&mdash;I suspect it
   1.1338 +won't happen in this century; I think it's going to be too difficult.
  1.1339 +</p></div>
  1.1340 +</div>
  1.1341 +</div>
  1.1342 +
  1.1343 +<div id="postamble">
  1.1344 +<p class="date">Date: 2013-10-04 18:49:53 UTC</p>
  1.1345 +<p class="author">Author: Dylan Holmes</p>
  1.1346 +<p class="creator">Org version 7.7 with Emacs version 23</p>
  1.1347 +<a href="http://validator.w3.org/check?uri=referer">Validate XHTML 1.0</a>
  1.1348 +
  1.1349 +</div>
  1.1350 +</body>
  1.1351 +</html>