# HG changeset patch # User Robert McIntyre # Date 1401816298 14400 # Node ID 414a10d51d9f910eaf1e9010e1bf9a2ce7074646 # Parent 174455d6d0ba23f05113ad817aaef2ceadf787c7 stuff from dylan? diff -r 174455d6d0ba -r 414a10d51d9f org/goodlit.org --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/org/goodlit.org Tue Jun 03 13:24:58 2014 -0400 @@ -0,0 +1,114 @@ +#+TITLE: Good readables +#+AUTHOR: Dylan Holmes +#+EMAIL: rlm@mit.edu +#+SETUPFILE: ../../aurellem/org/setup.org +#+INCLUDE: ../../aurellem/org/level-0.org +#+MATHJAX: align:"left" mathml:t path:"http://www.aurellem.org/MathJax/MathJax.js" +#+ + +* Non-narrative books +| Title | Author | About | +|----------------------------------------------------------------+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Formal Theories of the Commonsense World | Hobbs \amp{} Moore | Essays about the rules of intuitive physics---how we naturally reason about the physical world before we learn about (for example) molecules, inverse square laws, and quantum mechanics. | +| Vision | Marr | Describes successes in computer vision, and outlines effective policies for thinking about and researching intelligence. | +| The Structure and Interpretation of Classical Mechanics (SICM) | Sussman \amp{} Wisdom | This book uses programming as a tool for analysing otherwise-intractable physical systems, and as a medium for explicit, unambiguous expression of ideas\mdash{}as an alternative to mathematical notation, the meanings of which often depend on implicit conventions. | +| The Computer Revolution in Philosophy | Sloman | Elucidates the essential role of philosophy in science, and the newfound revolutionary role of computers and computational language in philosophy (especially in philosophy of mind). 
| +| Freedom Evolves | Dennett | An eminently readable book on how free will can develop within a deterministic universe in general, and in mechanistic robots in particular. | +| Philosophical Investigations | Wittgenstein | A seminal, if disconnected, philosophical treatise which describes how meanings of sentences arise through learned, lazily-evaluated \ldquo{}choreographies\rdquo{}. | +| The Emotion Machine | Minsky | A wonderful compendium of ideas about the construction of intelligence, presented in clever, non-technical language. | +| The Myth of Mental Illness | Szasz | A thought-provoking look at mental illness/treatment as a form of social control. | +| BUGS in Writing | Dupre | A whimsical and erudite handbook for writing clearly, with an emphasis on writing for science/technology. | +| The Visual Display of Quantitative Information | Tufte | A learned tour of good practices for organizing information and communicating large amounts of it with minimal clutter. | +| The Algebra of Programming | Bird \amp{} de Moor | An abstract but deep textbook which shows how programs can be specified, combined, optimized, and analyzed using a new kind of algebra. | +| Probability Theory: The Logic of Science | Jaynes | A heterodox textbook which presents probability theory as a kind of logical inference. | +| Category Theory | Awodey | Friendly and thorough introduction to the ideas of category theory \mdash{} less imposing than (but a good companion to) MacLane's /Categories for the Working Mathematician/. | +| Algebra: Chapter 0 | Aluffi | Friendly and thorough introduction to the ideas of abstract algebra \mdash{} less imposing than MacLane \amp{} Birkhoff's /Algebra/. | +| Principles of Quantum Mechanics | Shankar | A careful, solid introduction to quantum mechanics. Less chatty than Griffiths's /Introduction to Quantum Mechanics/. | + + +* COMMENT Articles, etc. 
+ +| Title | Author | About | +|---------------------------------------+----------+------------------------------------------------------------------------------------------------------------------| +| A Computer Model of Skill Acquisition | Sussman | Sussman explores learning how to do things as the process of developing and debugging programs to do them. He finds that this paradigm is an effective way to think about learning; for example, with each lesson, his program learns something new. He incidentally also concludes that automatic program-writers are impracticable, because ideas are often developed in a /cycle/ of suggesting/coding/debugging. | +| A Universal Banach Space | Leinster | Very brief, but neat: integration is a \ldquo{}universal object\rdquo{} (in the terminology of category theory). | +| | | | + + +* Narratives +| Title | Author | About | +|--------------------------------------+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| The Adventures of Sherlock Holmes | Doyle, Arthur Conan | An austere and brilliant detective who possesses razor-sharp skills in observation, analysis of character, and the collection of mundane facts. I particularly like the Sherlock Holmes novels because Holmes's talents are apparently attainable through practice. | +| The Lives of Christopher Chant | Jones, Diana Wynne | An excellent story, with wondrously fun magic and enjoyable characters. Part of the /Chrestomanci/ series. | +| The Hitchhiker's Guide to the Galaxy | Adams, Douglas | A subtly hilarious existentialist novel about how bureaucratic incompetence leads to the destruction of planet Earth, and about the lone unfortunate British man who survives. | +| Matter | Banks, Iain | An epic science fiction novel. 
As with all novels in the /Culture/ series, /Matter/ is remarkable for the following: (1) Tangibly portraying the vastness of space and the incomprehensibility of advanced intelligence, (2) Artful diction, (3) Fearless use of unorthodox narrative structure. | +| The Magicians | Grossman, Lev | An approximate analogy: \ldquo{}The Magicians\rdquo{} is to \ldquo{}Narnia\rdquo{} as \ldquo{}Wicked\rdquo{} is to \ldquo{}Oz\rdquo{}. A gritty realistic book about a perennially morose overachiever and his adventures in relentlessly unsentimental places. | + +# Aaron Sloman +# Dennet's humor book +# Hofstadter's I Am a Strange Loop + +# Woolf's The Waves + + + + +# * Books Relating to Artificial Intelligence + +# ** The Emotion Machine, by Marvin Minsky +# ** Formal Theories of the Commmonsense World, by Hobbs and Moore +# ([[http://books.google.com/books/about/Formal_Theories_of_the_Commonsense_World.html?id=aUO2PCw-vdgC][link to Formal Theories]]) + +# ** Vision, by David Marr + +# ** Philosophical Investigations, by Ludwig Wittgenstein + +# ** Probability Theory: The Logic of Science, by Edwin Jaynes + +# ** The Algebra of Programming, by Odgen and de Moor + + +# * Other Exemplary Books + +# ** Real and Complex Analysis, by Walter Rudin +# This book is a model of terseness in exposition --- it's theoretically +# elegant, though pedagogically impenetrable. + +# ** Category Theory, by Steve Awodey +# This book is a friendly and exciting introduction to category theory. + +# ** Principles of Quantum Mechanics, by Ramamurti Shankar +# Rigorous yet engaging. 
+ +# ** Algebra: Chapter 0, by Aluffi + +# ** BUGS in Writing, by Lynn Dupr\eacute{} + +# ** The Visual Display of Quantitative Information, by Edward Tufte + +# # Piaget +# # Hans Freudenthal +# ** How to Solve It, by Polya + + +# #* Interesting Papers + +# #** A Universal Banach Space, by Tom Leinster +# A short paper in which the set of Lebesgue-measureable functions on +# [0,1] emerges as the initial object in a category of Banach +# spaces, and in which integration is a catamorphism. + +# #** The Eudoxus Real numbers, by R. Arthan +# This paper constructs the real numbers in terms of +# proportionalities. + + +# http://arxiv.org/abs/math/0405454 + +# #** Tolerance Space Theory and Some Applications, by Sossinsky + +# #Tolerance spaces +# #The Eudoxus real numbers + + +# # AARON SLOMAN diff -r 174455d6d0ba -r 414a10d51d9f org/moon.org --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/org/moon.org Tue Jun 03 13:24:58 2014 -0400 @@ -0,0 +1,20 @@ +#+BEGIN_VERSE +Heaven is the darkness of a moon in a mirror +In whose dark glass, long-lost love refracts into a promise +And our incipient sorrows witness their imminent conclusion +In mere gray inches of earth. + +But before this framed firmament, we can merely stand and sigh +And see our lives begun knotted with their own undoing +Before this stygian wall, this shroud of narcissus. + +Yet its stillness is not our stillness, +And beyond the burnished gleaming edges of its flat sky +A new sun rises +Gilding the vast shoreline of our memory +And the dawn-fogged peaks of our unflagging aspiration. +#+END_VERSE + +# aspire next. +# towards which we steadfastly tread. +# steadfast aspiration. diff -r 174455d6d0ba -r 414a10d51d9f org/sloman-old.html --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/org/sloman-old.html Tue Jun 03 13:24:58 2014 -0400 @@ -0,0 +1,1348 @@ + + + + +Transcript of Aaron Sloman - Artificial Intelligence - Psychology - Oxford Interview + + + + + + + + + + + + + + + +
+

Transcript of Aaron Sloman - Artificial Intelligence - Psychology - Oxford Interview

+ + +
+ + + + + + + + + + + + + + + + +

+Editor's note: This is a working draft transcript which I made of +this nice interview of Aaron Sloman. Having just finished one +iteration of transcription, I still need to go in and clean up the +formatting and fix the parts that I misheard, so you can expect the +text to improve significantly in the near future. +

+

+To the extent that this is my work, you have my permission to make +copies of this transcript for your own purposes. Also, feel free to +e-mail me with comments or corrections. +

+

+You can send mail to transcript@aurellem.org. +

+

+Cheers, +

+

+—Dylan +

+
+ + + + + +
+

Table of Contents

+
+ +
+
+ +
+

1 Introduction

+
+ + + +
+ +
+

1.1 Aaron Sloman evolves into a philosopher of AI

+
+ +

[0:09] My name is Aaron Sloman. My first degree many years ago at +Cape Town University was in Physics and Mathematics, and I intended to +go and be a mathematician. I came to Oxford and encountered +philosophers — I had started reading philosophy and discussing +philosophy before then, and then I found that there were philosophers +who said things about mathematics that I thought were wrong, so I +gradually got more and more involved in [philosophy] discussions and +switched to doing a philosophy DPhil. Then I became a philosophy +lecturer, and about six years later I was introduced to artificial +intelligence when I was a lecturer at Sussex University in philosophy, +and I very soon became convinced that the best way to make progress in +philosophy (including philosophy of mathematics, which I +felt I hadn't dealt with adequately in my DPhil), philosophy of mind, +philosophy of language and all +those things—the best way was to try to design and test working +fragments of mind and maybe eventually put them all together, but +initially just working fragments that would do various things.

+

+[1:12] And I learned to program and, with various other people +including Margaret Boden whom you've interviewed, helped +develop an undergraduate degree in AI and other things, and also began +to do research in AI and so on, which I thought of as doing philosophy, +primarily.

+

+[1:29] And then I later moved to the University of Birmingham — I +came in 1991 — and I've been retired for a while, but +I'm not interested in golf or gardening, so I just go on doing +full-time research, and my department is happy to keep me on without paying +me and to provide space and resources, and I go on meeting bright people +at conferences and trying to learn and make progress if I can.

+
+ +
+ +
+

1.2 AI is hard, in part because there are tempting non-problems.

+
+ + +

+One of the things I learnt and understood more and more over the many +years — forty years or so since I first encountered AI — is how +hard the problems are, and in part that's because it's very often +tempting to think the problem is something different from what it +actually is, and then people design solutions to the non-problems, and +I think of most of my work now as just helping to clarify what the +problems are: what is it that we're trying to explain — and maybe +this is leading into what you wanted to talk about: +

+

+I now think that one of the ways of getting a deep understanding of +that is to find out what were the problems that biological evolution +solved, because we are a product of many solutions to many +problems, and if we just try to go in and work out what the whole +system is doing, we may get it all wrong, or badly wrong. +

+ +
+
+ +
+ +
+

2 What problems of intelligence did evolution solve?

+
+ + + +
+ +
+

2.1 Intelligence consists of solutions to many evolutionary problems; no single development (e.g. communication) was key to human-level intelligence.

+
+ + +

+[2:57] Well, first I would challenge that we are the dominant +species. I know it looks like that but actually if you count biomass, +if you count number of species, if you count number of individuals, +the dominant species are microbes — maybe not one of them but anyway +they're the ones who dominate in that sense, and furthermore we are +mostly — we are largely composed of microbes, without which we +wouldn't survive. +

+ +

+[3:27] But there are things that make humans (you could say) best at +those things, or worst at those things, but it's a combination. And I +think it was a collection of developments of which there isn't any +single one. [] there might be, some people say, human language which +changed everything. By our human language, they mean human +communication in words, but I think that was a later development from +what must have started as the use of internal forms of +representation — which are there in nest-building birds, in +pre-verbal children, in hunting mammals — because you can't take in +information about a complex structured environment in which things can +change and you may have to be able to work out what's possible and +what isn't possible, without having some way of representing the +components of the environment, their relationships, the kinds of +things they can and can't do, the kinds of things you might or might +not be able to do — and that kind of capability needs internal +languages, and I and colleagues [at Birmingham] have been referring to +them as generalized languages because some people object to +referring…to using language to refer to something that isn't used +for communication. But from that viewpoint, not only humans but many +other animals developed abilities to do things to their environment to +make them more friendly to themselves, which depended on being able to +represent possible futures, possible actions, and work out what's the +best thing to do. +

+

+[5:13] And nest-building in corvids for instance—crows, magpies, + [hawks], and so on — is way beyond what current robots can do, and + in fact I think most humans would be challenged if they had to go and + find a collection of twigs, one at a time, maybe bring them with just + one hand — or with your mouth — and assemble them into a + structure that, you know, is shaped like a nest, and is fairly rigid, + and you could trust your eggs to it when the wind blows. But they're + doing it, and so … they're not our evolutionary ancestors, but + they're an indication — and that example is an indication — of + what must have evolved in order to provide control over the + environment in that species.

+
+ +
+ +
+

2.2 Speculation about how communication might have evolved from internal languages.

+
+ +

[5:56] And I think hunting mammals, fruit-picking mammals, mammals +that can rearrange parts of the environment, provide shelters, needed +to have ways of representing possible +futures, not just what's there in the environment. I think at a later +stage, that developed into a form of communication, or rather the +internal forms of representation became usable as a basis for +providing [context] to be communicated. And that happened, I think, +initially through performing actions that expressed intentions, and +probably led to situations where an action (for instance, moving some +large object) was performed more easily, or more successfully, or more +accurately if it was done collaboratively. So someone who had worked +out what to do might start doing it, and then a conspecific might be +able to work out what the intention is, because that person has the +same forms of representation and can build theories about what's +going on, and might then be able to help.

+

+[7:11] You can imagine that if that started happening more (a lot of +collaboration based on inferred intentions and plans) then sometimes +the inferences might be obscure and difficult, so the actions might +be enhanced to provide signals as to what the intention is, and what +the best way is to help, and so on. +

+

+[7:35] So, this is all handwaving and wild speculation, but I think +it's consistent with a large collection of facts which one can look at +— and find if one looks for them, but one won't know if [some]one +doesn't look for them — about the way children, for instance, who +can't yet talk, communicate, and the things they'll do, like going to +the mother and turning the face to point in the direction where the +child wants it to look and so on; that's an extreme version of action +indicating intention. +

+

+[8:03] Anyway. That's a very long roundabout answer to one conjecture +that the use of communicative language is what gave humans their +unique power to create and destroy and whatever, and I'm saying that +if by that you mean communicative language, then I'm saying there +was something before that which was non-communicative language, and I +suspect that non-communicative language continues to play a deep role +in all human perception — in mathematical and scientific reasoning, in +problem solving — and we don't understand very much about it.

+

+[8:48] +I'm sure there's a lot more to be said about the development of +different kinds of senses, the development of brain structures and +mechanisms is above all that, but perhaps I've droned on long enough +on that question. +

+ +
+
+ +
+ +
+

3 How do language and internal states relate to AI?

+
+ + +

+[9:09] Well, I think most of the human and animal capabilities that +I've been referring to are not yet to be found in current robots or +[computing] systems, and I think there are two reasons for that: one +is that it's intrinsically very difficult; I think that in particular +it may turn out that the forms of information processing that one can +implement on digital computers as we currently know them may not be as +well suited to performing some of these tasks as other kinds of +computing about which we don't know so much — for example, I think +there may be important special features about chemical computers +which we might [talk about in a little bit? find out about]. +

+ +
+ +
+

3.1 In AI, false assumptions can lead investigators astray.

+
+ +

[9:57] So, one of the problems then is that the tasks are hard … but +there's a deeper problem as to why AI hasn't made a great deal of +progress on these problems that I'm talking about, and that is that +most AI researchers—and this is not just AI +researchers, but [also] philosophers, and psychologists, and people +studying animal behavior—make assumptions about what it is that +animals or humans do, for instance make assumptions about what vision +is for, or assumptions about what motivation is and how motivation +works, or assumptions about how learning works, and then they try — +the AI people try — to model [or] build systems that perform those +assumed functions. So if you get the functions wrong, then even if +you implement some of the functions that you're trying to implement, +they won't necessarily perform the tasks that the initial objective +was to imitate, for instance the tasks that humans, and nest-building +birds, and monkeys and so on can perform.

+
+ +
+ +
+

3.2 Example: Vision is not just about finding surfaces, but about finding affordances.

+
+ +

[11:09] I'll give you a simple example — well, maybe not so simple, +but — it's often assumed that the function of vision in humans (and +in other animals with good eyesight and so on) is to take in optical +information that hits the retina and forms into the (maybe changing +— or, really, in our case definitely changing) patterns of +illumination where there are sensory receptors that detect those +patterns, and then somehow from that information (plus maybe other +information gained from head movement or from comparisons between two +eyes) to work out what there was in the environment that produced +those patterns, and that is often taken to mean “where were the +surfaces off which the light bounced before it came to me”. So +you essentially think of the task of the visual system as being to +reverse the image formation process: so the 3D structure's there, the +lens causes the image to form on the retina, and then the brain goes +back to a model of that 3D structure there. That's a very plausible +theory about vision, and it may be that that's a subset of what +human vision does, but I think James Gibson pointed out that that kind +of thing is not necessarily going to be very useful for an organism, +and it's very unlikely that that's the main function of perception in +general, namely to produce some physical description of what's out +there.

+

+[12:37] What does an animal need? It needs to know what it can do, +what it can't do, what the consequences of its actions will be +…. So, he introduced the word affordance: from his point of +view, the function of vision, of perception, is to inform the organism +of what the affordances are for action, where that would mean what +the animal, given its morphology (what it can do with its mouth, its +limbs, and so on, and the ways it can move) what it can do, what its +needs are, what the obstacles are, and how the environment supports or +obstructs those possible actions.

+

+[13:15] And that's a very different collection of information +structures that you need from, say, “where are all the +surfaces?”: if you've got all the surfaces, deriving the +affordances would still be a major task. So, if you think of the +perceptual system as primarily (for biological organisms) being +devices that provide information about affordances and so on, then the +tasks look very different. And most of the people working, doing +research on computer vision in robots, I think haven't taken all that +on board, so they're trying to get machines to do things which, even +if they were successful, would not make the robots very intelligent +(and in fact, even the ones they're trying to do are not really easy +to do, and they don't succeed very well— although, there's progress; +I shouldn't disparage it too much.) +

+
+ +
+ +
+

3.3 Online and offline intelligence

+
+ + +

+[14:10] It gets more complex as animals get more sophisticated. So, I +like to make a distinction between online intelligence and offline +intelligence. So, for example, if I want to pick something up — like +this leaf <he plucks a leaf from the table> — I was able to select +it from all the others in there, and while moving my hand towards it, +I was able to guide its trajectory, making sure it was going roughly +in the right direction — as opposed to going out there, where I +wouldn't have been able to pick it up — and these two fingers ended +up with a portion of the leaf between them, so that I was able to tell +when I was ready to do that <he clamps the leaf between two fingers> +and at that point, I clamped my fingers and then I could pick up the +leaf.

+

+[14:54] Whereas, — and that's an example of online intelligence: +during the performance of an action (both from the stage where it's +initiated, and during the intermediate stages, and where it's +completed) I'm taking in information relevant to controlling all those +stages, and that relevant information keeps changing. That means I +need stores of transient information which gets discarded almost +immediately and replaced or something. That's online intelligence. And +there are many forms; that's just one example, and Gibson discussed +quite a lot of examples which I won't try to replicate now. +

+

+[15:30] But in offline intelligence, you're not necessarily actually +performing the actions when you're using your intelligence; you're +thinking about possible actions. So, for instance, I could think +about how fast or by what route I would get back to the lecture room +if I wanted to [get to the next talk] or something. And I know where +the door is, roughly speaking, and I know roughly which route I would +take, when I go out, I should go to the left or to the right, because +I've stored information about where the spaces are, where the +buildings are, where the door was that we came out — but in using +that information to think about that route, I'm not actually +performing the action. I'm not even simulating it in detail: the +precise details of direction and speed and when to clamp my fingers, +or when to contract my leg muscles when walking, are all irrelevant to +thinking about a good route, or thinking about the potential things +that might happen on the way. Or what would be a good place to meet +someone who I think [for an acquaintance in particular] — [barber] +or something — I don't necessarily have to work out exactly where +the person's going to stand, or from what angle I would recognize +them, and so on. +

+

+[16:46] So, offline intelligence — which I think became not just a +human competence; I think there are other animals that have aspects of +it: Squirrels are very impressive as you watch them. Gray squirrels at +any rate, as you watch them defeating squirrel-proof birdfeeders, seem +to have a lot of that [offline intelligence], as well as the online +intelligence when they eventually perform the action they've worked +out [] that will get them to the nuts. +

+

+[17:16] And I think that what happened during our evolution is that +mechanisms for acquiring and processing and storing and manipulating +information that is more and more remote from the performance of +actions developed. An example is taking in information about where +locations are that you might need to go to infrequently: There's a +store of a particular type of material that's good for building on +roofs of houses or something out around there in some +direction. There's a good place to get water somewhere in another +direction. There are people that you'd like to go and visit in +another place, and so on. +

+

+[17:59] So taking in information about an extended environment and +building it into a structure that you can make use of for different +purposes is another example of offline intelligence. And when we do +that, we sometimes use only our brains, but in modern times, we also +learned how to make maps on paper and walls and so on. And it's not +clear whether the stuff inside our heads has the same structures as +the maps we make on paper: the maps on paper have a different +function; they may be used to communicate with others, or meant for +looking at, whereas the stuff in your head you don't look at; you +use it in some other way. +

+

+[18:46] So, what I'm getting at is that there's a great deal of human +intelligence (and animal intelligence) which is involved in what's +possible in the future, what exists in distant places, what might have +happened in the past (sometimes you need to know why something is as +it is, because that might be relevant to what you should or shouldn't +do in the future, and so on), and I think there was something about +human evolution that extended that offline intelligence way beyond +that of animals. And I don't think it was just human language, (but +human language had something to do with it) but I think there was +something else that came earlier than language which involves the +ability to use your offline intelligence to discover something that +has a rich mathematical structure. +

+
+ +
+ +
+

3.4 Example: Even toddlers use sophisticated geometric knowledge

+
+ +

[19:44] I'll give you a simple example: if you look through a gap, you +can see something that's on the other side of the gap. Now, you +might see what you want to see, or you might see only part of it. If +you want to see more of it, which way would you move? Well, you could +either move sideways, and look through the gap—and see roughly +the same amount but a different part of it [if it's a ????], or you +could move towards the gap and then your view will widen as you +approach the gap. Now, there's a bit of mathematics in there, insofar +as you are implicitly assuming that information travels in straight +lines, and as you go closer to a gap, the straight lines that you can +draw from where you are through the gap widen as you approach that +gap. Now, there's a kind of theorem of Euclidean geometry in there +which I'm not going to try to state very precisely (and as far as I +know, wasn't stated explicitly in Euclidean geometry) but it's +something every toddler—human toddler—learns. (Maybe other +animals also know it, I don't know.) But there are many more things, +actions to perform, to get you more information about things, actions +to perform to conceal information from other people, actions that will +enable you to operate, to act on a rigid object in one place in order +to produce an effect at another place. So, there's a lot of stuff that +involves lines and rotations and angles and speeds and so on that I +think humans (maybe, to a lesser extent, other animals) develop the +ability to think about in a generic way. That means that you could +take out the generalizations from the particular contexts and then +re-use them in new contexts in ways that I think are not yet +represented at all in AI and in theories of human learning in any [] +way — although some people are trying to study learning of mathematics.
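[Editor's aside, not part of the interview: the implicit theorem can be made concrete. Treat the gap as a segment of width w viewed from a point directly in front of it at distance d; the cone of straight sight-lines through the gap spans an angle of 2·atan(w/2d), which widens as d shrinks. The numbers below are illustrative assumptions.]

```python
import math

def view_angle(gap_width, distance):
    """Angular width (radians) of the cone of straight sight-lines
    through a gap of the given width, seen from a point directly in
    front of it at the given perpendicular distance."""
    return 2 * math.atan(gap_width / (2 * distance))

# Approaching a 1-metre gap widens the view, as the toddler
# implicitly knows; moving sideways leaves the angle about the same.
for d in (4.0, 2.0, 1.0, 0.5):
    print(f"{d} m away: {math.degrees(view_angle(1.0, d)):.1f} degrees")
```

At half a metre from a one-metre gap the visible cone is a right angle; at four metres it has shrunk to a sliver.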

+
+
+ +
+ +
+

4 Animal intelligence

+
+ + + +
+ +
+

4.1 The priority is cataloguing what competences have evolved, not ranking them.

+
+ +

[22:03] I wasn't going to challenge the claim that humans can do more +sophisticated forms of [tracking], just to mention that there are some +things that other animals can do which are in some ways comparable, +and some ways superior to [things] that humans can do. In particular, +there are species of birds and also, I think, some rodents --- +squirrels, or something — I don't know enough about the variety --- +that can hide nuts and remember where they've hidden them, and go back +to them. And there have been tests which show that some birds are able +to hide tens — you know, [eighteen] or something nuts — and to +remember which ones have been taken, which ones haven't, and so +on. And I suspect most humans can't do that. I wouldn't want to say +categorically that maybe we couldn't, because humans are very +[varied], and also [a few] people can develop particular competences +through training. But it's certainly not something I can do. +

+ +
+ +
+ +
+

4.2 AI can be used to test philosophical theories

+
+ +

[23:01] But I also would like to say that I am not myself particularly +interested in trying to align animal intelligences according to any +kind of scale of superiority; I'm just trying to understand what it +was that biological evolution produced, and how it works, and I'm +interested in AI mainly because I think that when one comes up with +theories about how these things work, one needs to have some way of +testing the theory. And AI provides ways of implementing and testing +theories that were not previously available: Immanuel Kant was trying +to come up with theories about how minds work, but he didn't have any +kind of a mechanism that he could build to test his theory about the +nature of mathematical knowledge, for instance, or how concepts were +developed from babyhood onward. Whereas now, if we do develop a +theory, we have a criterion of adequacy, namely it should be precise +enough and rich enough and detailed enough to enable a model to be +built. And then we can see if it works.

+

[24:07] If it works, that doesn't mean we've proved that the theory is
correct; it just shows it's a candidate. And if it doesn't work, then
it's not a candidate as it stands; it would need to be modified in
some way.

+
+
+ +
+ +
+

5 Is abstract general intelligence feasible?

+
+ + + +
+ +
+

5.1 It's misleading to compare the brain and its neurons to a computer made of transistors

+
+ +

[24:27] I think there's a lot of optimism based on false clues.
For example, one of the false clues is to count the number of
neurons in the brain, then count the number of transistors
you can fit into a computer or something, and then compare them. It
might turn out that the comparison is misleading: the study of the way
synapses work leads some people to say that a typical synapse [] in the
human brain has computational power comparable to the Internet of a few
years ago, because of the number of different molecules that are doing
things, the variety of types of things that are being done in those
molecular interactions, and the speed at which they happen — if you
somehow count up the number of operations per second or something,
you get these comparable figures.
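The kind of raw numerical comparison being called a "false clue" here can be made explicit. The sketch below uses rough, order-of-magnitude figures chosen by the editor (commonly cited estimates, not numbers from the interview) to show how the comparison shifts once synapses, rather than neurons, are taken as the computational units:

```python
# Rough order-of-magnitude figures (editorial assumptions, not from the interview):
NEURONS_IN_BRAIN = 8.6e10      # ~86 billion neurons (a common estimate)
SYNAPSES_PER_NEURON = 1e4      # ~10,000 synapses per neuron (a common estimate)
TRANSISTORS_PER_CHIP = 5e10    # ~50 billion transistors on a large modern chip

synapses = NEURONS_IN_BRAIN * SYNAPSES_PER_NEURON

# The "false clue": comparing neurons to transistors looks like a close race...
chips_to_match_neurons = NEURONS_IN_BRAIN / TRANSISTORS_PER_CHIP

# ...but if each *synapse* does substantial computation at the molecular
# level, the unit of comparison shifts by roughly four orders of magnitude.
chips_to_match_synapses = synapses / TRANSISTORS_PER_CHIP
```

With these assumed figures, a couple of chips "match" the neuron count, but tens of thousands would be needed to match the synapse count — which is Sloman's point: the conclusion depends entirely on what you decide to count.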

+
+ +
+ +
+

5.2 For example, brains may rely heavily on chemical information processing

+
+ +

Now even if the details aren't right, there may just be a lot of
information processing going on in brains at the molecular
level, not the neural level. If that's the case, the processing
units will be orders of magnitude larger in number than the number of
neurons. And it's certainly the case that all the original biological
forms of information processing were chemical; there weren't brains
around, and still aren't in most microbes. And even when humans grow
their brains, the process of starting from a fertilized egg and
producing this rich and complex structure is, for much of the time,
under the control of chemical computations, chemical information
processing — combined, of course, with physical sorts of materials and
energy and so on as well.

+

[26:25] So it would seem very strange if all that capability were
something thrown away once you've got a brain and all the information
processing, the [challenges that were handled in making a brain]
… This is handwaving on my part; I'm just saying that we might
learn that what brains do is not what we think they do, and that the
problems of replicating them are not what we think they are, solely in
terms of numerical estimates of time scales, the number of components,
and so on.

+
+ +
+ +
+

5.3 Brain algorithms may simply be optimized for certain kinds of information processing other than bit manipulations

+
+ +

[26:56] But apart from that, the other basis of skepticism concerns
how well we understand what the problems are. I think there are many
people who try to formalize the problems of designing an intelligent
system in terms of streams of information thought of as bit streams or
collections of bit streams, and they think of the problems of
intelligence as being the construction or detection of patterns in
those — and perhaps not just detection of patterns, but detection of
patterns that are usable for sending out streams to control motors
and so on in order to []. And that way of conceptualizing the problem
may, on the one hand, lead to oversimplification, so that the things
that would be achieved, if those goals were achieved, may be much
simpler than, and in some ways inadequate for, the replication of human
intelligence, or the matching of human intelligence — or, for that
matter, squirrel intelligence. But on the other hand, it may also make
the problem harder: it may be that some of the kinds of things that
biological evolution has achieved can't be done that way. And one of
the ways that might turn out to be the case is not because it's
impossible in principle to do some of the information processing on
artificial computers based on transistors and other bit-manipulating
[] — it may just be that the computational complexity of solving
problems, or of finding solutions to complex problems, is
much greater, and therefore you might need a much larger universe than
we have available in order to do things.

+
+ +
+ +
+

5.4 Example: find the shortest path by dangling strings

+
+ +

[28:55] Then if the underlying mechanisms were different — the
information-processing mechanisms — they might be better tailored to
particular sorts of computation. There's a [] example, which is
finding the shortest route when you've got a collection of roads, and
they may be curved roads, with lots of tangled routes from A to B to C,
and so on. If you start at A and you want to get to Z — a place
somewhere on that map — the process of finding the shortest route
will involve searching through all these different possibilities,
rejecting some that are longer than others, and so on. But suppose you
make a model of that map out of string, where the strings are all laid
out on the map and so have the lengths of the routes. Then if you
hold the two knots in the string — it's a network of string —
which correspond to the start point and end point, and pull, the
bits of string that you're left with in a straight line will give you
the shortest route, and that process of pulling gets you the
solution very rapidly in a parallel computation, where all the other
routes just hang by the wayside, so to speak.
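For contrast, the serial search process Sloman describes — exploring routes and rejecting the longer ones one by one — is what a conventional computer does. A minimal sketch, using Dijkstra's algorithm on a small made-up road map (the road network and its lengths are editorial assumptions purely for illustration):

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: the serial, one-route-at-a-time search
    that the pulled-string model performs in a single parallel step."""
    # Priority queue of (distance so far, current node, path taken).
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, length in graph.get(node, {}).items():
            if neighbour not in seen:
                heapq.heappush(queue, (dist + length, neighbour, path + [neighbour]))
    return float("inf"), []

# A small hypothetical road map; edge weights are route lengths.
roads = {
    "A": {"B": 2, "C": 5},
    "B": {"C": 1, "Z": 7},
    "C": {"Z": 2},
}
dist, route = shortest_path(roads, "A", "Z")
# dist == 5, route == ["A", "B", "C", "Z"]
```

The algorithm's running time grows with the size of the network, whereas pulling the strings taut "computes" the answer in one physical operation — which is exactly the contrast drawn in the next paragraph.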

+
+ +
+ +
+

5.5 In sum, we know surprisingly little about the kinds of problems that evolution solved, and the manner in which they were solved.

+
+ +

[30:15] Now, I'm not saying brains can build networks of string and
pull them or anything like that; that's just an illustration of how, if
you have the right representation, correctly implemented — or suitably
implemented — for a problem, you can avoid very combinatorially
complex searches, which may grow exponentially with the number
of components in your map. With the string model, the time it takes
won't depend on how many strings you've [got on the map]; you just
pull, and it will depend only on the shortest route that exists in
there — even if that shortest route wasn't obvious on the original map.

+ +

[30:59] So that's a rather long-winded, roundabout way of supporting
the conjecture that there may be something about the way molecules
perform computations: they combine continuous change — as
things move through space and come together and move apart, and
whatever — with the ability to snap into states that then persist. So,
[as you learn from] quantum mechanics, you can have stable molecular
structures which are quite hard to separate, and then in catalytic
processes you can separate them — or with extreme temperatures, or strong
forces — but they may nevertheless be able to move very rapidly in some
conditions in order to perform computations.

+

[31:49] Now there may be things about that kind of structure that
enable searching for solutions to certain classes of problems to be
done much more efficiently (by brains) than anything we could do with
computers. It's just an open question.

+

+[32:04] So it might turn out that we need new kinds of technology +that aren't on the horizon in order to replicate the functions that +animal brains perform —or, it might not. I just don't know. I'm not +claiming that there's strong evidence for that; I'm just saying that +it might turn out that way, partly because I think we know less than +many people think we know about what biological evolution achieved. +

+

+[32:28] There are some other possibilities: we may just find out that +there are shortcuts no one ever thought of, and it will all happen +much more quickly—I have an open mind; I'd be surprised, but it +could turn up. There is something that worries me much more than the +singularity that most people talk about, which is machines achieving +human-level intelligence and perhaps taking over [the] planet or +something. There's what I call the singularity of cognitive catch-up … +

+
+
+ +
+ +
+

6 A singularity of cognitive catch-up

+
+ + + +
+ +
+

6.1 What if it will take a lifetime to learn enough to make something new?

+
+ +

… SCC, the singularity of cognitive catch-up, which I think we're close
to, or maybe have already reached — I'll explain what I mean by
that. One of the products of biological evolution — and this is one of
the answers to your earlier questions which I didn't get on to — is
that humans have not only the ability to make discoveries that none of
their ancestors have ever made, but also the ability to shorten the time
required for similar achievements to be reached by their offspring and
their descendants. So once we have, for instance, worked out ways of doing
complex computations, or ways of building houses, or ways of finding our
way around, our children don't need to work it out for
themselves by the same lengthy trial-and-error procedure; we can help
them get there much faster.

+

Okay, well, what I've been referring to as the singularity of
cognitive catch-up depends on the fact — fairly obvious, and it's
often been commented on — that in the case of humans, it's not necessary
for each generation to learn what previous generations learned in the
same way. We can speed up learning: once something has been
learned, it can be learned by new people much faster. And that has meant
that the social processes that support that kind of education of the
young can enormously accelerate things, so that what would have taken
perhaps thousands or millions of years for evolution to produce can
happen in a much shorter time.

+ +

+[34:54] But here's the catch: in order for a new advance to happen --- +so for something new to be discovered that wasn't there before, like +Newtonian mechanics, or the theory of relativity, or Beethoven's music +or [style] or whatever — the individuals have to have traversed a +significant amount of what their ancestors have learned, even if they +do it much faster than their ancestors, to get to the point where they +can see the gaps, the possibilities for going further than their +ancestors, or their parents or whatever, have done. +

+

[35:27] Now in the case of knowledge of science, mathematics,
philosophy, engineering and so on, there's been a lot of accumulated
knowledge. And humans are living a bit longer than they used to, but
they're still living for [whatever it is] — a hundred years, or for
most people, less than that. So you can imagine that there might come
a time when, within a normal human lifespan, it's not possible for anyone
to learn enough to understand the scope and limits of what's already
been achieved, in order to see the potential for going beyond it and to
build on what's already been done to take those future steps.

+

[36:10] So if we reach that stage, we will have reached the
singularity of cognitive catch-up, because the process of education
that enables individuals to learn faster than their ancestors did is
the catching-up process, and it may just be that we at some point
reach a point where catching up takes the whole lifetime of an
individual, and after that they're dead and they can't go
beyond. And I have some evidence that there's a lot of that around,
because I see a lot of people coming up with what they think of as
new ideas which they've struggled to come up with, but actually they
just haven't taken in some of what was done [] by
other people, in other places before them. And that's despite
the availability of search engines, which make it easier for people
to get the information. For instance, when I was a student, if I
wanted to find out what other people had done in the field, it was a
laborious process — going to the library, getting books —
whereas now I can often do things in seconds that would have taken
hours. So that means that if seconds [are needed] for that kind of
work, my lifespan has been extended by a factor of ten or
something. So maybe that delays the singularity, but it may not
delay it enough. But that's an open question; I don't know. And it may
just be that in some areas this is more of a problem than in others. For
instance, it may be that in some kinds of engineering, we're handing
over more and more of the work to machines anyway, and they can go on
doing it. For instance, most of the production of computers now is
done by computer-controlled machines — although some of the design
work is done by humans, a lot of the detail of the design is done by
computers — and they produce the next generation, which then produces
the next generation, and so on.

+

[37:57] I don't know whether humans can go on having major advances;
it'll be kind of sad if we can't.

+
+
+ +
+ +
+

7 Spatial reasoning: a difficult problem

+
+ + +

[38:15] Okay, well, there are different problems [ ] mathematics, and
they have to do with properties. For instance, a lot of mathematics
can be expressed in terms of logical structures or algebraic
structures, and those are pretty well suited for manipulation on
computers. If a problem can be specified using the
logical/algebraic notation, and the solution method requires creating
something in that sort of notation, then computers are pretty good,
and there are lots of mathematical tools around — there are theorem
provers and theorem checkers, and all kinds of things which couldn't
have existed fifty, sixty years ago — and they will continue getting
better.

+ +

But there was something that I was alluding to earlier when I gave the
example of how you can reason about what you will see by changing your
position in relation to a door. What you are doing is using your
grasp of spatial structures and of how, as one spatial relationship
changes — namely, you come closer to the door, or move sideways,
parallel to the wall, or whatever — other spatial relationships change
in parallel. So the lines from your eyes to parts of the room on the
other side of the doorway change: they spread out more as you go
towards the doorway, and as you move sideways they don't spread out
differently, but focus on different parts of the … of the room.
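The geometric relationship described here — sight lines through a doorway spreading out as you approach — can be made concrete with a little trigonometry. This is an editorial sketch under assumed dimensions (the one-metre doorway and the distances are illustrative, not from the interview):

```python
import math

def doorway_view_angle(door_width, distance):
    """Angular width, in degrees, of a doorway of the given width
    when viewed face-on from the given distance."""
    return math.degrees(2 * math.atan(door_width / (2 * distance)))

# For a hypothetical 1-metre-wide doorway, the visible angle through it
# widens as you walk towards it, exactly as described: at 4 m away the
# doorway subtends a narrow cone; at half a metre it fills a right angle.
for d in (4.0, 2.0, 1.0, 0.5):
    print(f"{d} m -> {doorway_view_angle(1.0, d):.1f} degrees")
```

Of course, Sloman's point is precisely that humans reason about these changing relationships *without* computing any such formula — the grasp of how the sight lines co-vary comes first, and the trigonometry is a later formalization.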

+

Now, those are examples of ways of thinking about relationships and
changing relationships which are not the same as thinking about what
happens if I replace this symbol with that symbol, or if I substitute
this expression into that expression in a logical formula. And at the
moment, I do not believe that there is anything in AI, amongst the
mathematical reasoning community, the theorem-proving community, that
can model the processes that go on when a young child starts learning
to do Euclidean geometry and is taught things about it — for instance, I
can give you a proof that the angles of any triangle add up to a
straight line, 180 degrees.

+ +
+ +
+

7.1 Example: Spatial proof that the angles of any triangle add up to a half-circle

+
+ +

There are standard proofs which involve starting with one triangle,
then adding a line parallel to the base. But one of my former students,
Mary Pardoe, came up with a proof which I will demonstrate with this <he holds
up a pen> — can you see it? If I have a triangle here that's got
three sides, and I put this thing on one side — let's say the
bottom — I can rotate it until it lies along the second … another
side, and then maybe move it up to the other end. Then I can rotate
it again until it lies on the third side, and move it back to the
other end. And then I'll rotate it again, and it'll eventually end up
on the original side, but it will have changed the direction it's
pointing in — and since it won't have crossed over itself, it will have
gone through a half-circle. And that says that the three angles of a
triangle add up to the rotation of half a circle, which is a
beautiful kind of proof, and almost anyone can understand it. Some
mathematicians don't like it, because they say it hides some of the
assumptions, but nevertheless, as far as I'm concerned, it's an
example of a human ability to do reasoning which, once you've
understood it, you can see will apply to any triangle — it's got to
be a planar triangle, not a triangle on a globe, because then the
angles can add up to more than … you can have three right angles
if you have a line on the equator, and a line going up to the north
pole of the earth, and then you have a right angle, and then another
line going down to the equator, and you have a right angle: right
angle, right angle, right angle, and they add up to more than a
straight line. But that's because the triangle isn't in the plane;
it's on a curved surface. In fact, that's one of the definitional
differences you can take between planar and curved surfaces: how much
the angles of a triangle add up to.
But our ability to visualize and notice the generality in that
process — to see that you're going to be able to do the same thing
using triangles that stretch in all sorts of ways, or one that's a
million times as large, or one that's drawn in different colors or
whatever; none of that's going to make any difference to the essence
of the process — it's that ability to see the commonality in a spatial
structure which enables you to draw some conclusions with complete
certainty. Subject, that is, to the possibility that sometimes you make
mistakes; but when you make mistakes, you can discover them, as has
happened in the history of geometrical theorem proving. Imre Lakatos
wrote a wonderful book called Proofs and Refutations — which I won't
try to summarize — but he has examples: mistakes were made because
people didn't always realize there were subtle subcases which had
slightly different properties, and they didn't take account of
that. But once they're noticed, you rectify that.
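Stated symbolically (an editorial sketch of the argument, not part of the interview): at each vertex the pen turns through that interior angle, and after the three turns it lies along the original side pointing the opposite way, having made a half-turn without ever crossing itself.

```latex
% Mary Pardoe's rotation argument: the pen turns through each
% interior angle in succession, and the net result is a half-turn:
\alpha + \beta + \gamma = \pi \quad (= 180^{\circ})

% On a sphere of radius R the same walk turns through more than a
% half-turn (Girard's theorem), which is why the globe counterexample
% fails the planar proof:
\alpha + \beta + \gamma = \pi + \frac{A}{R^{2}},
  \qquad A = \text{area of the spherical triangle}
```

The spherical formula also explains the equator-and-pole example: that triangle covers one eighth of the sphere, so $A = \pi R^{2}/2$, giving an angle sum of $\pi + \pi/2 = 3 \cdot \pi/2$, i.e. three right angles.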

+
+ +
+ +
+

7.2 Geometric results are fundamentally different from experimental results in chemistry or physics.

+
+ +

[43:28] But it's not the same as doing experiments in chemistry and
physics, where you can't be sure it'll be the same on [] or at a high
temperature, or in a very strong magnetic field. With geometric
reasoning, in some sense you've got the full information in front of
you, even if you don't always notice an important part of it. So that
kind of reasoning (as far as I know) is not implemented anywhere in a
computer. And most people who do research on trying to model
mathematical reasoning don't pay any attention to it, because
… they just don't think about it. They start from somewhere else,
maybe because of how they were educated. I was taught Euclidean
geometry at school. Were you?

+

(Adam Ford: Yeah)

+

Many people are not now. Instead they're taught set theory, and
logic, and arithmetic, and [algebra], and so on. And so they don't use
that bit of their brains — without which we wouldn't have built any of
the cathedrals, and all sorts of other things we now depend on.

+
+
+ +
+ +
+

8 Is near-term artificial general intelligence likely?

+
+ + + +
+ +
+

8.1 Two interpretations: a single mechanism for all problems, or many mechanisms unified in one program.

+
+ + +

[44:35] Well, this relates to what's meant by general. When I
first encountered the AGI community, I thought that what they all
meant by general intelligence was uniform intelligence —
intelligence based on some common, simple (maybe not so simple, but
single) powerful mechanism or principle of inference. And there are
some people in the community who are trying to produce things like
that, often in connection with algorithmic information theory and
computability of information, and so on. But there's another sense of
general, which means that a system of general intelligence can do
lots of different things: perceive things, understand language,
move around, make things, and so on — perhaps even enjoy a joke;
that's something that's not nearly on the horizon, as far as I
know. Enjoying a joke isn't the same as being able to make laughing
noises.

+

Given, then, that there are these two notions of general
intelligence — one that looks for a single uniform, possibly
simple, mechanism or collection of ideas and notations and algorithms
that will deal with any problem that's solvable, and the other
that's general in the sense that it can do lots of different things
combined into an integrated architecture (which raises lots
of questions about how you combine these things and make them work
together) — we humans, certainly, are of the second kind: we do all
sorts of different things. And other animals also seem to be of the
second kind, perhaps not as general as humans. Now, it may turn out
that in some near future time — who knows, decades, a few
decades — you'll be able to get machines that are capable of solving
any problem that is solvable, in a time that will depend on the
nature of the problem, but in some sort of tractable time. Of course,
there are some solvable problems that would require a larger universe
and a longer history than the history of the universe, but apart from
that constraint, these machines will be able to do anything []. But
being able to do some of the kinds of things that humans can do — like
the kinds of geometrical reasoning where you look at the shape, abstract
away from the precise angles and sizes and shapes and so on, and
realize there's something general here, as must have happened when our
ancestors first made the discoveries that were eventually put together
in Euclidean geometry — may be another matter.

+

It may be that that requires mechanisms of a kind that we don't know
anything about at the moment. Maybe brains are using molecules, and
rearranging molecules, in some way that supports that kind of
reasoning. I'm not saying they are — I don't know; I just don't see
any obvious way to map that kind of reasoning capability
onto what we currently do on computers. There is — and I just
mentioned this briefly beforehand — a kind of thing that's
sometimes thought of as a major step in that direction: namely, you can
build a machine (or a software system) that can represent some
geometrical structure, and then be told about some change that's going
to happen to it, and it can predict in great detail what'll
happen. This happens, for instance, in game engines, where you say
we have all these blocks on the table and I'll drop one other block,
and then [the thing] uses Newton's laws and properties of rigidity of
the parts and the elasticity, and also stuff about geometry and space
and so on, to give you a very accurate representation of what'll
happen when this brick lands on this pile of things — [it'll bounce and
go off, and so on]. And with more memory and more CPU power,
you can increase the accuracy. But that's totally different from
looking at one example and working out what will happen in a whole
range of cases at a higher level of abstraction; the game
engine does it in great detail for just this case, with just those
precise things, and it won't even know what the generalizations are
that it's using that would apply to others []. So, in that sense, [we]
may get AGI — artificial general intelligence — pretty soon, but
it'll be limited in what it can do.
And the other kind of general
intelligence — which combines all sorts of different things, including
human spatial geometrical reasoning, and maybe other things, like the
ability to find things funny and to appreciate artistic features —
may need forms of pattern-mechanism, and I have an open
mind about that.

+
+
+ +
+ +
+

9 Abstract General Intelligence impacts

+
+ + +

[49:53] Well, as far as the first type's concerned, it could be useful
for all kinds of applications — and there are people who worry that
where there's a system that has that type of intelligence, it might in
some sense take over control of the planet. Well, humans often do
stupid things, and they might do something stupid that would lead to
disaster, but I think it's more likely that other things []
lead to disaster — population problems, using up all the
resources, destroying ecosystems, and whatever. But certainly it would
go on being useful to have these calculating devices. Now, as for the
second kind, I don't know — if we succeeded at putting
together all the parts that we find in humans, we might just make an
artificial human, and then we might have some of them as your friends,
and some of them we might not like, and some of them might become
teachers or composers or whatever. But that raises a question: could
they, in some sense, be superior to us — in their learning
capabilities, their understanding of human nature, or maybe their
wickedness or whatever? These are all issues on which I expect the
best science fiction writers would give better answers than anything I
could do. But I did once fantasize, [back] in 1978, that perhaps
if we achieved that kind of thing, they would be wise, and gentle
and kind, and would realize that humans are an inferior species that, you
know, has some good features, so they'd keep us in some kind of
secluded, restrictive kind of environment, keep us away from
dangerous weapons, and so on, and find ways of cohabiting with
us. But that's just fantasy.

+

Adam Ford: Awesome. Yeah, there's an interesting story, With Folded
Hands, where [the computers] want to take care of us and want to
reduce suffering, and end up lobotomizing everybody [but] keeping them
alive so as to reduce the suffering.

+

+Aaron Sloman: Not all that different from Brave New World, where it +was done with drugs and so on, but different humans are given +different roles in that system, yeah. +

+

There's also The Time Machine, H.G. Wells, where in the
distant future humans have split in two: the Eloi, I think they were
called — they lived underground, they were the [] ones — and then, no:
the Morlocks lived underground; the Eloi lived on the planet. They were
pleasant and pretty but not very bright, and so on, and they were fed
on by …

+

+Adam Ford: [] in the future. +

+

+Aaron Sloman: As I was saying, if you ask science fiction writers, +you'll probably come up with a wide variety of interesting answers. +

+

+Adam Ford: I certainly have; I've spoken to [] of Birmingham, and +Sean Williams, … who else? +

+

Aaron Sloman: Did you ever read a story by E. M. Forster called The
Machine Stops? A very short story; it's on the Internet somewhere.
It's about a time when people … and this was written in
about [1914], so it's about … over a hundred years ago … people are
in their rooms, they sit in front of screens, and they type things,
and they communicate with one another that way, and they don't meet;
they have debates, and they give lectures to their audiences that way.
And then there's a woman whose son says “I'd like to see
you” and she says “What's the point? You've got me at
this point” — but he wants to come and talk to her. I won't
tell you how it ends, but.

+

+Adam Ford: Reminds me of the Internet. +

+

+Aaron Sloman: Well, yes; he invented … it was just extraordinary +that he was able to do that, before most of the components that we +need for it existed. +

+

+Adam Ford: [Another person who did that] was Vernor Vinge [] True Names. +

+

+Aaron Sloman: When was that written? +

+

+Adam Ford: The seventies. +

+

Aaron Sloman: Okay, well, a lot of the technology was already around
then. The original bits of the internet were working in about 1973 …
1974, I was sitting at Sussex University trying to
learn LOGO, the programming language, to decide whether it was
going to be useful for teaching AI. And I was sitting at [a] paper
teletype — there was paper coming out, transmitting ten characters a
second — from Sussex to the UCL computer lab by telegraph cable, from
there to somewhere in Norway via another cable, from there by satellite
to California, to the Xerox [] research center, where they had
implemented a computer with a LOGO system on it, with someone I had
met previously in Edinburgh, Danny Bobrow, and he allowed me to have
access to this system. So there I was typing. And furthermore, it was
duplex typing, so every character I typed didn't show up on my
terminal until it had gone all the way there and echoed back, so I
would type, and the characters would come back four seconds later.

+

+[55:26] But that was the Internet, and I think Vernor Vinge was +writing after that kind of thing had already started, but I don't +know. Anyway. +

+

[55:41] Another thing … I mentioned H.G. Wells and The Time Machine. I
recently discovered — because David Lodge had written a sort of
semi-novel about him — that Wells had invented Wikipedia in advance:
he had this notion of an encyclopedia that was free to everybody, and
everybody could contribute and [collaborate on it]. So, go to the
science fiction writers to find out the future — well, a range of
possible futures.

+

Adam Ford: Well, the thing is with science fiction writers, they have
to maintain some sort of interest for their readers; after all, the
science fiction which reaches us is the stuff that publishers want to
sell, and so there's a little bit of a bias towards making a
plot device there — the dramatic sort appeals to our
amygdala, our lizard brain — and we'll sort of stay there, obviously, to
some extent. But I think that they do come up with sort of amazing ideas;
I think it's worth trying to make these predictions; I think that we
should spend more time on strategic forecasting — I mean, take that
seriously.

+

Aaron Sloman: Well, I'm happy to leave that to others; I just want to
try to understand these problems that bother me about how things
work. And it may be that some would say that's irresponsible, if I
don't think about what the implications will be. Well, understanding
how humans work might enable us to make [] humans — I suspect it
won't happen in this century; I think it's going to be too difficult.

+
+
+ +
+

Date: 2013-10-04 18:49:53 UTC

+

Author: Dylan Holmes

+

Org version 7.7 with Emacs version 23

+ + diff -r 174455d6d0ba -r 414a10d51d9f org/sloman.org --- a/org/sloman.org Tue Jun 03 13:23:23 2014 -0400 +++ b/org/sloman.org Tue Jun 03 13:24:58 2014 -0400 @@ -4,22 +4,19 @@ #+STYLE: +#+begin_quote +*Update* (13 Oct): Aaron Sloman has produced an improved version of + this transcript, which includes follow-up thoughts and links to + related works. It is available on his website here: + [[http://www.cs.bham.ac.uk/research/projects/cogaff/movies/transcript-interview.html]]. + + +This draft will remain available here for historical purposes. +#+end_quote + #+BEGIN_QUOTE - - - - - - - - - - - - - *Editor's note:* This is a working draft transcript which I made of [[http://www.youtube.com/watch?feature=player_detailpage&v=iuH8dC7Snno][this nice interview]] of Aaron Sloman. Having just finished one iteration of transcription, I still need to go in and clean up the @@ -30,6 +27,11 @@ copies of this transcript for your own purposes. Also, feel free to e-mail me with comments or corrections. +(Addendum: This transcription is licensed by Aaron Sloman and Dylan Holmes, as + indicated here: + http://www.cs.bham.ac.uk/research/projects/cogaff/movies/transcript-interview.html#license) + + You can send mail to =transcript@aurellem.org=. Cheers, @@ -39,6 +41,7 @@ + * Introduction ** Aaron Sloman evolves into a philosopher of AI