diff categorical/imperative.org @ 2:b4de894a1e2e

initial import
author Robert McIntyre <rlm@mit.edu>
date Fri, 28 Oct 2011 00:03:05 -0700
     1.4 +#+TITLE: LAMBDA: The Ultimate Imperative
     1.5 +#+AUTHOR: Guy Lewis Steele Jr. and Gerald Jay Sussman
     1.6 +
     1.7 +* Abstract 
     1.8 +
     1.9 +We demonstrate how to model the following common programming constructs in
    1.10 +terms of an applicative order language similar to LISP:
    1.11 +- Simple Recursion
    1.12 +- Iteration 
    1.13 +- Compound Statements and Expressions
    1.14 +- GO TO and Assignment
- Continuation-Passing
    1.16 +- Escape Expressions
    1.17 +- Fluid Variables
    1.18 +- Call by Name, Call by Need, and Call by Reference
    1.19 +The models require only (possibly self-referent) lambda application,
    1.20 +conditionals, and (rarely) assignment. No complex data structures such as
    1.21 +stacks are used. The models are transparent, involving only local syntactic
    1.22 +transformations.
    1.23 +Some of these models, such as those for GO TO and assignment, are already well
    1.24 +known, and appear in the work of Landin, Reynolds, and others. The models for
    1.25 +escape expressions, fluid variables, and call by need with side effects are
    1.26 +new. This paper is partly tutorial in intent, gathering all the models
    1.27 +together for purposes of context.
This report describes research done at the Artificial Intelligence Laboratory
of the Massachusetts Institute of Technology. Support for the laboratory's
artificial intelligence research is provided in part by the Advanced Research
Projects Agency of the Department of Defense under Office of Naval Research
contract N00014-75-C-0643.
    1.33 +
    1.34 +* Introduction
    1.35 +
    1.36 + We catalogue a number of common programming constructs. For each
    1.37 +construct we examine "typical" usage in well-known programming languages, and
    1.38 +then capture the essence of the semantics of the construct in terms of a
    1.39 +common meta-language.
The lambda calculus {Note Alonzowins} is often used as such a
meta-language. Lambda calculus offers clean semantics, but it is clumsy because it
was designed to be a minimal language rather than a convenient one. All
lambda calculus "functions" must take exactly one "argument"; the only "data
type" is lambda expressions; and the only "primitive operation" is variable
substitution. While its utter simplicity makes lambda calculus ideal for
logicians, it is too primitive for use by programmers. The meta-language we
use is a programming language called SCHEME {Note Schemepaper} which is based
on lambda calculus.
SCHEME is a dialect of LISP. [McCarthy 62] It is an
expression-oriented, applicative order, interpreter-based language which allows one to
manipulate programs as data. It differs from most current dialects of LISP in
that it closes all lambda expressions in the environment of their definition
or declaration, rather than in the execution environment. {Note Closures}
This preserves the substitution semantics of lambda calculus, and has the
consequence that all variables are lexically scoped, as in ALGOL. [Naur 63]
Another difference is that SCHEME is implemented in such a way that
tail-recursions execute without net growth of the interpreter stack. {Note
Schemenote} We have chosen to use LISP syntax rather than, say, ALGOL syntax
because we want to treat programs as data for the purpose of describing
transformations on the code. LISP supplies names for the parts of an
executable expression and standard operators for constructing expressions and
extracting their components. The use of LISP syntax makes the structure of
such expressions manifest. We use ALGOL as an expository language, because it
is familiar to many people, but ALGOL is not sufficiently powerful to express
the necessary concepts; in particular, it does not allow functions to return
functions as values. We are thus forced to use a dialect of LISP in many
cases.
We will consider various complex programming language constructs and
show how to model them in terms of only a few simple ones. As far as possible
we will use only three control constructs from SCHEME: LAMBDA expressions, as
in LISP, which are just functions with lexically scoped free variables;
LABELS, which allows declaration of mutually recursive procedures {Note
Labelsdef}; and IF, a primitive conditional expression. For more complex
modelling we will introduce an assignment primitive (ASET). We will freely
assume the existence of other common primitives, such as arithmetic functions.
The constructs we will examine are divided into four broad classes.
The first is /Simple Loops/; this contains simple recursions and iterations, and
an introduction to the notion of continuations. The second is /Imperative
Constructs/; this includes compound statements, GO TO, and simple variable
assignments. The third is /Continuations/, which encompasses the distinction
between statements and expressions, escape operators (such as Landin's
J-operator [Landin 65] and Reynolds's escape expression [Reynolds 72]), and fluid
(dynamically bound) variables. The fourth is /Parameter Passing Mechanisms/, such
as ALGOL call-by-name and FORTRAN call-by-location.
Some of the models presented here are already well-known, particularly
those for GO TO and assignment. [McCarthy 60] [Landin 65] [Reynolds 72]
Those for escape operators, fluid variables, and call-by-need with side
effects are new.
    1.88 +** Simple Loops
    1.89 +By /simple loops/ we mean constructs which enable programs to execute the
    1.90 +same piece of code repeatedly in a controlled manner. Variables may be made
    1.91 +to take on different values during each repetition, and the number of
    1.92 +repetitions may depend on data given to the program.
    1.93 +*** Simple Recursion
    1.94 +One of the easiest ways to produce a looping control structure is to
    1.95 +use a recursive function, one which calls itself to perform a subcomputation.
    1.96 +For example, the familiar factorial function may be written recursively in
ALGOL:
#+begin_src algol
integer procedure fact(n); value n; integer n;
fact := if n=0 then 1 else n*fact(n-1);
#+end_src
   1.102 +
The invocation =fact(n)= computes the product of the integers from 1 to n using
the identity $n! = n(n-1)!$ ($n>0$). If $n$ is zero, 1 is returned; otherwise =fact=
calls itself recursively to compute $(n-1)!$, then multiplies the result by $n$
and returns it.
   1.107 +
   1.108 +This same function may be written in Clojure as follows:
   1.109 +#+begin_src clojure
   1.110 +(ns categorical.imperative)
   1.111 +(defn fact[n]
   1.112 +  (if (= n 0) 1 (* n (fact (dec n))))
   1.113 +)
   1.114 +#+end_src
   1.115 +
   1.116 +#+results:
   1.117 +: #'categorical.imperative/fact
   1.118 +
Clojure does not require an assignment to the ``variable'' =fact= to return a value
as ALGOL does. The IF primitive is the ALGOL if-then-else rendered in LISP
syntax. Note that the arithmetic primitives are prefix operators in SCHEME and Clojure.
   1.122 +
   1.123 +*** Iteration
   1.124 +There are many other ways to compute factorial. One important way is
   1.125 +through the use of /iteration/.
A common iterative construct is the DO loop. The most general form we
   1.127 +have seen in any programming language is the MacLISP DO [Moon 74]. It permits
   1.128 +the simultaneous initialization of any number of control variables and the
   1.129 +simultaneous stepping of these variables by arbitrary functions at each
   1.130 +iteration step. The loop is terminated by an arbitrary predicate, and an
   1.131 +arbitrary value may be returned. The DO loop may have a body, a series of
   1.132 +expressions executed for effect on each iteration. A version of the MacLISP
   1.133 +DO construct has been adopted in SCHEME.
   1.134 +
The general form of a SCHEME DO is:
#+begin_src nothing
(DO ((<var1> <init1> <step1>)
     (<var2> <init2> <step2>)
     ...
     (<varn> <initn> <stepn>))
    (<pred> <value>)
    <optional body>)
#+end_src
The semantics of this are that the variables are bound and initialized to the
values of the <initi> expressions, which must all be evaluated in the
environment outside the DO; then the predicate <pred> is evaluated in the new
environment, and if TRUE, the <value> is evaluated and returned. Otherwise
the <optional body> is evaluated, then each of the steppers <stepi> is
evaluated in the current environment, all the variables made to have the
results as their values, the predicate evaluated again, and so on.
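These semantics can be captured directly in Clojure by a small macro; the following is a sketch (the name =do-loop= is ours, not SCHEME's or MacLISP's), using =loop=/=recur= so that all steppers are evaluated before any variable is rebound, as the description above requires:

#+begin_src clojure
;; do-loop: a sketch of the MacLISP/SCHEME DO semantics in Clojure.
;; bindings is a vector of [var init step] triples; recur rebinds all
;; variables simultaneously, after every stepper has been evaluated.
(defmacro do-loop [bindings [pred value] & body]
  (let [vars  (map first bindings)
        inits (map second bindings)
        steps (map last bindings)]
    `(loop [~@(interleave vars inits)]
       (if ~pred
         ~value
         (do ~@body (recur ~@steps))))))

;; factorial of 5, with no loop body: the steppers do all the work
(do-loop [[m 5 (dec m)] [ans 1 (* m ans)]]
         [(zero? m) ans])
;; => 120
#+end_src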
Using DO loops in both ALGOL and SCHEME, we may express FACT by means
of iteration.
#+begin_src algol
integer procedure fact(n); value n; integer n;
begin
integer m, ans;
ans := 1;
for m := n step -1 until 1 do ans := m*ans;
fact := ans;
end;
#+end_src
   1.161 +
#+begin_src clojure
(in-ns 'categorical.imperative)
(defn fact-do [n]
  ;; Clojure's loop/recur plays the role of the SCHEME DO
  (loop [m n, ans 1]
    (if (zero? m) ans (recur (dec m) (* m ans)))))
#+end_src
   1.167 +
Note that the SCHEME DO loop in FACT has no body -- the stepping functions do
all the work. The ALGOL DO loop has an assignment in its body; because an
ALGOL DO loop can step only one variable, we need the assignment to step
the variable ``manually''.
In reality the SCHEME DO construct is not a primitive; it is a macro
which expands into a function which performs the iteration by tail-recursion.
Consider the following definition of FACT in SCHEME. Although it appears to
be recursive, since it ``calls itself'', it is entirely equivalent to the DO
loop given above, for it is the code that the DO macro expands into! It
captures the essence of our intuitive notion of iteration, because execution
of this program will not produce internal structures (e.g. stacks or variable
bindings) which increase in size with the number of iteration steps.
   1.180 +
   1.181 +#+begin_src clojure
   1.182 +(in-ns 'categorical.imperative)
   1.183 +(defn fact-do-expand[n]
   1.184 +  (let [fact1
   1.185 +	(fn[m ans]
   1.186 +	  (if (zero? m) ans
   1.187 +	      (recur (dec m) (* m ans))))]
   1.188 +    (fact1 n 1)))
   1.189 +#+end_src
   1.190 +
From this we can infer a general way to express iterations in SCHEME in
a manner isomorphic to the MacLISP DO. The expansion of the general DO loop
#+begin_src nothing
(DO ((<var1> <init1> <step1>)
     (<var2> <init2> <step2>)
     ...
     (<varn> <initn> <stepn>))
    (<pred> <value>)
    <body>)
#+end_src
   1.201 +is this:
#+begin_src nothing
(let [doloop
      (fn [dummy <var1> <var2> ... <varn>]
        (if <pred> <value>
            (recur <body> <step1> <step2> ... <stepn>)))]
  (doloop nil <init1> <init2> ... <initn>))
#+end_src
The identifiers =doloop= and =dummy= are chosen so as not to conflict with any
other identifiers in the program.
Note that, unlike most implementations of DO, there are no side effects
in the steppings of the iteration variables. DO loops are usually modelled
using assignment statements. For example:
#+begin_src nothing
for x := a step b until c do <statement>;
#+end_src
   1.218 +
   1.219 +can be modelled as follows: [Naur 63]
#+begin_src nothing
begin
x := a;
L: if (x-c)*sign(b) > 0 then go to Endloop;
<statement>;
x := x+b;
go to L;
Endloop:
end;
#+end_src
   1.230 +Later we will see how such assignment statements can in general be
   1.231 +modelled without using side effects.
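As a preview, the same for-loop model can be rendered in Clojure without any assignment, with the label L becoming a tail call; this is a sketch (the names =for-step= and =f= are ours), where the <statement> is passed in as a function of the loop variable:

#+begin_src clojure
;; for x := a step b until c do (f x);  -- modelled as tail recursion
(defn for-step [a b c f]
  (loop [x a]
    (when-not (pos? (* (- x c) (Math/signum (double b))))
      (f x)                 ; the <statement>, executed for effect
      (recur (+ x b)))))    ; "go to L" with the stepped variable

(for-step 0 2 6 println)    ; prints 0, 2, 4, 6
#+end_src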
   1.232 +
   1.233 +* Imperative Programming
Lambda calculus (and related languages, such as ``pure LISP'') is often
used for modelling the applicative constructs of programming languages.
However, it is generally thought of as inappropriate for modelling
imperative constructs. In this section we show how imperative constructs may
be modelled by applicative SCHEME constructs.
   1.239 +** Compound Statements
   1.240 +The simplest kind of imperative construct is the statement sequencer,
   1.241 +for example the compound statement in ALGOL:
   1.242 +#+begin_src algol
   1.243 +begin
   1.244 +S1;
   1.245 +S2;
   1.246 +end
   1.247 +#+end_src
   1.248 +
   1.249 +This construct has two interesting properties:
   1.250 +- (1) It performs statement S1 before S2, and so may be used for
   1.251 +  sequencing.
   1.252 +- (2) S1 is useful only for its side effects. (In ALGOL, S2 must also
   1.253 +  be a statement, and so is also useful only for side effects, but
   1.254 +  other languages have compound expressions containing a statement
   1.255 +  followed by an expression.)
   1.256 +
   1.257 +The ALGOL compound statement may actually contain any number of statements,
   1.258 +but such statements can be expressed as a series of nested two-statement
   1.259 +compounds. That is:
   1.260 +#+begin_src algol
   1.261 +begin
   1.262 +S1;
   1.263 +S2;
   1.264 +...
   1.265 +Sn-1;
   1.266 +Sn;
   1.267 +end
   1.268 +#+end_src
   1.269 +is equivalent to:
   1.270 +
   1.271 +#+begin_src algol
   1.272 +begin
   1.273 +S1;
   1.274 +begin
   1.275 +S2;
   1.276 +begin
   1.277 +...
   1.278 +
   1.279 +begin
   1.280 +Sn-1;
   1.281 +Sn;
   1.282 +end;
   1.283 +end;
end;
   1.285 +end
   1.286 +#+end_src
   1.287 +
   1.288 +It is not immediately apparent that this sequencing can be expressed in a
   1.289 +purely applicative language. We can, however, take advantage of the implicit
   1.290 +sequencing of applicative order evaluation. Thus, for example, we may write a
   1.291 +two-statement sequence as follows:
   1.292 +
   1.293 +#+begin_src clojure
   1.294 +((fn[dummy] S2) S1)
   1.295 +#+end_src
   1.296 +
   1.297 +where =dummy= is an identifier not used in S2. From this it is
   1.298 +manifest that the value of S1 is ignored, and so is useful only for
   1.299 +side effects. (Note that we did not claim that S1 is expressed in a
   1.300 +purely applicative language, but only that the sequencing can be so
   1.301 +expressed.) From now on we will use the form =(BLOCK S1 S2)= as an
   1.302 +abbreviation for this expression, and =(BLOCK S1 S2...Sn-1 Sn)= as an
   1.303 +abbreviation for 
   1.304 +
#+begin_src nothing
(BLOCK S1 (BLOCK S2 (BLOCK ... (BLOCK Sn-1 Sn)...)))
#+end_src
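The BLOCK abbreviation can be realized as a macro; here is a minimal Clojure sketch (the name =block= is ours, and =dummy#= is an auto-gensym so it cannot conflict with identifiers in the statements), expanding exactly into the nested single-argument applications described above:

#+begin_src clojure
;; (block S1 S2 ... Sn) => ((fn [dummy] (block S2 ... Sn)) S1)
(defmacro block
  ([s] s)
  ([s & more] `((fn [dummy#] (block ~@more)) ~s)))

(block (println "first, for effect")
       (+ 1 2))   ; evaluates to 3, after printing
#+end_src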
   1.308 +
** 2.2. The GO TO Statement
   1.310 +
   1.311 +A more general imperative structure is the compound statement with
   1.312 +labels and G0 T0s. Consider the following code fragment due to
   1.313 +Jacopini, taken from Knuth: [Knuth 74] 
   1.314 +#+begin_src algol
   1.315 +begin 
   1.316 +L1: if B1 then go to L2; S1;
   1.317 +    if B2 then go to L2; S2;
   1.318 +    go to L1;
   1.319 +L2: S3;
   1.320 +#+end_src
   1.321 +
   1.322 +It is perhaps surprising that this piece of code can be /syntactically/
   1.323 +transformed into a purely applicative style. For example, in SCHEME we
   1.324 +could write:
   1.325 +
#+begin_src clojure
(in-ns 'categorical.imperative)
(letfn [(L1 []
          (if B1 (L2)
              (do S1
                  (if B2 (L2) (do S2 (L1))))))
        (L2 [] S3)]
  (L1))
#+end_src
   1.335 +
As with the DO loop, this transformation depends critically on SCHEME's
treatment of tail-recursion and on lexical scoping of variables. The labels
are names of functions of no arguments. In order to ``go to'' the labeled code,
we merely call the function named by that label.
   1.340 +
   1.341 +** 2.3. Simple Assignment
   1.342 +Of course, all this sequencing of statements is useless unless the
   1.343 +statements have side effects. An important side effect is assignment. For
   1.344 +example, one often uses assignment to place intermediate results in a named
   1.345 +location (i.e. a variable) so that they may be used more than once later
   1.346 +without recomputing them:
   1.347 +
   1.348 +#+begin_src algol
   1.349 +begin
   1.350 +real a2, sqrtdisc;
   1.351 +a2 := 2*a;
   1.352 +sqrtdisc := sqrt(b^2 - 4*a*c);
   1.353 +root1 := (- b + sqrtdisc) / a2;
   1.354 +root2 := (- b - sqrtdisc) / a2;
   1.355 +print(root1);
   1.356 +print(root2);
   1.357 +print(root1 + root2);
   1.358 +end
   1.359 +#+end_src
   1.360 +
   1.361 +It is well known that such naming of intermediate results may be accomplished
   1.362 +by calling a function and binding the formal parameter variables to the
   1.363 +results: 
   1.364 +
#+begin_src clojure
((fn [a2 sqrtdisc]
   ((fn [root1 root2]
      (do (println root1)
          (println root2)
          (println (+ root1 root2))))
    (/ (+ (- b) sqrtdisc) a2)
    (/ (- (- b) sqrtdisc) a2)))
 (* 2 a)
 (sqrt (- (* b b) (* 4 a c))))
#+end_src
This technique can be extended to handle all simple variable assignments which
appear as statements in blocks, even if arbitrary GO TO statements also appear
in such blocks. {Note McCarthywins}
   1.380 +
For example, here is a program which uses GO TO statements in the form
presented before; it determines the parity of a non-negative integer by
counting it down until it reaches zero.
   1.384 +
   1.385 +#+begin_src algol
   1.386 +begin
   1.387 +L1: if a = 0 then begin parity := 0; go to L2; end;
   1.388 +    a := a - 1;
   1.389 +    if a = 0 then begin parity := 1; go to L2; end;
   1.390 +    a := a - 1;
   1.391 +    go to L1;
   1.392 +L2: print(parity);
   1.393 +#+end_src
   1.394 +
   1.395 +This can be expressed in SCHEME:
   1.396 +
#+begin_src clojure
(letfn [(L1 [a parity]
          (if (zero? a) (L2 a 0)
              (L3 (dec a) parity)))
        (L3 [a parity]
          (if (zero? a) (L2 a 1)
              (L1 (dec a) parity)))
        (L2 [a parity]
          (println parity))]
  (L1 a parity))
#+end_src
   1.409 +
   1.410 +The trick is to pass the set of variables which may be altered as arguments to
   1.411 +the label functions. {Note Flowgraph} It may be necessary to introduce new
   1.412 +labels (such as L3) so that an assignment may be transformed into the binding
   1.413 +for a function call. At worst, one may need as many labels as there are
   1.414 +statements (not counting the eliminated assignment and GO TO statements).
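Wrapped as a function, the transformed program runs directly; this is a sketch (the name =parity-of= is ours, and the =parity= argument drops out, since each label that needs it binds it afresh):

#+begin_src clojure
(defn parity-of [n]
  (letfn [(L1 [a] (if (zero? a) 0 (L3 (dec a))))   ; a even here
          (L3 [a] (if (zero? a) 1 (L1 (dec a))))]  ; a odd here
    (L1 n)))

(parity-of 6)   ; => 0
(parity-of 7)   ; => 1
#+end_src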
   1.415 +
** Compound Expressions
At this point we are almost in a position to model the most general
form of compound statement. In LISP, this is called the ``PROG feature''. In
addition to statement sequencing and GO TO statements, one can return a /value/
from a PROG by using the RETURN statement.
   1.421 +
Let us first consider the simplest compound statement, which in SCHEME
we call BLOCK. Recall that
=(BLOCK S1 S2)= is defined to be =((LAMBDA (DUMMY) S2) S1)=

Notice that this not only performs S1 before S2, but also returns the value of
S2. Furthermore, we defined =(BLOCK S1 S2 ... Sn)= so that it returns the value
of =Sn=. Thus BLOCK may be used as a compound expression, and models a LISP
PROGN, which is a PROG with no GO TO statements and an implicit RETURN of the
last ``statement'' (really an expression).
   1.431 +
Most LISP compilers compile DO expressions by macro-expansion. We have
already seen one way to do this in SCHEME using only variable binding. A more
common technique is to expand the DO into a PROG, using variable assignments
instead of bindings. Thus the iterative factorial program:
   1.436 +
#+begin_src nothing
(DEFINE FACT
  (LAMBDA (N)
    (DO ((M N (- M 1))
         (ANS 1 (* M ANS)))
        ((= M 0) ANS))))
#+end_src
   1.444 +
   1.445 +would expand into:
   1.446 +
#+begin_src nothing
(DEFINE FACT
  (LAMBDA (N)
    (PROG (M ANS)
          (SSETQ M N
                 ANS 1)
       LP (IF (= M 0) (RETURN ANS))
          (SSETQ M (- M 1)
                 ANS (* M ANS))
          (GO LP))))
#+end_src
   1.458 +
where SSETQ is a simultaneous multiple assignment operator. (SSETQ is not a
SCHEME (or LISP) primitive. It can be defined in terms of a single assignment
operator, but we are more interested here in RETURN than in simultaneous
assignment. The SSETQ's will all be removed anyway and modelled by lambda
binding.) We can apply the same technique we used before to eliminate GO TO
statements and assignments from compound statements:
   1.465 +
#+begin_src nothing
(DEFINE FACT
  (LAMBDA (N)
    (LABELS ((L1 (LAMBDA (M ANS)
                   (LP N 1)))
             (LP (LAMBDA (M ANS)
                   (IF (= M 0) (RETURN ANS)
                       (L2 M ANS))))
             (L2 (LAMBDA (M ANS)
                   (LP (- M 1) (* M ANS)))))
      (L1 NIL NIL))))
#+end_src
   1.478 +
We still haven't done anything about RETURN. Let's see...
- ==> the value of (FACT 0) is the value of (L1 NIL NIL)
- ==> which is the value of (LP 0 1)
- ==> which is the value of (IF (= 0 0) (RETURN 1) (L2 0 1))
- ==> which is the value of (RETURN 1)

Notice that if RETURN were the /identity function/ =(LAMBDA (X) X)=, we would
get the right answer. This is in fact a
general truth: if we just replace a call to RETURN with its argument, then
our old transformation on compound statements extends to general compound
expressions, i.e. PROG.
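That observation can be checked directly in Clojure; this sketch (the name =fact-labels= is ours) transliterates the LABELS program with =letfn= and makes RETURN the identity function:

#+begin_src clojure
(defn fact-labels [n]
  (letfn [(RETURN [x] x)                  ; RETURN as the identity function
          (L1 [m ans] (LP n 1))
          (LP [m ans] (if (= m 0) (RETURN ans) (L2 m ans)))
          (L2 [m ans] (LP (- m 1) (* m ans)))]
    (L1 nil nil)))

(fact-labels 0)   ; => 1
(fact-labels 5)   ; => 120
#+end_src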
   1.490 +
   1.491 +* Continuations
   1.492 +Up to now we have thought of SCHEME's LAMBDA expressions as functions,
   1.493 +and of a combination such as =(G (F X Y))= as meaning ``apply the function F to
   1.494 +the values of X and Y, and return a value so that the function G can be
   1.495 +applied and return a value ...'' But notice that we have seldom used LAMBDA
   1.496 +expressions as functions. Rather, we have used them as control structures and
   1.497 +environment modifiers. For example, consider the expression:
   1.498 +=(BLOCK (PRINT 3) (PRINT 4))=
   1.499 +
   1.500 +This is defined to be an abbreviation for:
   1.501 +=((LAMBDA  (DUMMY) (PRINT 4)) (PRINT 3))=
   1.502 +
   1.503 +We do not care about the value of this BLOCK expression; it follows that we
   1.504 +do not care about the value of the =(LAMBDA (DUMMY) ...)=. We are not using
   1.505 +LAMBDA as a function at all.
   1.506 +
It is possible to write useful programs in terms of LAMBDA expressions
in which we never care about the value of /any/ lambda expression. We have
already demonstrated how one could represent any ``FORTRAN'' program in these
terms: all one needs is PROG (with GO and SETQ), and PRINT to get the answers
out. The ultimate generalization of this imperative programming style is
/continuation-passing/. {Note Churchwins}
   1.513 +
   1.514 +** Continuation-Passing Recursion
Consider the following alternative definition of FACT. It has an extra
argument, the continuation, which is a function to call with the answer, when
we have it, rather than return a value.
   1.518 +
#+begin_src algol
procedure fact(n, c); value n, c;
integer n; procedure c(integer value);
if n=0 then c(1) else
begin
procedure temp(a); value a; integer a;
c(n*a);
fact(n-1, temp);
end;
#+end_src
   1.529 +
   1.530 +#+begin_src clojure
   1.531 +(in-ns 'categorical.imperative)
   1.532 +(defn fact-cps[n k]
   1.533 +  (if (zero? n) (k 1)
   1.534 +      (recur (dec n) (fn[a] (k (* n a))))))
#+end_src
It is fairly clumsy to use this version of =FACT= in ALGOL; it is necessary to
do something like this:
   1.538 +
#+begin_src algol
begin
integer ans;
procedure temp(x); value x; integer x;
ans := x;
fact(3, temp);
comment Now the variable "ans" has 6;
end;
#+end_src
   1.548 +
   1.549 +Procedure =fact= does not return a value, nor does =temp=; we must use a side
   1.550 +effect to get the answer out.
   1.551 +
=FACT= is somewhat easier to use in SCHEME. We can call it like an
ordinary function in SCHEME if we supply an identity function as the second
argument. For example, =(FACT 3 (LAMBDA (X) X))= returns 6. Alternatively, we
could write =(FACT 3 (LAMBDA (X) (PRINT X)))=; we do not care about the value
of this, but about what gets printed. A third way to use the value is to
write
=(FACT 3 (LAMBDA (X) (SQRT X)))=
instead of
=(SQRT (FACT 3 (LAMBDA (X) X)))=
   1.561 +
   1.562 +In either of these cases we care about the value of the continuation given to
   1.563 +FACT. Thus we care about the value of FACT if and only if we care about the
   1.564 +value of its continuation!
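The same three usage styles work with the Clojure =fact-cps= above (repeated here so the sketch is self-contained):

#+begin_src clojure
(defn fact-cps [n k]
  (if (zero? n) (k 1)
      (recur (dec n) (fn [a] (k (* n a))))))

(fact-cps 3 identity)          ; => 6
(fact-cps 3 println)           ; prints 6; the value is not the point
(fact-cps 3 #(Math/sqrt %))    ; the square root of 3!
#+end_src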
   1.565 +
We can redefine other functions to take continuations in the same way.
For example, suppose we had arithmetic primitives which took continuations; to
prevent confusion, call the version of =+= which takes a continuation =+&=,
etc. Instead of writing
=(- (* B B) (* 4 A C))=
we can write
   1.572 +#+begin_src clojure
   1.573 +(in-ns 'categorical.imperative)
(defn enable-continuation
  "Makes f take a continuation as an additional (final) argument." [f]
  (fn [& args] ((last args) (apply f (butlast args)))))
   1.576 +(def +& (enable-continuation +))
   1.577 +(def *& (enable-continuation *))
   1.578 +(def -& (enable-continuation -))
   1.579 +
   1.580 +
   1.581 +(defn quadratic[a b c k]
   1.582 +(*& b b
   1.583 +    (fn[x] (*& 4 a c
   1.584 +	       (fn[y]
   1.585 +		 (-& x y k))))))
   1.586 +#+end_src
   1.587 +
   1.588 +where =k= is the continuation for the entire expression.
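As a quick check of the definitions above (repeated here in variadic form, mirroring =enable-continuation=, so the sketch stands alone):

#+begin_src clojure
;; CPS versions of * and -: the continuation is the last argument
(defn *& [& args] ((last args) (apply * (butlast args))))
(defn -& [& args] ((last args) (apply - (butlast args))))

(defn quadratic [a b c k]
  (*& b b (fn [x] (*& 4 a c (fn [y] (-& x y k))))))

(quadratic 1 2 3 identity)   ; (- (* 2 2) (* 4 1 3)) => -8
#+end_src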
   1.589 +
This is an obscure way to write an algebraic expression, and we would
not advise writing code this way in general, but continuation-passing brings
out certain important features of the computation:
- [1] The operations to be performed appear in the order in which they
  are performed. In fact, they /must/ be performed in this order.
  Continuation-passing removes the need for the rule about
  left-to-right argument evaluation. {Note Evalorder}
- [2] In the usual applicative expression there are two implicit
  temporary values: those of =(* B B)= and =(* 4 A C)=. The first of
  these values must be preserved over the computation of the second,
  whereas the second is used as soon as it is produced. These facts
  are /manifest/ in the appearance and use of the variables X and Y in
  the continuation-passing version.
   1.603 +
   1.604 +In short, the continuation-passing version specifies /exactly/ and
   1.605 +  explicitly what steps are necessary to compute the value of the
   1.606 +  expression.  One can think of conventional functional application
   1.607 +  for value as being an abbreviation for the more explicit
   1.608 +  continuation-passing style. Alternatively, one can think of the
   1.609 +  interpreter as supplying to each function an implicit default
   1.610 +  continuation of one argument. This continuation will receive the
   1.611 +  value of the function as its argument, and then carry on the
   1.612 +  computation. In an interpreter this implicit continuation is
   1.613 +  represented by the control stack mechanism for function returns.
   1.614 +  Now consider what computational steps are implied by: 
   1.615 +
=(LAMBDA (A B C ...) (F X Y Z ...))= When we ``apply'' the LAMBDA
  expression we have some values to apply it to; we let the names A,
  B, C ... refer to these values. We then determine the values of X,
  Y, Z ... and pass these values (along with ``the buck'',
  i.e. control!) to the lambda expression F (F is either a lambda
  expression or a name for one). Passing control to F is an
  unconditional transfer. {Note Jrsthack} {Note Hewitthack} Note that
  we want values from X, Y, Z, ... If these are simple expressions,
  such as variables, constants, or LAMBDA expressions, the evaluation
  process is trivial, in that no temporary storage is required. In
  pure continuation-passing style, all evaluations are trivial: no
  combination is nested within another, and therefore no ``hidden
  temporaries'' are required. But if X is a combination, say (G P Q),
  then we want to think of G as a function, because we want a value
  from it, and we will need an implicit temporary place to keep the
  result while evaluating Y and Z. (An interpreter usually keeps these
  temporary places in the control stack!) On the other hand, we do not
  necessarily need a value from F. This is what we mean by
  tail-recursion: F is called tail-recursively, whereas G is not. A better
  name for tail-recursion would be ``tail-transfer'', since no real
  recursion is implied. This is why we have made such a fuss about
  tail-recursion: it can be used for transfer of control without
  making any commitment about whether the expression is expected to
  return a value. Thus it can be used to model statement-like control
  structures. Put another way, tail-recursion does not require a
  control stack as nested recursion does. In our models of iteration
  and imperative style all the LAMBDA expressions used for control (to
  simulate GO statements, for example) are called in tail-recursive
  fashion.
   1.645 +
   1.646 +** Escape Expressions
   1.647 +Reynolds [Reynolds 72] defines the construction
=escape x in r=
   1.649 +
   1.650 +to evaluate the expression $r$ in an environment such that the variable $x$ is
   1.651 +bound to an escape function. If the escape function is never applied, then
   1.652 +the value of the escape expression is the value of $r$. If the escape function
   1.653 +is applied to an argument $a$, however, then evaluation of $r$ is aborted and the
   1.654 +escape expression returns $a$. {Note J-operator} (Reynolds points out that
   1.655 +this definition is not quite accurate, since the escape function may be called
even after the escape expression has returned a value; if this happens, it
"returns again"!)
   1.658 +
   1.659 +As an example of the use of an escape expression, consider this
   1.660 +procedure to compute the harmonic mean of an array of numbers. If any of the
numbers is zero, we want the answer to be zero. We have a function harmsum
   1.662 +which will sum the reciprocals of numbers in an array, or call an escape
   1.663 +function with zero if any of the numbers is zero. (The implementation shown
   1.664 +here is awkward because ALGOL requires that a function return its value by
   1.665 +assignment.)
   1.666 +#+begin_src algol
   1.667 +begin
  real procedure harmsum(a, n, escfun);
    real array a; integer n; real procedure escfun(real);
  begin
    real sum;
    sum := 0;
    for i := 0 until n-1 do
      begin
        if a[i]=0 then escfun(0);
        sum := sum + 1/a[i];
      end;
    harmsum := sum;
  end;
  real array b[0:99];
  print(escape x in 100/harmsum(b, 100, x));
   1.682 +end
   1.683 +#+end_src
   1.684 +
   1.685 +If harmsum exits normally, the number of elements is divided by the sum and
   1.686 +printed. Otherwise, zero is returned from the escape expression and printed
   1.687 +without the division ever occurring.
This program can be written in SCHEME using the built-in escape
operator =CATCH=; in the Clojure translation the escape function is
instead received as an explicit argument:
   1.690 +
   1.691 +#+begin_src clojure
   1.692 +(in-ns 'categorical.imperative)
(defn harmonic-sum [coll escape]
  ((fn [i sum]
     (cond (= i (count coll)) sum
           (zero? (nth coll i)) (escape 0)
           :else (recur (inc i) (+ sum (/ 1 (nth coll i))))))
   0 0))
   1.699 +
   1.700 +#+end_src
   1.701 +
   1.702 +This actually works, but elucidates very little about the nature of ESCAPE.
We can eliminate the use of CATCH by using continuation-passing. Let us do
for HARMSUM what we did earlier for FACT. Let it take an extra argument =k=,
the continuation, which is called as a function on the result.
   1.706 +
   1.707 +#+begin_src clojure
   1.708 +(in-ns 'categorical.imperative)
(defn harmonic-sum-escape [coll escape k]
  ((fn [i sum]
     (cond (= i (count coll)) (k sum)
           (zero? (nth coll i)) (escape 0)
           :else (recur (inc i) (+ sum (/ 1 (nth coll i))))))
   0 0))
   1.715 +
   1.716 +(let [B (range 0 100)
   1.717 +      after-the-catch println]
   1.718 +  (harmonic-sum-escape B after-the-catch (fn[y] (after-the-catch (/ 100 y)))))
   1.719 +
   1.720 +#+end_src
   1.721 +
Notice that if the escape function is used, then the continuation =k= does
not get called. In this way the division is avoided. This example shows how
an escape function may be considered to be an "alternate continuation".
   1.725 +
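To make the "alternate continuation" reading concrete, here is a small sketch (the name =harmonic-mean=, the continuation names =win= and =lose=, and the sample data are our own, not the paper's): the escape function is simply a second continuation passed alongside the normal one.

```clojure
;; An escape function is just a second continuation: the loop either
;; hands its final result to the normal continuation win, or aborts by
;; handing 0 to the alternate continuation lose.
(defn harmonic-mean [coll win lose]
  (loop [i 0, sum 0]
    (cond (= i (count coll)) (win (/ (count coll) sum))
          (zero? (nth coll i)) (lose 0)
          :else (recur (inc i) (+ sum (/ 1 (nth coll i)))))))

(harmonic-mean [1 2 4] identity identity)    ; => 12/7
(harmonic-mean [1 0 4] identity identity)    ; => 0
```

Neither continuation is privileged: "normal return" and "escape" are just two different places the loop may transfer control to, tail-recursively.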
   1.726 +** Dynamic Variable Scoping
   1.727 +In this section we will consider the problem of dynamically scoped, or
   1.728 +"fluid", variables. These do not exist in ALGOL, but are typical of many LISP
implementations, ECL, and APL. We will see that fluid variables may be
modelled in more than one way, and that one of these is closely related to
continuation-passing.
   1.732 +
   1.733 +*** Free (Global) Variables
   1.734 +Consider the following program to compute square roots:
   1.735 +
#+begin_src clojure
(in-ns 'categorical.imperative)
(defn sqrt [x epsilon]
  (loop [ans 1.0]
    (if (< (Math/abs (- x (* ans ans))) epsilon)
      ans
      (recur (/ (+ ans (/ x ans)) 2.0)))))
#+end_src

This function takes two arguments: the radicand and the numerical tolerance
for the approximation. Now suppose we want to write a program QUAD to compute
solutions to a quadratic equation:

#+begin_src clojure
(defn quad [a b c]
  ((fn [a2 sqrtdisc]
     (list (/ (+ (- b) sqrtdisc) a2)
           (/ (- (- b) sqrtdisc) a2)))
   (* 2 a)
   (sqrt (- (* b b) (* 4 a c)) <tolerance>)))
#+end_src
It is not clear what to write for =<tolerance>=. One alternative is to pick
some tolerance for use by QUAD and write it as a constant in the code. This
is undesirable because it makes QUAD inflexible and hard to change. Another
is to make QUAD take an extra argument and pass it to SQRT:
#+begin_src clojure
(defn quad [a b c epsilon]
  ... (sqrt ... epsilon) ...)
#+end_src
This is undesirable because EPSILON is not really part of the problem QUAD is
supposed to solve, and we don't want the user to have to provide it.
   1.767 +Furthermore, if QUAD were built into some larger function, and that into
   1.768 +another, all these functions would have to pass EPSILON down as an extra
argument. A third possibility would be to pass the SQRT function as an
argument to QUAD (don't laugh!), the theory being to bind EPSILON at the
appropriate level like this:

#+begin_src clojure
(quad 3 4 5 (fn [x] (sqrt x <tolerance>)))
#+end_src

where we define QUAD as:

#+begin_src clojure
(defn quad [a b c sqrt] ...)
#+end_src
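As a runnable sketch of this third alternative (the concrete coefficients and the tolerance value =1e-10= are our own illustration, and =my-sqrt= is named to avoid shadowing the parameter), the caller fixes the tolerance by wrapping SQRT in a one-argument function before handing it to QUAD:

```clojure
;; Newton's-method square root, taking the tolerance explicitly.
(defn my-sqrt [x epsilon]
  (loop [ans 1.0]
    (if (< (Math/abs (- x (* ans ans))) epsilon)
      ans
      (recur (/ (+ ans (/ x ans)) 2.0)))))

;; quad never sees EPSILON: it just applies whatever sqrt it is given.
(defn quad [a b c sqrt]
  (let [a2 (* 2 a)
        sqrtdisc (sqrt (- (* b b) (* 4 a c)))]
    (list (/ (+ (- b) sqrtdisc) a2)
          (/ (- (- b) sqrtdisc) a2))))

;; The caller binds the tolerance at the appropriate level.
(quad 1 -3 2 (fn [x] (my-sqrt x 1e-10)))   ; roots of x^2-3x+2, near 2.0 and 1.0
```

The tolerance is bound at the call site, so QUAD's interface stays free of it, at the cost of threading the SQRT function itself through every caller.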