Cooking with Lisp

Another blog about Lisp, the world's greatest programming language.

Tuesday, July 13, 2004

António Menezes Leitão's Recollections of Working with a Lisp Machine

This is another gem linked by Tayssir John Gabbour on the Papers page of the ALU site. Here, António describes his daily use of a TI Explorer:
you had all the source code in your hands. You could see and modify the actual code used by the machine, compile it, and the machine would immediately behave according to your modifications. Your code was on exactly the same footing as the system code.
and
Again, I want to stress that you had all the code for this at your finger-tips. Really. It's nothing like having the sources of Linux. On the Explorers (and presumably on all the other Lisp Machines), when you wanted to see what was going on, you just needed to hit the break key (or select a process from the processes inspector), look at the window debugger, point at some function, edit it, read its documentation, etc. You could also modify it, compile it, restart the computation from some point, etc. You could also look at the data structures, inspect them, modify them, look at the class hierarchy, add some extra mixins to a class definition, compile them, restart, etc., etc.
And then, he has this great anecdote about being able to modify some Lisp program that he didn't have the source to:
I found a bug in one function of that tool. The bug occurred because the function was recursive (tail-recursive, actually) and we were using some unusually deep class hierarchies, and the stack blew up. Note that there were no hard limits on the stack size (AFAIK), but the system warned you when the stack reached some pre-defined limits. You could just tell the system to increase the stack size and continue the operation (really, it was just a sort of yes-or-no question: either abort or continue with a larger stack), but this was not very practical from the standpoint of the user of our software, so I decided to correct the bug. Meta-Control-w (if I remember correctly) and there I was, looking at the marvelous window debugger, seeing the stack, the local variables, the objects, everything that I wanted. Note that the tool we were using _didn't_ come with sources. Of course, nothing prevented us from looking at the disassembled code, so that's what I did. The disassembly was so clear, so well documented, that I could reproduce a copy of the original function within minutes. Well, at least it compiled to exactly the same code. With that copy, it was now very simple to convert it to an iterative loop, compile it, restart it, and there it was: the knowledge representation tool resumed its work without any problems. It was as if the bug had never happened.

Well, the moral is this: I can't imagine this story happening in current operating systems.
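
For anyone who hasn't done this kind of surgery, the transformation he describes is the classic one: a tail-recursive walk up a class hierarchy eats a stack frame per level when the compiler doesn't eliminate the tail call, while an explicit loop runs in constant stack. Here's a rough sketch of the idea, using hypothetical function names and the MOP accessor class-direct-superclasses (the package it lives in varies by implementation), not anything from the actual tool:

;; Tail-recursive walk: follows the first-superclass chain upward.
;; With an unusually deep hierarchy and no tail-call elimination,
;; every level costs another stack frame.
(defun superclass-depth/recursive (class &optional (depth 1))
  (let ((supers (class-direct-superclasses class)))   ; MOP accessor
    (if supers
        (superclass-depth/recursive (first supers) (1+ depth))
        depth)))

;; The same computation as an iterative LOOP: identical result,
;; constant stack usage no matter how deep the hierarchy gets.
(defun superclass-depth/iterative (class)
  (loop for c = class then (first (class-direct-superclasses c))
        for depth from 1
        unless (class-direct-superclasses c)
          return depth))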

I have to say, this mirrors my experience with my Symbolics, and I'm still a very naive user. The amount of power and freedom you have in this kind of environment is incredible. As good as the Lisp implementations currently out there are, this is still a level above them. Even LispWorks has a ways to go to reach this kind of power.

I'd like to mention some of the other powerful features: a completely hyperspec'd online documentation system (so unfairly criticized by commenters on Lemonodor last week, not by John, btw!), input and output that are fully aware of the types of the objects involved (McCLIM has this today, btw), and an overall high level of cohesiveness between all of the tools on the machine. Nothing stands in your way; everything has been programmed to enhance the joy of writing Lisp code.

Another Article on Alan Kay

Via that techno-tabloid Slashdot comes another article about Alan Kay, this time in Fortune.

I love these quotes:
"We're running on fumes technologically today," he (Alan Kay) says. "The sad truth is that 20 years or so of commercialization have almost completely missed the point of what personal computing is about."
and
today's PC is too dedicated to replicating earlier tools, like ink and paper. "[The PC] has a slightly better erase function but it isn't as nice to look at as a printed thing. The chances that in the last week or year or month you've used the computer to simulate some interesting idea is zero—but that's what it's for."
If you want to hear more about how uninspiring things are today, watch that video about Croquet that I linked to last week. Alan is very disappointed in the lack of progress in computer science today. He even bemoans that no one in computer science today has any understanding of its history. Too many times, things are reinvented without anyone realizing that similar ground has already been covered, oftentimes better. As others have commented about the video, it's very depressing.

Trying to Grok Compiler Macros

Tayssir John Gabbour has been putting up some interesting links to papers on the ALU site. One of them, Increasing Readability and Efficiency in Common Lisp by António Menezes Leitão, discusses some interesting uses of compiler macros. However, I was still pretty confused about why one would use compiler macros instead of normal macros. Then I remembered that Tayssir also wrote up some lecture notes on lisp-user-meeting-amsterdam-april-2004, where Arthur Lemmens talked about compiler macros (lecture, demo). His lecture, although austere text (a good thing, btw), has a lot of great wisdom on when / how / why to use compiler macros. In particular, Arthur has this slide, which is the clearest explanation of compiler macros I've ever seen:
Only one reason to use compiler macros: speed.

Unlike normal macros, you shouldn't use compiler macros to define syntactic extensions.

There are several reasons for this:

1. Technical: No guarantee that a compiler macro call is ever expanded (similar to inline declarations).

2. Semantics: the goal of a compiler macro is to speed up an existing function. A function has no access to the program source in which it is used, so it can't manipulate the program source. The effect of the expanded compiler macro form should be the same as the effect of the function, so it shouldn't do anything that the function can't do.
After reading his slides, I found that Section 3.2.2.1 Compiler Macros of the HyperSpec finally made sense.
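
To make that concrete, here's a minimal sketch of my own (a toy example, not something taken from the paper or the slides): an ordinary square function plus a compiler macro that the compiler may, or may not, use to rewrite calls to it, without changing what those calls mean.

;; An ordinary function: callers can FUNCALL it, APPLY it, pass it around.
(defun square (x)
  (expt x 2))

;; A compiler macro for the same name.  The compiler *may* use it to
;; rewrite calls to SQUARE -- folding literal numbers at compile time
;; and turning other calls into a plain multiplication -- or it may
;; ignore it and just call the function.  Either way the result is the
;; same, which is exactly the constraint from Arthur's slide.
(define-compiler-macro square (x)
  (if (numberp x)
      (* x x)                       ; e.g. (square 5) can become 25
      `(let ((y ,x)) (* y y))))     ; e.g. (square (f a)) avoids EXPT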

Thanks to Tayssir and Arthur! By the way, the other papers by António Menezes Leitão (linked on the ALU site above) are also good reads. Highly recommended.

Monday, July 05, 2004

No Croquet until September?

Looks like I was jumping the gun a bit on the Croquet announcement. The download and FAQ pages now say September 2004. Also, it looks like the permanent home is here.