Cooking with Lisp

Another blog about Lisp, the world's greatest programming language.

Thursday, February 02, 2006

How to Call a Setf Method Directly

Yesterday I wanted to wrap a library's setf method in a setf method of my own, but I wasn't sure how to call the library's setf method directly.

Say I have the following:
(defclass foo ()
  ((foo-slot-1 :accessor foo-slot-1)))

(defparameter *a-foo* (make-instance 'foo))
Then if I macroexpand the setf call (using SBCL):
(macroexpand-1 '(setf (foo-slot-1 *a-foo*) 99))
I get the following (with some cleaning up of the gensyms):
(let* ((#:temp1 *a-foo*))
  (multiple-value-bind (#:temp0) 99
    (funcall #'(setf foo-slot-1) #:temp0 #:temp1)))
;; i.e., (funcall #'(setf foo-slot-1) 99 *a-foo*)
That function argument #'(setf foo-slot-1) looks pretty strange, so I tried to find out what's going on here.

After a lot of research, here's what I found.

From the HyperSpec, funcall has the following syntax:
funcall function &rest args => result*
function---a function designator.
The glossary entry for 'function designator' isn't too helpful:
function designator n. a designator for a function; that is, an object that denotes a function and that is one of: a symbol (denoting the function named by that symbol in the global environment), or a function (denoting itself). The consequences are undefined if a symbol is used as a function designator but it does not have a global definition as a function, or it has a global definition as a macro or a special form. See also extended function designator.
Since #'(setf a) is clearly not a symbol, it must be a function, but that's certainly a weird way to designate a function.

The entry for "extended function designator" isn't by itself any more useful, but provides a small clue:
extended function designator n. a designator for a function; that is, an object that denotes a function and that is one of: a function name (denoting the function it names in the global environment), or a function (denoting itself). The consequences are undefined if a function name is used as an extended function designator but it does not have a global definition as a function, or if it is a symbol that has a global definition as a macro or a special form. See also function designator.
The key here is "function name":
function name n. 1. (in an environment) A symbol or a list (setf symbol) that is the name of a function in that environment. 2. A symbol or a list (setf symbol).
Additionally, there's also these entries:
setf function n. a function whose name is (setf symbol).

setf function name n. (of a symbol S) the list (setf S).
Aha! So the only two ways to specify a function name are with a symbol (the case we're all familiar with) or with the list (setf symbol).

So, we're pretty close. The list (setf foo-slot-1) is a setf function name. You'll note that funcall takes a function designator, not an extended function designator; that is, it takes functions, not function names. Of course, the way to get from a function name to a function is to use the special operator function, better known by its reader macro #'. Thus, we finally have #'(setf foo-slot-1), which is equivalent to (function (setf foo-slot-1)) (no quote needed for (setf foo-slot-1), because function is a special operator). So, if you type #'(setf foo-slot-1), you'll get back something like #&lt;FUNCTION (SETF FOO-SLOT-1)&gt;.
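To make this concrete, here's a short sketch (reusing the FOO class from above) showing that the setf macro, a direct funcall of #'(setf foo-slot-1), and fdefinition applied to the function name all do the same thing:

```lisp
(defclass foo ()
  ((foo-slot-1 :accessor foo-slot-1)))

(defparameter *a-foo* (make-instance 'foo))

;; The usual way; SETF expands into the funcall below.
(setf (foo-slot-1 *a-foo*) 1)

;; Calling the setf function directly; the new value comes first.
(funcall #'(setf foo-slot-1) 2 *a-foo*)

;; FDEFINITION also accepts a function name, i.e., the list (setf symbol).
(funcall (fdefinition '(setf foo-slot-1)) 3 *a-foo*)

(foo-slot-1 *a-foo*) ; => 3
```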

The final oddity is the ordering of the arguments: the new value comes first, followed by the arguments of the place form, i.e.,

(funcall #'(setf symbol) newvalue object)

This is explained in the HyperSpec section describing how setf expansion works, Other Compound Forms as Places:

For any other compound form for which the operator is a symbol f, the setf form expands into a call to the function named (setf f). The first argument in the newly constructed function form is newvalue and the remaining arguments are the remaining elements of place. This expansion occurs regardless of whether f or (setf f) is defined as a function locally, globally, or not at all. For example,

(setf (f arg1 arg2 ...) new-value)

expands into a form with the same effect and value as
(let ((#:temp-1 arg1)          ;force correct order of evaluation
      (#:temp-2 arg2)
      (#:temp-0 new-value))
  (funcall (function (setf f)) #:temp-0 #:temp-1 #:temp-2 ...))
A function named (setf f) must return its first argument as its only value in order to preserve the semantics of setf.
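As a quick illustration of that contract, here's a hand-written setf function (the names are mine, made up for illustration): defining a function named (setf lookup) is all it takes to make (setf (lookup key) value) work, with no defsetf or define-setf-expander needed:

```lisp
(defvar *registry* (make-hash-table))

(defun lookup (key)
  (gethash key *registry*))

;; The new value comes first, per the expansion rule above, and it
;; must be returned to preserve the semantics of SETF.
(defun (setf lookup) (new-value key)
  (setf (gethash key *registry*) new-value)
  new-value)

(setf (lookup 'a) 10)
(lookup 'a) ; => 10
```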
Putting the new value first allows everything following it to look just like the call to the accessing function, even if that call has lots of additional keyword arguments; for example,

(setf (mumble object :keyword1 :keyword2) newvalue)

being transformed into something like

(funcall #'(setf mumble) newvalue object :keyword1 :keyword2)

Anyway, I was able to successfully call the other library's setf method by following the above example.
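For the record, the wrapping pattern looks something like this (with hypothetical names standing in for the library's class and accessor): my setf function does its extra work, then delegates to the library's setf function via funcall:

```lisp
;; Stand-in for the library's class and accessor.
(defclass widget ()
  ((color :initform nil :accessor widget-color)))

;; My wrapper: do some bookkeeping, then hand off to the
;; library's setf function, returning its value.
(defun (setf logged-color) (new-value widget)
  (format t "~&Setting color to ~S~%" new-value)
  (funcall #'(setf widget-color) new-value widget))

(defparameter *w* (make-instance 'widget))
(setf (logged-color *w*) :red)
(widget-color *w*) ; => :RED
```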

My final thought: I have a huge amount of respect for the people who are able to read between the lines of the spec and correctly implement all of the nooks and crannies of Common Lisp.

Tuesday, September 20, 2005

Lisp is older than you are

Great quote in the c.l.l thread The complete lisp+ package by James Crippen:
Lisp is older than you are. Lisp is older than most people posting to this group, if not all of them. With such age comes a tendency towards slow deliberation, a sense that the time spent mulling over decisions and examining all possibilities is in the end more fruitful.

Friday, July 29, 2005

Public Thanks for the Movies

I want to publicly thank Marco Baringer and Rainer Joswig for the recent movies they've provided lately. I think they're great demos for someone starting out or for showing someone interested in Common Lisp.

I had a non-Lisper watch Rainer's movie on DSL creation and he certainly came away pretty impressed by how fast and easy it was to do what Rainer demonstrated.

I really want to thank Marco for providing such a full-featured demonstration of SLIME. A lot of people starting out in Lisp don't quite get how to manage the minute-by-minute development tasks of editing code, debugging it, looking up documentation, etc. Especially debugging. I remember when I started out in Lisp a long time ago, it was really hard for me to wrap my head around what the debugger was really trying to tell me and how to effectively use it. Marco does a great job of showing how to use the SLIME debugger.

I think that just about everyone can learn something about SLIME that they weren't aware of. Even though I was aware of them, I came away with a better awareness of the cross referencing tools, as I never really bothered to learn how they work.

One of the interesting things Marco shows is his use of structured editing, that is, '(' creating balanced parens and ')' moving past the closing paren. He discusses this on his Editing Lisp Code in Emacs page on CLiki (which is a very helpful guide if you haven't read it yet). I remember trying those bindings before and not really being happy with how they worked, but after seeing them in action in the video, I'm going to try them again.

w3m Customization

From watching Marco's slime movie yesterday, I noticed that he's using w3m for browsing HyperSpec pages from within Emacs. I've been doing that as well for about a year and found a couple of w3m settings that make it a bit nicer:

(setq w3m-symbol 'w3m-default-symbol)

w3m-symbol isn't set to w3m-default-symbol by default. With out-of-the-box settings, HyperSpec pages get those weird box characters at the top of the page, as seen in Marco's movie. That's caused by a character-set encoding mismatch: w3m is Japanese-developed software, so it defaults to Asian encodings. Setting this to w3m-default-symbol makes the pages look nicer.

(setq w3m-key-binding 'info)

This sets the key bindings closer to what info-mode uses, which, at least for me, is what my brain prefers inside Emacs. The other option is a Lynx-style binding set, so you may want to try that if you're familiar with Lynx.

Both of these are customizable variables, so you don't actually need to setq them.

I find w3m works really well for reading Lisp documentation: the HyperSpec looks good, CLtL2 looks good, Practical Common Lisp looks good. Bookmarks in w3m are really simple to use: hit 'a' to add the current page, 'v' to view the bookmarks, and 'e' to edit the page w3m is viewing (including the bookmarks page). Bookmarks have decent organization to them, and it's relatively easy to edit them to organize them better.

Thursday, July 14, 2005

People learning Lisp today have it so much easier

Paul McJones' History of Lisp has been blogged before (here, here, here, and here), but it keeps getting new content. It's gotten PDFs of 3 early Lisp books from the 1960's, chronologically:

Lisp 1.5 Programmer's Manual (1962) (the year I was born)
The Programming Language LISP: Its Operation and Applications (1964)
LISP 1.5 Primer (1967)

I had read all of them in 1980 when I entered college and went scouring through the university's library for any books on Lisp. I've got the first two, as Amazon still sells the Programmer's Manual new, and the second book can be had used.

Modern day Lispers should definitely take a look at these and see how much the language has changed.

Especially take a look at "Lisp 1.5 Primer". This was what I actually learned Lisp from. The horror. It starts off with a couple dozen pages on dotted notation, there's no quote reader macro, you don't start defining functions until page 66 or so, and factorial doesn't get defined until page 96. It took a lot of effort to wade through all of that. I remember trying to code up some of the examples in some weird Lisp 1.5 variant that was on the PDP-11/70 at the time, which was different enough from "real" Lisp 1.5 to make the process exceptionally painful.

But look at that second book and be amazed at how much people got done with Lisp 1.5 in the early 1960's, and remember that they didn't have Emacs at the time: they punched this all in on paper cards and submitted them to the mainframe. Imagine trying to get the number of closing parentheses right for an expression many cards back. Some of the programs are advanced even for today, although we would implement them far differently now. Those pioneers in the 1960's didn't shy away from the hard problems because they didn't have a fancy IDE with syntax coloring or auto-completion. Note that you may have heard of some of the contributors: Daniel G. Bobrow and L. Peter Deutsch (who was only a teenager when he wrote the implementation for the PDP-1).

Thursday, April 21, 2005

Take That, O'Reilly!

I just received my copy of Practical Common Lisp! About time, especially since I preordered it. Very nice hardcover; it will be able to withstand rigorous hacking sessions :)

We knew that Peter Norvig was doing the blurb on the back, but on the inside cover, there's praise from a who's who in the Lisp world: Scott Fahlman, Philip Greenspun, John Foderaro, Christian Queinnec, and a few more. Pretty powerful endorsements.

Congratulations, Peter!

Monday, April 11, 2005

Doing My Part to Sell Practical Common Lisp

What can I say that others haven't said about Peter Seibel's new Practical Common Lisp?

It is now the best written introduction-to-intermediate Common Lisp book you can buy. It's great reading, the practical examples are well chosen and meaningful, and he jumps right in with the power of Lisp instead of doing the pedagogically boring "this is a list" and "this is cons, car, and cdr".

I especially like the domain-specific language approach the book uses. In particular, Chapter 24 - Practical: Parsing Binary Files, and the later chapters are excellent examples of what you can really do with macros. In fact, I think Chapter 24 is standalone to the point that, with some warning, you could show it to someone interested in the kind of power you get with macros but who doesn't want to wade through the whole book to get to the punchline. That person wouldn't understand a lot of the code, but would certainly see the power of macros that can write a significant amount of straightforward boilerplate code for you.

I can't wait to get my copy.

Haskell and the Perl Community

I've spent the last 6 months heavily studying Haskell and the semantics of programming languages. Lurking on Lambda the Ultimate has been a great resource for this. Although I was really into functional programming in the early '80s when I was in college, I hadn't paid much attention to the FP world since then. I was very impressed by the amount of progress the FP community has made in areas such as type inference and monads for I/O. I would say they progress faster than any other programming language community. Haskell is a very interesting and powerful language, and I encourage Lispers to take a serious look at it. It is now my 2nd favorite language.

Now, the reason I mention this is that something very interesting is happening between the Haskell and Perl communities: they are starting to cross-fertilize. I don't think you could imagine a stranger pairing. This started in February, when Autrijus Tang started writing a Perl 6 compiler in Haskell. He started on February 1st and had the first version out in 6 days!

Here's why he chose Haskell:
Many Perl 6 features have similar counterparts in Haskell: Perl 6 Rules corresponds closely to Parsec; lazy list evaluation is common in both languages; continuation support can be modeled with the ContT monad transformer, and so on. This greatly simplified the prototyping effort: the first working interpreter was released within the first week, and by the third week we have a full-fledged module for unit testing.
A large portion of a Perl 6 compiler and interpreter was only 4,000 lines of Haskell. That's an incredible amount of productivity and expressive power.

You can follow Autrijus' blog to read the phenomenal daily progress.

Now, even more amazing, is that the project has drawn members from both the Perl and Haskell communities to contribute code. The Perl folks' new experience with Haskell is even feeding back into the Perl 6 design process.

I can't think of any other case where two programming language cultures at opposite ends of the universe have come together like this.

This interview with Autrijus is very interesting reading on how he got hooked up with Haskell, his experiences with it, and the cross-fertilization.

Perl 6 Now

Perl 6 Now - The Core Ideas Illustrated with Perl 5 is the second book that will be of interest to Lispers who need to use Perl. It talks about the features in the upcoming Perl 6 and shows how to use the various CPAN modules that implement most of those features today in Perl 5.

The last 4 chapters are particularly interesting, as they cover the new set operators (any and all), lexical closures, continuations, and coroutines.

One thing that I learned from this book that I wasn't aware of is that every {} block, no matter the context, will be a full, first-class closure. It's nice that Perl 6 will be supporting such a nice, lightweight syntax for that. Perl 6 is going to be a very interesting language, and Lispers and other functional programmers (like Haskellers and O'Camlers) will be well positioned to take advantage of these features.

I really appreciate that the Perl community is embracing the powerful ideas found in Lisp and Haskell, unlike the recent news from the Python community about starting to consider removing some of the "redundant" functional features.

Higher Order Perl

For those who regularly use Perl, you might be interested in two new books that show how to use Perl in more Lisp-ish ways.

The first is Higher-Order Perl, by Mark Jason Dominus. It's basically closures on steroids for Perl. Topics covered are recursion, iterators & generators, memoization, higher-order functions, combinator-style parsing ala Haskell, and domain-specific language generation. It is chock full of goodness.

Mark is a long-time Lisp and Haskell user and mentions influential books in his preface such as Norvig's PAIP ML for the Working Programmer, SICP, and Bird's Introduction to Functional Programming.

Here's what he has to say about Lisp in his preface:
Around 1993 I started reading books about Lisp, and I discovered something important: Perl is much more like LIsp than it is like C. If you pick up a good book about Lisp, there will be a section that describes Lisp's good features. For example, the book Paradigms of Artificial Intelligence Programming, by Peter Norvig, includes a section titled What Makes Lisp Different? that describes seven features of Lisp. Perl shares six of these features; C shares none of them. These are big, important features, features like first-class functions, dynamic access to the symbol table, and automatic storage management. Lisp programmers have been using these features since 1957. They know a lot about how to use these language features in powerful ways. If Perl programmers can find out the things that Lisp programmers already know, they will learn a lot of things that will make their Perl programming jobs easier.
You can read more about the book at this interview.

Tuesday, July 13, 2004

António Menezes Leitão's Recollections of Working with a Lisp Machine

This is another gem linked by Tayssir John Gabbour on the Paper page of ALU. Here, António describes his daily usage of a TI Explorer.
you had all the source code on your hands. You could see and modify the actual code used by the machine, compile it, and the machine would behave immediately according to your modifications. Your code was on exactly the same stand as the system code.
Again, I want to stress that you had all the code for this at your finger-tips. Really. Is nothing like having the sources of Linux. On the Explorers (and presumably on all the other Lisp Machines) when you wanted to see what was going on, you just need to hit the break key (or select a process from the processes inspector), look at the window debugger, point at some function, edit it, read its documentation, etc. You could also modify it, compile it, restart the computation from some point, etc. You could also look at the data-structures, inspect them, modify them, look at the class hierarchy, include some extra mixins to a class definition, compile them, restart, etc, etc.
And then, he has this great anecdote about being able to modify some Lisp program that he didn't have the source to:
I found a bug in one function of that tool. The bug occurred because the function was recursive (tail-recursive, actually) and we were using some very unusually deep class hierarchies and the stack blowed up. Note that there were no hard-limits on the stack size (AFAIK), but the system warned you when the stack reached some pre-defined limits. You could just tell the system to increase the stack size and continue the operation (really, it was just a sort of yes-or-no question, either abort or continue with a larger stack) but this was not very practical from the standpoint of the user of our software, so I decided to correct the bug. Meta-Control-w (If I remember correctly) and there I was looking at the marvelous window debugger, seeing the stack, the local variables, the objects, everything that I wanted. Note that the tool we were using _didn't_ come with sources. Of course, nothing prevented us to look at the disassembled code, so that's what I did. This disassembly was so clear, so documented, that I could generate a copy of the original function within minutes. Well, at least, it compiled to exactly the same code. With that copy, it was now very simple to convert it to an iterative loop, compile it again, restart it, and there it was, the knowledge representation tool resumed its working without any problems. It was like if the bug never happened.

Well, the moral is this: I can't imagine this story happening in
current operating systems.

I have to say, this mirrors my experience with my Symbolics, and I'm still a very naive user. The amount of power and freedom you have in this kind of environment is incredible. As good as all of the current Lisp implementations are, this is still a level above them. Even LispWorks has a ways to go to reach this kind of power.

I'd like to mention some of the other powerful features: a completely hyperspec'd online documentation system (so unfairly criticized by commenters on Lemonodor last week, not John, btw!), input and output that is fully aware of what types they are (McCLIM has this today, btw), and an overall high-level of cohesiveness between all of the tools on the machine. Nothing stands in your way, everything has been programmed to enhance the joy of writing Lisp code.

Another Article on Alan Kay

Via that techno-tabloid Slashdot, comes another article about Alan Kay, this time on Fortune.

I love these quotes:
"We're running on fumes technologically today," he (Alan Kay) says. "The sad truth is that 20 years or so of commercialization have almost completely missed the point of what personal computing is about."
today's PC is too dedicated to replicating earlier tools, like ink and paper. "[The PC] has a slightly better erase function but it isn't as nice to look at as a printed thing. The chances that in the last week or year or month you've used the computer to simulate some interesting idea is zero—but that's what it's for."
If you want to hear more about how uninspiring things are today, watch the video about Croquet that I linked to last week. Alan is very disappointed in the lack of progress in computer science today. He even bemoans that no one in computer science today has any understanding of its history: too many times things are invented without realizing that similar ground has already been covered, oftentimes better. As others have commented about the video, it's very depressing.

Trying to Grok Compiler Macros

Tayssir John Gabbour has been putting up some interesting links to papers on the ALU site. One of them, Increasing Readability and Efficiency in Common Lisp by António Menezes Leitão, discusses some interesting uses of compiler macros. However, I was still pretty confused as to why one would use compiler macros instead of normal macros. Then I remembered that Tayssir also wrote up some lecture notes on lisp-user-meeting-amsterdam-april-2004, where Arthur Lemmens talked about compiler macros (lecture, demo). His lecture, although austere text (a good thing, btw), has a lot of great wisdom on when / how / why to use compiler macros. In particular, Arthur has this slide, which is the clearest explanation of compiler macros I've ever seen:
Only one reason to use compiler macros: speed.

Unlike normal macros, you shouldn't use compiler macros to
define syntactic extensions.

There are several reasons for this:

1. Technical: No guarantee that a compiler macro call is
ever expanded (similar to inline declarations).

2. Semantics: the goal of a compiler macro is to speed
up an existing function. A function has no access to
the program source in which it is used, so it can't
manipulate the program source. The effect of the
expanded compiler macro form should be the same as
the effect of the function, so it shouldn't do
anything that the function can't do.
After reading his slides, the Compiler Macros section of the HyperSpec finally made sense.
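To check my own understanding, here's a minimal sketch (the function is made up for illustration): a compiler macro that folds calls with literal numeric arguments at compile time. When the compiler chooses to use the expansion, it has exactly the same effect as the function, which is Arthur's point:

```lisp
(defun power (base exponent)
  (expt base exponent))

;; Speed-only optimization: fold the call when both arguments are
;; literal numbers. Returning the original FORM declines to expand,
;; and the compiler is free to ignore this expansion entirely.
(define-compiler-macro power (&whole form base exponent)
  (if (and (numberp base) (numberp exponent))
      (expt base exponent)
      form))

(power 2 10)      ; => 1024, possibly computed at compile time
(let ((n 5))
  (power 2 n))    ; arguments not literal, so the ordinary call remains
```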

Thanks to Tayssir and Arthur! By the way, the other papers by Antonio Menezes Leitao (linked on the ALU site above) are also good reads. Highly recommended.