I learned recently that the creator of the Iosevka typeface did so using their own Lisp implementation.
The typeface:
https://github.com/be5invis/Iosevka
The language:
https://github.com/be5invis/PatEL
Their tool which they used to build the language:
https://github.com/be5invis/patrisika
These kinds of projects are deeply inspirational. Striving to achieve one percent of this output would be enough for me. Knowing there are people like Belleve / Renzhi Li and their team in the world -- that I might be able to do something like what they do if I, too, try hard -- is what makes me get out of bed in the morning. It is incredible that there are people like them, doing what they do, and sharing it freely. Thank you so much.
PS: Re: inspo: Hope 16 is next week :D https://www.hope.net/pdf/hope_16_schedule.pdf
I poked around on those links and have so far failed to find any use of this Lisp in that font project. It looks like he just built everything with JavaScript
The glyphs themselves are written in PatEl: see https://github.com/be5invis/Iosevka/blob/main/packages/font-...
for instance. If you look through the commit history, you can see Be5invis gradually elaborating the font itself by editing these files.
Javascript is used as the build language, but not to define the characters.
Don’t Build Your Own Lisp: https://gist.github.com/no-defun-allowed/7e3e238c959e27d4919...
Just so people are aware of it. It is not a good source if you want to learn how to make a lisp that could scale beyond a toy.
You can still learn a bit of C and get a taster of how to make a language, just be aware that some stuff you learn will hold you back in the long term.
>> In our Lisp, when we re-assign a name we’re going to delete the old association and create a new one. This gives the illusion that the thing assigned to that name has changed, and is mutable, but in fact we have deleted the old thing and assigned it a new thing.
> Please don’t do that. What would be the result of evaluating:

    (let ((a 1))
      (+ (let ((a 2)) a) a))
That is off the mark; that example doesn't re-assign anything; it is binding a shadowing instance of a.
Removing a binding and destructively creating a new one in the same environment is a viable strategy for implementing assignment, compared to mutating just the value part while keeping the same binding cell.
You can implement it so that the semantics is absolutely identical; the program simply cannot tell.
If you try to make it nondestructive somehow, then the program can tell; e.g. if you functionally rewrite the current environment to produce a new environment in which the variable is edited to the new value, then closures will reveal the trick: previously captured closures keep seeing the old environment and the old value. (This discussion obviously doesn't apply to dynamic scope.)
If a variable that has been captured by any number of closures is destructively assigned, all the closures must see the new value after the assignment, otherwise you have something other than assignment.
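To make that concrete, here is a rough sketch (in C, with made-up names) of the destructive approach: a binding is a heap cell shared by every closure that captured the environment, so assignment simply overwrites the cell and all of them observe the new value.

    #include <string.h>

    /* Hypothetical interpreter types: a binding cell is shared by every
       closure that captured the environment it lives in. */
    typedef struct obj obj;

    typedef struct binding {
        const char     *name;
        obj            *value;       /* assignment overwrites this slot */
        struct binding *next;
    } binding;

    typedef struct env {
        binding    *bindings;
        struct env *parent;          /* lexical chain */
    } env;

    static binding *lookup(env *e, const char *name) {
        for (; e != NULL; e = e->parent)
            for (binding *b = e->bindings; b != NULL; b = b->next)
                if (strcmp(b->name, name) == 0)
                    return b;
        return NULL;
    }

    /* Assignment destructively updates the existing cell.  Closures hold a
       pointer to the env (and thus to this cell), so they all see the new
       value.  Rebuilding a fresh env with an "edited" value would leave
       previously captured closures looking at the stale copy. */
    static int assign(env *e, const char *name, obj *value) {
        binding *b = lookup(e, name);
        if (b == NULL) return 0;     /* unbound-variable error in a real impl */
        b->value = value;
        return 1;
    }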
I’m sure the author of this is actually a nice person, but this is the first time I’ve actually seen a lisper being smug and obnoxious in the way I’ve seen people satirize a thousand times.
Compared to much of what went on comp.lang.lisp this is pretty gentle. Based on the impression it gives of the book, possibly more gentle than it deserves!
I think the issue is that the topic comes up quite often, as evidenced by the author needing to create the gist. It's easy to come across as a bit "smug and obnoxious" when you're explaining something for the xth time.
I remember chatting with them years ago and they were pretty friendly. But yeah, Lisp does have a specific culture and it can put people off, though I personally never encountered any bad apples.
I checked some of her other stuff and she seems great! I just wouldn’t want to argue with her about Lisp I think.
If the argument was not born from ignorance I think it'd be fine ;) There's a rich history, decades old, of people claiming Lisp can't do X or is bad for Y, or is only a category and the true meaning of Lisp as a category is just having s-expression syntax. Some Lispers get more touchy when such things come up yet again. Attempts to correct misconceptions do work a little bit but just like bad and false popsci memes (remember when the "you only use 10% of your brain" myth was more popular?) the false memes seem to spread more easily. It takes a long time to overcome them, if ever.
Well, they are writing about someone who purports to teach people how to implement programming languages, but then botches it rather badly.
With the exception of one or two sentences, it sounded pretty neutral to me... maybe I just didn't read it closely enough? The "dogmatism" of the technical criticism (assuming the post is accurate) really does seem justified to me—e.g. if the book uses dynamic scoping without even a disclaimer like "this is a bad idea, but we're doing it for simplicity", that alone makes it a bad recommendation for beginners IMO.
I don't know, it doesn't seem overly smug to me. It seems borne of experience and helpfulness.
"And a parser generator is very overkill for a Lisp parser - just use recursive descent!" -- it's interesting that, as simple as Lisp is to parse, this is the same advice you'd get from experienced C/C++ compiler writers, from modern Clang to Stroustrup in 1993.
Here's a more balanced take:
"Build Your Own Lisp" is an opinionated, idiosyncratic take on one style of programming in C and one way to implement a Lisp-like language.
If you go into it treating it like an intro textbook to the "standard" way of implementing an interpreter, it may lead you astray. Worse, it may lead you astray without realizing it if it's your first book on the topic.
On the other hand, if you approach it as following an author as they deliberately wander off the well-trod path of implementing Lisps with recursive descent and all the other classic techniques, then you can have a good time and maybe get some interesting ideas out of it.
For what it's worth, I quite enjoyed the book. But I had enough programming language experience already to know when the author was teaching you the basics versus teaching you their own thing.
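For reference, the "classic technique" of recursive descent mentioned above really is tiny for s-expressions. A rough sketch in C, where cons/mksym/mknum/nil are assumed constructors from whatever object model you already have, and error handling (unterminated lists, EOF) is omitted:

    #include <ctype.h>
    #include <stdlib.h>

    /* Assumed constructors from the host object model. */
    typedef struct value value;
    extern value *cons(value *car, value *cdr);
    extern value *mksym(const char *name, size_t len);
    extern value *mknum(long n);
    extern value *nil;

    static void skip_ws(const char **s) {
        while (isspace((unsigned char)**s)) (*s)++;
    }

    static value *read_expr(const char **s);

    /* list := expr* ')' */
    static value *read_list(const char **s) {
        skip_ws(s);
        if (**s == ')') { (*s)++; return nil; }
        value *head = read_expr(s);            /* recurse for each element */
        return cons(head, read_list(s));
    }

    /* expr := '(' list | number | symbol */
    static value *read_expr(const char **s) {
        skip_ws(s);
        if (**s == '(') { (*s)++; return read_list(s); }
        const char *start = *s;
        while (**s && !isspace((unsigned char)**s) && **s != '(' && **s != ')')
            (*s)++;
        char *end;
        long n = strtol(start, &end, 10);
        if (end == *s && end != start) return mknum(n);
        return mksym(start, (size_t)(*s - start));
    }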
The more the world becomes aggregated, summarized, averaged, and watered down, the more I crave stuff like this that is quirky and unique.
You shouldn't use it to learn C either. It's full of bad C.
This is a solid take. I also had the same opinion on the book. You're better off doing the Norvig Lisp-in-Python journey and then branching out on your own.
He makes many good points in his criticism but I'm still wondering what language one's supposed to use if not C.
My lisp has a conservative mark-and-sweep garbage collector. The mark phase spills all registers and then walks the entire stack looking for pointers to lisp objects. I managed to implement the stack walking through some convoluted pointer nonsense but even C could not express the notion of spilling registers, I had to write assembly code for each architecture.
I have no idea how such a garbage collector would be implemented in Rust or Zig.
The same exact way you did it in C - by relying on implementation-specific features such as assembly.
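For anyone curious what the conservative stack scan itself looks like: the semi-portable approximation that collectors like Boehm's fall back on when no per-architecture assembly is available is to flush the callee-saved registers onto the stack with setjmp and then scan the stack word by word. Not guaranteed by the C standard, but it works on common ABIs. A rough sketch with hypothetical gc_* hooks:

    #include <setjmp.h>
    #include <string.h>

    /* Assumed hooks into the collector (hypothetical names). */
    extern void  gc_mark_if_object(void *maybe_ptr);  /* marks it if it looks like a heap object */
    extern char *gc_stack_bottom;                     /* recorded at startup, e.g. address of a local in main */

    /* Conservatively scan a memory range for anything that might be a pointer. */
    static void scan_range(char *lo, char *hi) {
        if (lo > hi) { char *t = lo; lo = hi; hi = t; }   /* stack may grow either way */
        for (char *p = lo; p + sizeof(void *) <= hi; p += sizeof(void *)) {
            void *candidate;
            memcpy(&candidate, p, sizeof candidate);      /* sidestep alignment/aliasing issues */
            gc_mark_if_object(candidate);
        }
    }

    void gc_mark_roots_from_stack(void) {
        /* On common ABIs setjmp dumps the callee-saved registers into the
           jmp_buf, which lives in this stack frame, so scanning the stack
           afterwards also "sees" values currently held in registers. */
        jmp_buf regs;
        setjmp(regs);

        char *stack_top = (char *)&regs;                  /* an address within the current frame */
        scan_range(stack_top, gc_stack_bottom);
    }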
That sounds horrible. What are better resources?
MAL, the original "make a lisp", is a bit terse on details but gives you the freedom to use different languages, and yet still provides a rough blueprint:
https://github.com/kanaka/mal
Discussed here, a few times:
https://news.ycombinator.com/from?site=github.com/kanaka
Thanks! Macroexpanded:
Mal – Make a Lisp - https://news.ycombinator.com/item?id=26924344 - April 2021 (40 comments)
Mal – Make a Lisp, implemented in 79 languages - https://news.ycombinator.com/item?id=21670442 - Nov 2019 (11 comments)
Mal – Make a Lisp, in 68 languages - https://news.ycombinator.com/item?id=15226110 - Sept 2017 (69 comments)
Make your own Lisp - https://news.ycombinator.com/item?id=13967401 - March 2017 (32 comments)
Mal – Make a Lisp - https://news.ycombinator.com/item?id=12720777 - Oct 2016 (1 comment)
Make a Lisp in Nim - https://news.ycombinator.com/item?id=9145360 - March 2015 (44 comments)
Make a Lisp - https://news.ycombinator.com/item?id=9121448 - Feb 2015 (41 comments)
Lisp implemented in under 1K of JavaScript - https://news.ycombinator.com/item?id=9109225 - Feb 2015 (16 comments)
MAL is pretty good, albeit less hand-holding: https://github.com/kanaka/mal
Writing a Lisp in Ocaml: https://bernsteinbear.com/blog/lisp/00_fundamentals/
Not Lisp-specific but Crafting Interpreters is also great: https://craftinginterpreters.com/
I'd say that the best is probably the book "Lisp in Small Pieces".
Be aware that it's some 500 pages and builds several interpreters and then a couple of compilers, as I recall. If you read it through, you'll come out the other end with a pretty good understanding of how Lisp implementations actually work.
Lisp In Small Pieces by Christian Queinnec
It takes a very thoughtful approach to introducing an increasingly complex Scheme implementation. It doesn't shy away from the complexity that many LISP implementation tutorials try to push under the rug.
I started from https://github.com/videogamepreservation/abuse which ... well, isn't perfect but it IS fun and the original game was, too.
I just nano-blogged my thoughts about this yesterday: https://BI6.US/CO/N/20250803.HTML#/080401
Microblogs are 200-500 characters, so I was expecting a nanoblog to be an order of magnitude less than that :sweat_smile:
Or technically, three orders of magnitude smaller? (half a character per post)
oh. the measure is how much code was used to implement it. at about 200 lines of javascript (to parse the text, convert it to json and render it as static HTML) it's at least a thousand times fewer lines of code than the early scala code that implemented twitter.
[edit - though it looks like handlebars (that I use for generating HTML from templates) has about 30k lines in files that end in .js, so maybe twitter is 30m lines of code?]
[second edit - hmm... maybe i'll change the name to "mini-blog" to signal it's a blog that's 25% the performance of a "mainframe-blog" but 10% the cost. though "mini-blog" might be more appropriate: halfway between a "normal" blog entry and a "micro-blog."]
also... I'm a computer scientist so "three orders of magnitude" means 8. ;-)
As others have pointed out, this is as much about learning C as it is about making a Lisp.
If you're interested in the latter, Peter Norvig has a little project that builds a stripped-down Scheme interpreter in Python. It takes some shortcuts, provides only a few functions (about 30) in its environment, and recognizes only 5 special forms (`quote`, `if`, `define`, `set!`, and `lambda`), but the whole thing is less than 150 lines and very informative if you're new to that kind of thing.
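To give a flavour of why it fits in so few lines: the evaluator is essentially one dispatch over those five forms plus procedure application. This is not Norvig's code (his is Python); just the rough shape, sketched in C with hypothetical helpers:

    /* Hypothetical types and helpers; the point is the shape of the dispatch. */
    typedef struct value value;
    typedef struct env   env;

    extern int    is_symbol(value *x);
    extern int    is_number(value *x);
    extern int    head_is(value *x, const char *sym);  /* (quote ...), (if ...), ... */
    extern value *arg(value *x, int n);                /* nth element of the form */
    extern value *env_get(env *e, value *sym);
    extern void   env_set(env *e, value *sym, value *v);
    extern value *make_lambda(value *params, value *body, env *e);
    extern value *apply(value *fn, value *args, env *e);
    extern value *eval_args(value *x, env *e);
    extern int    is_truthy(value *v);

    value *eval(value *x, env *e) {
        if (is_number(x)) return x;                    /* constants evaluate to themselves */
        if (is_symbol(x)) return env_get(e, x);        /* variable reference */
        if (head_is(x, "quote"))  return arg(x, 1);
        if (head_is(x, "if"))
            return eval(is_truthy(eval(arg(x, 1), e)) ? arg(x, 2) : arg(x, 3), e);
        if (head_is(x, "define") || head_is(x, "set!")) {
            /* define creates a binding, set! updates one; collapsed here for brevity */
            value *v = eval(arg(x, 2), e);
            env_set(e, arg(x, 1), v);
            return v;
        }
        if (head_is(x, "lambda")) return make_lambda(arg(x, 1), arg(x, 2), e);
        return apply(eval(arg(x, 0), e), eval_args(x, e), e);   /* procedure call */
    }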
https://norvig.com/lispy.html
The iso-9899.info C language website also includes this book in their "Stuff That Should Be Avoided" section.
https://www.iso-9899.info/wiki/Books#Stuff_that_should_be_av...
I concur with Stuff That Should Be Avoided.
That page also seems to recommend using lex and yacc, so … what are we to do with this information?
Nothing wrong with lex and yacc.
Or GNU flex and bison. ;o)
I feel called out - I have read and learned from 4 of the things on that list.
I am also really bad at C, though, so "Can confirm", I guess?
Huh? It doesn't
Edit: nvm, found it on the bigger list
Related:
Build Your Own Lisp - https://news.ycombinator.com/item?id=36103946 - May 2023 (12 comments)
Learn C and build your own Lisp (2014) - https://news.ycombinator.com/item?id=35726033 - April 2023 (45 comments)
Learn C and build your own Lisp (2014) - https://news.ycombinator.com/item?id=27598424 - June 2021 (86 comments)
Learn C and Build Your Own Lisp (2014) - https://news.ycombinator.com/item?id=17478489 - July 2018 (86 comments)
Learn C and build your own Lisp - https://news.ycombinator.com/item?id=10474717 - Oct 2015 (49 comments)
Learn C and build your own Lisp - https://news.ycombinator.com/item?id=7530427 - April 2014 (145 comments)
This is MUCH more than a 'Build Your Own Lisp'. To the point of almost being anything but.
This is an amazing resource for getting started with learning C by making your own "programming language", independent of any Lisp conventions.
For me, the most 'lispy' aspect of 'making your own lisp' is prebaked by the author through the use of their own prebuilt parser library 'mpc'. (I was unable to find a link to the source in the book, so https://github.com/orangeduck/mpc )
I was unable to find any instance of 'car' or 'cdr' or 'caddar' and their like, which I feel is the real 'build your own lisp' epiphany.
https://en.wikipedia.org/wiki/CAR_and_CDR
The parser is so widely, and wildly, useful that it is independent of notation style: for instance, Lisp's nearly ubiquitous 'Polish notation' (or its variants, such as 'Cambridge Polish notation').
Perfect example:
Under 'Chapter 9: Reading Expressions':
> Don't Lisps use Cons cells?
> Other Lisps have a slightly different definition of what an S-Expression is. In most other Lisps S-Expressions are defined inductively as either an atom such as a symbol or number, or two other S-Expressions joined, or cons, together.
> This naturally leads to an implementation using linked lists, a different data structure to the one we are using. I choose to represent S-Expressions as a variable sized array in this book for the purposes of simplicity, but it is important to be aware that the official definition, and typical implementation are both subtly different.
https://www.buildyourownlisp.com/chapter9_s_expressions#Read...
This is an awesome educational resource.
I think I would promote it more broadly than "Build Your Own Lisp".
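For comparison, the cons-cell representation the book sidesteps is tiny in C. A minimal sketch (names are made up):

    /* The classic representation: an s-expression is either an atom or a
       pair of pointers, and a list is a chain of pairs ending in nil. */
    typedef enum { T_NIL, T_NUMBER, T_SYMBOL, T_PAIR } tag;

    typedef struct lval {
        tag t;
        union {
            long        number;
            const char *symbol;
            struct { struct lval *car, *cdr; } pair;   /* the cons cell */
        } as;
    } lval;

    /* (1 2) becomes cons(1, cons(2, nil)) -- contrast with the book's
       "variable sized array" lval, which stores its children contiguously. */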
Unfortunately, part of the pragmatic usefulness of newer Lisps (Clojure and Janet) is the abandonment of CAR and CDR in favor of efficient distinct data structure types backed by efficient bases (tries, arrays, arraylists, hashmaps) and a focus on homoiconicity as the winning feature, separated forcefully from the idea of linked lists and CAR/CDR.
I'm honestly not sure why this sentence starts with "unfortunately".
Linked lists are comically inefficient on modern hardware and naming two of your fundamental operations based on CPU instructions from a chip from the 60s is not what I would call good API design.
1954, actually. But it's a minor thing. You don't need to know anything about the IBM 704 to write Lisp. A cons cell is a specific data structure and its components have specific, if unusual names. There's a lot of unintuitively named symbols in Lisp that are better targets for criticism than car and cdr.
> efficient distinct data structure types backed by efficient bases
...do not require abandoning CAR and CDR, I think. See all the other lisps having vectors, hashmaps, etc. Perhaps you're talking about code representation? But I don't think that's true for all 'traditional' lisps either. Also, there's been some innovation regarding lists: https://docs.racket-lang.org/reference/treelist.html
I don't quite understand this. How is Clojure more efficient than Common Lisp?
Would you explain further?
From a compute standpoint I guess I'm wrong. But from a modeling power perspective, I definitely prefer the clarity and uniformity of Clojure's standard library functions. Maybe that has something to do with its distinct data types, maybe not.
What do you mean by 'distinct data types'? Most lisps are strongly typed. SBCL even accepts type declarations.
Data structure types. In standard CL and Scheme, data structures are implemented at their base using cons cells, and their interpretation as tables, trees, queues, etc. is up to library functions. Unless I have somehow misinterpreted the selling points of classic Lisps, because Clojure and Janet data structures sell themselves as being not built this way. Clojure makes a different trade-off by building all its data structures off of hash array-mapped tries. But Janet goes out of its way to use efficient and closely-mapped base stores, even if the contained elements are dynamic Janet objects.
> In standard CL and Scheme, data structures are implemented at their base using cons cells
That's not true. Both CL and Scheme have other data structures besides cons cells, and that's been true for the Lisp family of languages for nearly 70 years now.
This bizarre belief that everything is a cons cell in Lisp and Scheme needs to go away.
Thanks for courteously linking me to the relevant documents! Very productive and good-natured of you.
Flavors, from 1979 or so.
https://www.softwarepreservation.org/projects/LISP/MIT/nnnfl...
LOOPS, from about the same time.
https://interlisp.org/documentation/2024-loops-book-1.pdf
More general discussion in a OOPSLA contribution from 1986.
https://web.archive.org/web/20220817140051/https://interlisp...
The 1988 book about CLOS, the approach that was later accepted in ANSI Common Lisp.
https://doc.lagout.org/programmation/Lisp/Object-Oriented%20...
> Thanks for courteously linking me to the relevant documents! Very productive and good-natured of you.
Thanks for the sarcasm! Very productive and good-natured of you.
For your reference:
LISP 1.5 manual: https://www.softwarepreservation.org/projects/LISP/book/LISP...
Arrays were present in 1960. Admittedly, not much else but clear evidence that even then it wasn't just cons cells.
https://www.lispworks.com/documentation/HyperSpec/Front/Cont... - Common Lisp Hyper Spec which describes data structures other than lists and cons cells.
For someone who initiated hostility, you're a dismal failure at supporting your arguments. I'm not reading two entire manuals to find the citations you're referring to and should've cited yourself.
> I'm not reading two entire manuals to find the citations you're referring to and should've cited yourself.
C-f that PDF for "array". For the other manual, I linked the TOC. It's right there on the page (arrays and hash tables, and you can follow up with structures and objects) and there's no reason to read the entire manual. I figured most people knew how to use a table of contents, I apologize if I expected too much from you.
Someone seems to have saved my old self-compiling scheme-to-c compiler in about 1k lines of scheme code. https://github.com/veqqq/llvm_scheme/blob/main/compile.ccode... (also an llvm version)
Maybe I should read and compare it. Mine was a really slow PoC inspired by SICP; is that book still used in courses somewhere?
i’m grateful to the author for making their work available online for free.
i once did an exercise like this myself (just the code not the book) for fun and found it extremely gratifying even though the code does not survive and never made it into any of my other projects as i had hoped at the outset.
mine got to be around 5 kloc with all the error handling but i wasn’t optimizing for keeping it short. i’m impressed by the many super brief ones that others with deeper understanding have built.
the point of view that this is really about learning C might have been buttressed further by starting with an existing super brief personal lisp and reading through that in a structured way; something that i personally would still like to do, and that i semi-resorted to when debugging my way through the eval of the y-combinator, which was one of the moments that exposed my poor design choices and the flaws i wasn't cognizant of when doing simple expression evaluation. building a proper test harness was also a big deal as i went, which seems like a highly relevant bit to highlight in a journey like this.
some references to existing high-quality short personal lisps and schemes might also be a welcome addition.
Is there a resource which compares Lisps (expressiveness, limitations, available special forms, ...)? I often read about lisp 1 and 2.0, clojure being a lisp 1.5 (because of the callable keywords if iirc).
Dabbling in LLMs, I think that Lisps could be a very interesting format for exposing tools to LLMs, i.e. prompting an LLM to craft programs in a Lisp and then processing (by that I mean parsing, correcting, analyzing and evaluating) those programs within the system to achieve the user's goal.
> lisp 1 and 2.0
Do you mean Lisp-1 and Lisp-2 as in the number of namespaces?
https://dreamsongs.com/Separation.html - Goes into depth on the topic including pros and cons of each in the context of Common Lisp standardization at the time (ultimately arguing in favor of Lisp-2 for Common Lisp on grounds of practicality, but not arguing strictly for either in the future).
Common Lisp was a, more or less, unification of the various Lisps, not Scheme, that had developed along some path starting from Lisp 1.5 (some more direct than others). They were all Lisp-2s because they all kept the same Lisp 1.5 separation between functions and values. Scheme is a Lisp-1, meaning it unifies the namespaces. The two main differences you'll find are that in CL (and related Lisps) you'll need to use `funcall` where in Scheme you can directly use a function in the head position of an s-expr:

    (let ((f ...))      ;; something evaluating to a function
      (f ...))          ;; Scheme
      (funcall f ...))  ;; Lisp
I may be old, but I used to use the pure Lisp that Chaitin included in C with his papers to learn languages a few times.
Not as complete as this version of the language, but apparently ~300 lines of even REXX was enough, although I never tried REXX.
I've been working this problem from different angles for a while now:
https://github.com/codr7/shi-c
https://github.com/codr7/hacktical-c
Are there any such build-a-lisp books/guides using modern c++?
Like most of these tutorials, this stops right where things get interesting. Tail-Call Optimization, Continuations, CPS, Call/CC... those are the things that are tricky to implement, and without those the language is only a toy language.
Then again, creating a toy language is a worthwhile goal in itself, so kudos to everyone who follows this through to the end
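For what it's worth, proper tail calls in a tree-walking interpreter usually come down to one change: instead of recursing on the expression in tail position, loop with a new (expression, environment) pair so the C stack stays flat. A rough sketch with made-up types and helpers:

    /* Hypothetical interpreter types and helpers. */
    typedef struct value value;
    typedef struct env   env;

    extern value *eval_non_tail(value *expr, env *e);  /* ordinary recursive eval */
    extern int    is_self_evaluating(value *expr);
    extern int    is_if(value *expr);
    extern value *if_condition(value *expr);
    extern value *if_branch(value *expr, int truthy);
    extern int    is_truthy(value *v);
    extern int    is_call(value *expr);
    extern void   prepare_call(value *expr, env *e, value **body_out, env **env_out);

    value *eval(value *expr, env *e) {
        for (;;) {
            if (is_self_evaluating(expr))
                return expr;
            if (is_if(expr)) {
                value *test = eval_non_tail(if_condition(expr), e);
                expr = if_branch(expr, is_truthy(test));   /* tail position: loop, don't recurse */
                continue;
            }
            if (is_call(expr)) {
                /* evaluate operator and operands, bind parameters, then
                   continue with the callee's body in the callee's environment */
                prepare_call(expr, e, &expr, &e);          /* tail position again */
                continue;
            }
            return eval_non_tail(expr, e);                 /* everything else */
        }
    }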
Given how this is about building and compiling programming languages, a portrait of Admiral Grace Hopper would have been more appropriate than Ada Lovelace.
Or Jean Sammet (author of a very comprehensive book on programming languages in the 1960s), or Lois Haibt (involved in the development of the original Fortran compiler), or Frances Allen (did key work at IBM on optimization in Fortran compilers), or Sister Mary Keller (involved in developing Dartmouth Basic), or Adele Goldberg (key developer of Smalltalk), or Barbara Liskov (CLU, and many other things)... those are off the top of my head, though a web search would find many more, I'm sure.
Oh, I forgot Lynn Conway (computer architecture)
or just ask AI to do it for you :D
https://gist.github.com/intellectronica/593885fcb02b0d10c4b9...