I wonder if EWD would have had the same opinion if he were alive today, with every Unicode font having the APL characters immediately available on the screen.
Did he feel the language design was bad, or would having TTF fonts able to show "rho", "iota", and "grade up" have removed one or more of his objections?
One can appreciate striving for simplicity (a programming language that can be taught and explained with pen and paper), but one must also consider that computers are meta-devices.
Before computers, we could write things only on paper, either with our hands or a typewriter. So, naturally, when computers came about, the way of thinking about programming was very text-driven, with an emphasis on what a typewriter could represent.
But then code could be written directly on computers, opening up more typesetting possibilities, since keyboards were no longer bound by the mechanical limitations of typewriters. You could add keys and combinations to your heart's desire, and they would be natively digital and unlimited.
Now, with graphics, both 2D and 3D, and a myriad of other HIDs, shouldn't we try to make another cognitive jump?
It's very strange to see handwriting lumped in with typewriting, to be described as limited relative to screens! Iverson notation was a 2D format (both in handwriting and typeset publications) making use of superscripts, subscripts, and vertical stacking like mathematics. It was linearized to allow for computer execution, but the designers described this as making the language more general rather than less:
> The practical objective of linearizing the typography also led to increased uniformity and generality. It led to the present bracketed form of indexing, which removes the rank limitation on arrays imposed by use of superscripts and subscripts.
(https://www.jsoftware.com/papers/APLDesign.htm)
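For concreteness, here is a minimal Dyalog-style sketch of that bracketed indexing (A and the index values are just illustrative, and the default index origin ⎕IO←1 is assumed):

          A ← 2 3 4 ⍴ ⍳24    ⍝ rank-3 array: 2 planes, 3 rows, 4 columns
          A[2;3;1]           ⍝ one semicolon-separated subscript per axis
    21

Each axis gets its own slot between the brackets, so rank is bounded only by how many semicolons you write, not by how many script levels fit around a letter.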
I think this is more true than they realized at that time. The paper describes the outer product, which in Iverson notation was written as a function with a superscript ∘ and in APL became ∘. followed by the function. In both cases only primitive functions were allowed, that is, single glyphs. However, APL's notation easily extends to any function used in an outer product, no matter how long. But Iverson notation would have you write it in the lower half of the line, which would quickly start to look bad.
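To make the contrast concrete, here is a sketch in Dyalog-style APL (the dfn {⍺+10×⍵} is just an arbitrary stand-in for "any function, no matter how long"):

          1 2 3 ∘.× 10 20          ⍝ outer product with a primitive
    10 20
    20 40
    30 60
          1 2 3 ∘.{⍺+10×⍵} 10 20   ⍝ same operator, arbitrary function
    101 201
    102 202
    103 203

The linear ∘. prefix is indifferent to how wide its operand is; a superscript or stacked form has to squeeze it above or below the line.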
All those things can be specified in text. Fortress was a language that had the facility to use mathematical notation. Turned out to be not so compelling iirc.
https://en.wikipedia.org/wiki/Fortress_(programming_language)
We do have syntax highlighting these days. And our editors work like hypertext, where I can go to definitions, find usages, get inheritance hierarchies, etc. Quite a ways from your suggestion, but also a few steps removed from a typewriter.
I think any such leap would have to be a really big one to catch on though, due to inertia. Colorforth is not exactly popular, and I can't think of any other examples.
With LLMs you can write your code by hand-drawing a diagram on a touch screen.
This has been possible since Sketchpad in 1963.
We already did, it's natural language. Talk to your computer and get code, aka vibe coding.
Ironically, I think the examples given in the post validate Dijkstra's points instead of disproving them as the author intended.
How so?
I'm struggling to see how Roger's manipulation of the expressions without executing each line validates Dijkstra's point...
"This is easy to understand, see:"
5 lines of completely inscrutable symbols follow.
If you are expecting someone to learn a completely new notational language before you can communicate a basic algorithm, you have gone wrong somewhere.
You could also similarly write down merge sort in lambda calculus, which is interesting as an exercise, but not especially useful as working code, or as a way to explain how merge sort works.
The opening paragraphs about how people enamoured of a shiny gadget will overlook a terrible interface immediately bring to mind modern-day LLMs.
I don't find this observation of Dijkstra's to be one of his best. If there is a gadget that does a thing that no other gadget does, what does it even mean for the interface to be "terrible"? How can you even know if the interface is terrible, given that a better one has yet to be invented? Maybe the interface is as good as it can be for the tool in question.
I also don't love your mapping of this observation onto modern LLMs. The interface of an LLM is natural language text, along with some files written in plain text or markdown. Can it be improved? Undoubtedly! But as a baseline, it doesn't seem half bad to me. If it is so terrible, it should not be hard to propose an interface that will be significantly more productive. Can you?