Before FORTRAN (1951 to 1953), Heinz Rutishauser had used the term "Befehl", which means "command", for his high-level programming language. (For what we today call a "program", he had used the term "Rechenplan", which means "computation plan".)
I have named the experimental languages I am toying with "Plan <x>" (for Programming LANguage, but also because it is a good term as you pointed out). Originally I was using numbers (probably would skip "Plan 9" like Microsoft skipped Windows 9) but the experiments went in different directions and implying an order was misleading. So I switched to star names: Plan Sirius, Plan Rigel, Plan Vega...
Is it? "Statement", defined by the dictionary as "the expression of an idea or opinion through something other than words.", seems quite apt. Symbols may end up resembling words, which perhaps is your contention, but technically they are only symbols.
Best I can tell, all usable definitions surrounding "Command" seem to suggest an associated action, which isn't true of all statements in imperative programming.
> Best I can tell, all usable definitions surrounding "Command" seem to suggest an associated action, which isn't true of all statements in imperative programming.
The defining characteristic of a programming "statement" is that it can perform some action (even if not all of them do), whereas statements in the usual everyday sense are inert. So it's not a good term.
> The defining characteristic of a programming "statement" is that it can perform some action
Given a declaration "statement" such as:
int x;
What is the expected action? Memory allocation... I guess? Does any compiler implementation actually do that? I believe, in the age of type systems being all the rage, one would simply see that as a type constraint, not as a command to act upon.
Expressing an idea (the values used henceforth should be integers) seems more applicable, no?
In a language that requires you to declare variables before you use them, it clearly does something - you couldn't do "x = 5;" before, and now you can. If you're trying to understand the program operationally (and if you're not then why are you thinking about statements?) it means something like "create a variable called x", even if the implementation of that is a no-op at the machine code level.
> I believe, in the age of type systems being all the rage, one would simply see that as a type constraint, not as a command to act upon.
But in that case by definition one would not be seeing it as a statement! Honestly to me it doesn't really matter whether "int x;" is an expression, a statement, or some mysterious third thing (in the same way that e.g. forward declaring a function isn't a statement or an expression). When we're talking about the distinction between statements and expressions we're talking primarily about statements like "x = x + 1;", which really can't be understood as a statement in the everyday English sense.
> Memory allocation... I guess? Does any compiler implementation actually do that?
Well, and linguistics. A "statement" in the grammatical sense is a sentence that is declarative in form (as opposed to, in English, interrogative, imperative, or exclamatory) and which thus ostensibly has a truth-value.
Now that it's commonly "tap or click the button" I might be down with the next gen using "call". Anything, as long as they don't go with "broh, the button".
Not a scientific theory, but an observation. New words propagate when they "click". They are often short, and for one reason or another enable people to form mental connections and remember what they mean. They spread rapidly between people like a virus. Sometimes they need to be explained, sometimes people get it from context, but afterward people tend to remember them and use them with others, further propagating the word.
A fairly recent example, "salty". It's short, and kinda feels like it describes what it means (salty -> tears -> upset).
It sounds like "call" is similar. It's short, so easy to say for an often used technical term, and there are a couple of ways it can "feel right": calling up, calling in, summoning, invoking (as a magic spell). People hear it, it fits, and the term spreads. I doubt there were be many competing terms, because terms like "jump" would have been in use to refer to existing concepts. Also keep in mind that telephones were hot, magical technology that would have become widespread around this same time period. The idea of being able to call up someone would be at the forefront of people's brains, so contemporary programmers would likely have easily formed a mental connection/analogy between calling people and calling subroutines.
Side-note: for me, at least, "salty" isn't anything to do with tears; in my idiolect when someone's "salty" it doesn't mean they're sad, it means they're angry or offended or something along those lines. The metaphor is more about how salt (in large quantities) tastes strong and sharp.
(Which maybe illustrates that a metaphor can succeed even when everyone doesn't agree about just what it's referring to, as you're suggesting "call" may have done.)
This is actually a great example. I think for a lot of these words, everyone has a different interpretation, but somehow it ends up working for everyone. It's not important that everyone remembers them the same way, but that everyone feels like the word fits, even if they have different independent interpretations.
True enough, there are endless subtleties to language (English in particular) that make words simultaneously vague and extremely specific.
Wiktionary defines bitter as "cynical and resentful", which doesn't quite capture the "more longer-lasting, somewhat less emotional condition" part of it.
> ... but those of any complexity presumably ought to be in a library — that is, a set of magnetic tapes in which previously coded problems of permanent value are stored.
Oddly, I never thought of the term library as originating from a physical labelled and organized shelf of tapes, until now.
At https://youtu.be/DjhRRj6WYcs?t=338 you can see EDSAC's original linker, Margaret Hartrey, taking a library subroutine from the drawer of paper tapes. (But you should really watch the whole thing, of course!)
I don't see .lib being all that common, but it might just be what I'm used to. `.so` or `.dll` or such sure (though to be fair, the latter does include the word library.)
.lib is the traditional extension for static libraries and import libraries on Windows. Every .dll has an accompanying .lib. (Msys2 uses their own extensions, namely .a for static libraries and .dll.a for import libraries.)
It's not _that_ they are called libraries, but _why_ they are called libraries. I had assumed, like many others that it was purely by analogy (ie, a desktop), and not that the term originated with a physical library of tapes.
I always thought it was just a metaphor for suddenly leaving whatever you were doing at the time. You’re doing something else, and then you jump on a call. You don’t mosey on over and show up on the call fifteen minutes later.
That has nothing to do with the video aspect, but the group aspect. "Jump", "Hop" and the like are making a group call analogous to a bus ride, where people can jump on and off.
I always thought that the functions did not need a call keyword, as they normally would return a value, so that functions would appear in an assignment. So one just uses the function.
What needed a CALL was a subroutine, which effectively was a named address/label.
Indeed it would be just as possible to GOTO the address/label and then GOTO back. The CALL keyword made the whole transaction more comprehensible.
So in a sense it was similar to calling up someplace using the address number. Often times this would change some shared state so that the caller would then proceed after the call. Think of it as if a 'boss' first calls Sam to calculate the figures, then calls Bill to nicely print the TPS report.
Eventually everything became a function and subroutines were associated with spaghetti...
Now, why is that it's called routine (aka program) and subroutine?
Well, apparently [0], in a 1947 document "Planning and Coding Problems for an Electronic Computing Instrument, Part 1" by H. Goldstine and J. von Neumann it is stated:
"We call the coded sequence of a problem a routine"
Algol 60 also uses the word "call" for parameters as well as functions. It introduced (?) the terms "call by value" and "call by name". For example, in 4.7.5.3: "In addition if the formal parameter is called by value the local array created during the call will have the same subscript bounds as the actual array."
In modern terminology, we call procedures/functions/subroutines and pass arguments/parameters, so "pass by (value|name|reference)" is clearer than "call by (value|name|reference)". But the old terms "call by value" et al have survived in some contexts, though the idea of "calling" an argument or parameter has not.
It's interesting to think of "calling" as "summoning" functions. We could also reasonably say "instantiating", "evaluating", "computing", "running", "performing" (as in COBOL), or simply "doing".
In Mauchly's "Preparation of Problems for EDVAC-Type Machines", quoted in part in the blog post, he writes:
> The total number of operations for which instructions must be provided will usually be exceedingly large, so that the instruction sequence would be far in excess of the internal memory capacity. However, such an instruction sequence is never a random sequence, and can usually be synthesized from subsequences which frequently recur.

> By providing the necessary subsequences, which may be utilized as often as desired, together with a master sequence directing the use of these subsequences, compact and easily set up instructions for very complex problems can be achieved.
The verbs he uses here for subroutine calls are "utilize" and "direct". Later in the paper he uses the term "subroutine" rather than "subsequence", and does say "called for" but not in reference to the subroutine invocation operation in the machine:
> For these, magnetic tapes containing the series of orders required for the operation can be prepared once and be made available for use when called for in a particular problem. In order that such subroutines, as they can well be called, be truly general, the machine must be endowed with the ability to modify instructions, such as placing specific quantities into general subroutines. Thus is created a new set of operations which might be said to form a calculus of instructions.
Of course nowadays we do not pass arguments to subroutines by modifying their code, but index registers had not yet been invented, so every memory address referenced had to be contained in the instructions that referenced it. (This was considered one of the great benefits of keeping the program in the data memory!)
A little lower down he says "initiate subroutines" and "transferring control to a subroutine", and talks about linking in subroutines from a "library", as quoted in the post.
He never calls subroutines "functions"; I'm not sure where that usage comes from, but certainly by BASIC and LISP there were "functions" that were at least implemented by subroutines. He does talk about mathematical functions being computed by subroutines, including things like matrix multiplication:
> If the subroutine is merely to calculate a function for a single argument, (...)
"He never calls subroutines "functions"; I'm not sure where that usage comes from, but certainly by BASIC and LISP there were "functions" that were at least implemented by subroutines."
I think the early BASICs used the subroutine nomenclature for GOSUB, where there was no parameter passing or anything, just a jump that automatically remembered the place to return.
Functions in BASIC, as I remember it, were something quite different. I think they were merely named abbreviations for arithmetic expressions, and simple one-line arithmetic expressions only. They were more similar to very primitive and heavily restricted macros than to subroutines or functions.
FORTRAN had both functions and subroutines. A function returned a value and was invoked in an expression (eg. S=SIN(A)). A subroutine was invoked by calling it (eg. CALL FOPEN(FNAME, PERMS)).
In the flang-new compiler, which builds a parse tree for the whole source file before processing any declarations, it was necessary to parse such things as statement functions initially so that further specification statements could follow them. Later, if it turns out that the function name is an array or regular function returning a pointer, the parse tree gets patched up in place and the statement becomes the first executable statement.
Another use of call is in barn / folk / country dancing where a caller will call out the moves. “Swing your partner!”, “Dosey do!”, “Up and down the middle!” etc. Each of these calls describes a different algorithmic step. However, it’s unlikely this etymology has anything to do with function calling: each call modifies the global state of the dancers with no values returned to the caller.
If the librarian's 'call for' meaning was indeed the one originally intended, then even in Mauchly's 1947 article you can already see slippage towards the more object-oriented or actor-oriented 'call to' meaning.
You wrote "If the librarian's 'call for' meaning was indeed the one originally intended"
I doubt that "the librarian's 'call for' meaning was indeed the one originally intended" (not that you say it!).
Or maybe there is no difference between what you mean by "call for" and what the first quote meant by "call in". The subroutine is called / retrieved - and control is transferred to that subroutine. (I wouldn't say that when doctors are called to see a patient, for example, they are called in the librarian's meaning.)
I love this sort of cs history. I’m also curious—why do we “throw” an error or “raise” an exception? Why did the for loop use “for” instead of, say, “loop”?
It's been ages, but I think an earlier edition of Stroustrup's The C++ Programming Language explains that he specifically chose "throw" and "catch" because more obvious choices like "signal" were already taken by established C programs and choosing "unusual" words (like "throw" and "catch") reduced chance of collision. (C interoperability was a pretty big selling point in the early days of C++.)
The design of exception handling in C++ was inspired by ML, which used 'raise', and C++ might itself have used that word, were it not already the name of a function in the C standard library (as was 'signal'). The words 'throw' and 'catch' were introduced by Maclisp, which is how Stroustrup came to know of them. As he told Guy Steele at the second ACM History of Programming Languages (HOPL) conference in 1993, 'I also think I knew whatever names had been used in just about any language with exceptions, and "throw" and "catch" just were the ones I liked best.'
"Exception" comes from hardware/state machines, and the name reflects that they may not be error cases. For instance, if an embedded device is waiting for a button press, then pressing that button will put the MCU into an exceptional/special state (eg interrupting execution or waking up from sleep).
But surely those days are in the past. We should call them objections, because they're always used for errors and never for control flow. [1] After all, that would be an unconditional jump, and we've already settled that those are harmful.
I think “raise” comes from the fact that the exception propagates “upward” through the call stack, delegating the handling of it to the next level “up.” “Throw” may have to do with the idea of not knowing what to do/how to handle an error case, so you just throw it away (or throw your hands up in frustration xD). Totally just guessing
I suspect it comes from raising flags/signals (literally as one might run a flag up a flag pole?) to indicates CPU conditions, and then that terminology getting propagated from hw to sw.
idk; there are some circles in which boolean variables are called flags, but I've never seen them referred to as being raised or unraised/lowered, only set and unset
That's a great question. The first language I learned was python, and "for i in range(10)" makes a lot of sense to me. But "for (int i = 0; i < 10; i++)" must have come first, and in that case "for" is a less obvious choice.
FORTRAN IV, at least the version I used on the PDP-11 running RSX, did not have a DO-loop. Just IF and GO TO. But it did have both logical and arithmetic IF.
I seem to remember people used to say "call it up" when asking an operator to perform a function on a computer when the result was displayed in front of the user.
reminder that the author is a convicted rapist. for more details skip to the relevant section of the following article: https://izzys.casa/2024/11/on-safe-cxx/
The answer is in the article. You "call" functions because they are stored in libraries, like books, and like books in libraries, when you want them, you identify which one you want by specifying its "call number".
Another answer also in the article is that you call them like you call a doctor to your home, so they do something for you, or how people are “on call”.
The “call number” in that story comes after the “call”. Not the other way around.
I'd like to point at CALL, a CPU instruction, and its origins. I'm not familiar with this, but it could reveal more than programming languages do. The instruction has been present at least since the first Intel microprocessors and microcontrollers were designed.
> Dennis Ritchie encouraged modularity by telling all and sundry that function calls were really, really cheap in C. Everybody started writing small functions and modularizing. Years later we found out that function calls were still expensive on the PDP-11, and VAX code was often spending 50% of its time in the CALLS instruction. Dennis had lied to us! But it was too late; we were all hooked...
The first Intel microcontrollers were the 8008 and the 4004, designed in 01971. This is 13 years after this post documents the "CALL X" statement of FORTRAN II in 01958, shortly after which (he documents) the terminology became ubiquitous. (FORTRAN I didn't have subroutines.)
> A procedure statement serves to initiate (call for) the execution of a procedure, which is a closed and self-contained process with a fixed ordered set of input and output parameters, permanently defined by a procedure declaration. (cf. procedure declaration.)
Note that this does go to extra pains to include the term "call for", but does not use the phraseology "call a procedure". Rather, it calls for not the procedure itself, but the execution of the procedure.
However, it also uses the term "procedure call" to describe either the initiation or the execution of the procedure:
> The procedure declaration defining the called procedure contains, in its heading, a string of symbols identical in form to the procedure statement, and the formal parameters occupying input and output parameter positions there give complete information concerning the admissibility of parameters used in any procedure call, (...)
Algol 58 has a different structure for defining functions rather than procedures, but those too are invoked by a "function call"—but not by "calling the function".
I'm not sure when the first assembly language with a "call" instruction appeared, but it might even be earlier than 01958. The Burroughs 5000 seems like it would be a promising thing to look at. But certainly many assembly languages from the time didn't; even MIX used a STJ instruction to set the return address in the return instruction in the called subroutine and then just jumped to its entry point, and the PDP-10 used PUSHJ IIRC. The 360 used BALR, branch and link register, much like RISC-V's JALR today.
algol 60 implementation entailed debate over "call by value" and "call by name"; though I can't say I know the when/where origin of those precise phrases, it seems a good place to look.
This original sense of 'call' (deriving from the 'call number' used to organize books and other materials in physical libraries) was also responsible for the coinage of 'compiler', according to Grace Hopper: 'The reason it got called a compiler [around 1952] was that each subroutine was given a "call word", because the subroutines were in a library, and when you pull stuff out of a library you compile things. It's as simple as that.'
I invoke them :]
I fondly remember blessing objects in perl.
https://perldoc.perl.org/functions/bless
Shame we can't use `cast`, that's already being used for types. And `conjure` probably only works for object constructors.
let's use conjure for method interactions and reify as a special case when that method is a constructor. the more this sounds like medieval alchemy the more I can get behind it, and I've already got misbehaving daemons
And 'summon' is just used for demons.
A keyword exclusively used for network calls, in particular microservices ahaahaha
AFAIK there isn't a wizardly joke programming language yet--perhaps that was considered redundant--but you can use "giving" and "taking" if you're a rock-star programmer. :P
https://codewithrockstar.com/
Erlang/Elixir use "cast" method name when sending messages to their GenServer actor processes.
There are two terms.
* call - to send and await a reply
* cast - to send and not await a reply
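The distinction is easy to sketch outside the BEAM, too. A toy Python version (invented names; a thread plus queues standing in for a GenServer, not the real API):

    import queue, threading, time

    inbox = queue.Queue()

    def worker():
        while True:
            msg, reply_to = inbox.get()
            if reply_to is not None:
                reply_to.put(msg.upper())     # call: the sender is waiting on this
            else:
                print("cast received:", msg)  # cast: just a side effect, no reply

    threading.Thread(target=worker, daemon=True).start()

    def call(msg):                 # send and await a reply
        reply_to = queue.Queue()
        inbox.put((msg, reply_to))
        return reply_to.get()

    def cast(msg):                 # send and don't await a reply
        inbox.put((msg, None))

    print(call("hello"))           # HELLO
    cast("goodbye")                # returns immediately
    time.sleep(0.1)                # let the worker thread print before exit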
I bind my functions before I apply them
Synonyms of "invoke" include "call forth" and "conjure up."
Or a "call sheet", which is the list of cast and crew needed for a particular film shoot
The functional peeps even `apply` them.
I've never been quite sure when I'm applying data to a function, or applying a function to some data
to my mind the function has always been the definition of the process and the data what that process, well, applies to. so you apply the function to the data and get an output.
This tripped me up last week when I was reading Futamura’s paper on partial evaluation (i .e., Futamura projections). I’m not used to the “apply” terminology for functions, even though I learned the lambda calculus in grad school over a decade ago.
Is the data changing or the function changing?
In a functional language, neither
(though new data is created as a result of running the function, technically this is guaranteed to not affect the inputs due to the function having to be pure)
(perhaps this is excessively pedantic)
When running the routine, is it typically the function that changes or the input data that changes?
If it's the same function running on different data, then you are applying the function to the data. If it's the same data running in a different function, then you are applying the data to the function.
let's just map pedantic->precise and call it a day :)
Same here, but I will say "a function call", not "a function invocation".
Invoking X sounds deliciously alchymistic, by the way.
I just connected the dots... The identifier digits in the Dewey Decimal classification are called "call numbers"!
Yes, that's in the second paragraph of the article.
I took this to be a pun on "decimal" and "connecting the dots" but perhaps I'm just wired to see puns where they weren't necessarily intended.
I think Library Science has contributed much more to modern computing than we ever realize.
For example, I often bring up images of card catalogs when explaining database indexing. As soon as people see the index card, and then see that there is a wooden case for looking up by Author, a separate case for looking up by Dewey Decimal, etc., the light goes on.
https://en.wikipedia.org/wiki/Library_catalog
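The analogy translates to code almost one-for-one. A toy sketch (made-up records; two Python dicts standing in for the two wooden cases, each mapping a key to a shelf position):

    books = [
        {"call_number": "005.13", "author": "Abelson", "title": "SICP"},
        {"call_number": "005.45", "author": "Aho",     "title": "Compilers"},
    ]

    # Each wooden case is an index: key -> position of the record on the shelf.
    by_author      = {b["author"]: i for i, b in enumerate(books)}
    by_call_number = {b["call_number"]: i for i, b in enumerate(books)}

    print(books[by_author["Aho"]]["title"])           # Compilers
    print(books[by_call_number["005.13"]]["title"])   # SICP

Both cases point into the same shelf of books, which is exactly the trick a secondary index plays.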
I’m old enough to have used (book) dictionaries and wooden case card catalogues in the local library. So when I learned about hashmaps/IDictionary a quarter century ago, that’s indeed the image that helped me grok the concept.
However, the metaphor isn’t that educationally helpful anymore. On more than one occasion I found myself explaining how card catalogues or even (book) dictionaries work, only to be met with the reply: “oh, so they’re basically analogue hashmaps”.
A few months ago I was asking myself, why is the "standard" width of a terminal 80 characters? I assumed it had to do with the screen size of some early PCs.
But nope, it's because a punch card was 80 characters wide. And the first punch cards were basically just index cards. Another hat tip to the librarians.
I guess this is the computing equivalent of a car being the width of two horses' asses...
And the use of punch cards in computing is (arguably) inspired by the textile industry. Punched cards were used to configure looms starting way back in the 1700s.
For those who aren't already familiar, James Burke in Connections has a great summary/rundown of this technological progression from Jacquard loom to census tabulator to computer punchcard, starting around the 36 minute mark here (though the whole video is worth watching).
https://youtu.be/z6yL0_sDnX0?si=NtyyybZSGCKmktdG&t=2150
One of the teachers at my high school (40 years ago) somehow got permission to offer an entire class revolving around Connections. Several of my friends were taking it, so I decided to as well, and I had to drop band to make it fit.
Both band directors showed up at one of my classes the first day of school, dragged me to an empty room, and browbeat me into returning to band. It was the right choice for my social life, but I did hear great things about that class.
I always think of the indexes in the back of books as the origin of the term in computing. The relationship to "index cards" never even occurred to me!
Index cards are not different from index entries in a book. Index is “indicator” or “pointer” in Latin (hence the name of the finger).
Yeah, I've got the vocab, I just never associated index cards with that use case because growing up we only ever used index cards for labeling, note-taking, arts and crafts, and flash cards.
A year or two ago I explained the dusty wooden drawers in the corner of the library using a database analogy.
Context and preconceptions are everything!
Absolutely! I confess I assumed this was explicitly part of how things were taught. With the "projected" attributes in the index being what you would fit on a card. I'm surprised that so many seem to not have any mental model for how this stuff works.
Young people may not have seen a card catalog these days.
I just explain that hard disks are just a continuous list of 1s and 0s, and then ask what we need to do if people want to find anything. People are able to infer the idea of needing some sort of structure.
Even if it did contribute more, it still contributed an absolutely minuscule amount to modern computing.
I'm Finnish, and in Finnish we translate "call" in the function context as "kutsua", which when translated back into English becomes "invite" or "summon".
So at least in Finnish the word "call" is considered to mean what it means in a context like "a mother called her children back inside from the yard" instead of "call" as in "Joe made a call to his friend" or "what do you call this color?".
Just felt like sharing.
In German, we use "aufrufen", which means "to call up" if you translate it fragment-by-fragment, and in pre-computer times would (as far as I know) only be understood as "to call somebody up by their name or number" (like a teacher asking a student to speak or get up) when used with a direct object (as it is for functions).
It's also separate from the verb for making a phone call, which would be "anrufen".
Interesting! Across the lake in Sweden we do use "anropa" for calling subprograms. I've never heard anyone in that context use "uppropa" which would be the direct translation of aufrufen.
Same in Dutch. “Oproepen” means “to summon”. We would use “aanroepen”.
I had always assumed it meant call as in to call up or call over. I'd never considered that people may think it meant call as in name
In Russian it's kind of similar; the back-translation is "call by phone", "summon", "invite".
'Summon' implies a bit of eldritch horror in the code, which is very appropriate at times. 'Invite' could also imply it's like a demon or vampire, which also works!
An interesting aside and/or follow-on:
> Structure and Interpretation of Computer Programs (SICP) is a computer science textbook by Massachusetts Institute of Technology professors Harold Abelson and Gerald Jay Sussman with Julie Sussman. It is known as the "Wizard Book" in hacker culture.[1] It teaches fundamental principles of computer programming, including recursion, abstraction, modularity, and programming language design and implementation.
https://en.wikipedia.org/wiki/Structure_and_Interpretation_o...
Another one:
> "Be, and it is" (Arabic: كُن فَيَكُونُ; kun fa-yakūn) is a Quranic phrase referring to the creation by God′s command.[1][2] In Arabic, the phrase consists of two words; the first word is kun for the imperative verb "be" and is spelled with the letters kāf and nūn. The second word fa-yakun means "it is [done]".[3]
> (image of verse 2:117) https://commons.wikimedia.org/wiki/File:002117_Al-Baqrah_Urd...
> The phrase at the end of the verse 2:117

> Kun fa-yakūn has its reference in the Quran cited as a symbol or sign of God's supreme creative power. There are eight references to the phrase in the Quran:[1]
https://en.wikipedia.org/wiki/Be,_and_it_is
I wonder if “Be” would be imperative or functional. Is “Be” another name for `Unit()`? Or, would it be more Lisp-like `(be unit)`?
“be” was a reserved keyword in early Rust, intended to be used in place of “return” (or “ret”, as it was spelled at the time) for tail calls.
In Norway it is «funksjonskall», or literally function call. And the «kall» / «call» is just that, a call for something.
Unrelated, but if you happen to be in Helsinki, you should join the local Hacker News meetup: https://bit.ly/helsinkihn
[Wilkes, Wheeler, Gill](https://archive.org/details/programsforelect00wilk) [1951] uses the phrase “call in” to invoke a subroutine.
Page 31 has:
> … if, as a result of some error on the part of the programmer, the order Z F does not get overwritten, the machine will stop at once. This could happen if the subroutine were not called in correctly.
> It will be noted that a closed subroutine can be called in from any part of the program, without restriction. In particular, one subroutine can call in another subroutine.
See also the program on page 33.
The Internet Archive has the 1957 edition of the book, so I wasn’t sure if this wording had changed since the 1951 edition. I couldn’t find a paper about EDSAC from 1950ish that’s easily available to read, but [here’s a presentation with many pictures of artefacts from EDSAC’s early years](https://chiphack.org/talks/edsac-part-2.pdf). It has a couple of pages from the 1950 “report on the preparation of programmes for the EDSAC and the use of the library of subroutines” which shows a subroutine listing with a comment saying “call in auxiliary sub-routine”.
(Blog author here.) Nice find! I'll try to incorporate that into the post at some point. The 1951 first edition is also on archive.org (borrowable with a free login account): https://archive.org/details/preparationofpro0000maur/page/32...
I agree, it looks like this 1951 source is using "call in" to mean "invoke" — the actual transfer of control — as opposed to "load" or "link in." Which means this 1951 source agrees with Sarbacher (1959), and is causing me right now to second-guess my interpretation of the MANIAC II (1956) and Fortran II (1958) sources — could it be that they were also using "call in" to mean "invoke" rather than "indicate a dependency on"? Did I exoticize the past too much, by assuming the preposition in "call in" must be doing some work, meaning-wise?
Somewhat less frequently, I also hear "invoke" or "execute", which is more verbose but also more generic.
Incidentally, I find strange misuses of "call" ("calling a command", "calling a button") one of the more grating phrases used by ESL CS students.
Invoke comes from Latin invocō, invocāre, meaning “to call upon”. I wouldn’t view it as a misuse, but rather a shortening.
> Invoke comes from Latin invocō, invocāre, meaning “to call upon”.
(In the way you'd call upon a skill, not in the way you'd call upon a neighbor.)
But vocare (the voco in invoco) is how you'd call a neighbor
Which fits nicely for calling a function - you use its skill, you don't call for a chat.
It's like calling a person by saying their name loudly.
> strange misuses of "call"
My favourite (least favourite?) is using “call” with “return”. On more than one occasion I’ve heard:
“When we call the return keyword, the function ends.”
I remember someone in university talking about the if function (which ostensibly takes one boolean argument).
In Excel formulas everything is a function. IF, AND, OR, NOT are all functions. It is awkward and goes against what software devs are familiar with, but there are probably more people familiar with the Excel IF function than any other form. Here is an example taken from the docs:

    =IF(AND(A3>B2,A3<C2),TRUE,FALSE)
Excel cell formulas are the most widely used functional programming language in the world.
Yes I stand corrected - we were using C so definitely not a function there.
Sounds like something Prof. John Ousterhout would say:-; The place where this was literally accurate would be Tcl.
I don't know enough Smalltalk to be sure but I think to remember it has a similar approach of everything is an object and I wouldn't be surprised if they'd coerced control flow somehow into this framework.
Also Forth comes to mind, but that would probably be a stretch.
> I don't know enough Smalltalk to be sure but I think to remember it has a similar approach of everything is an object and I wouldn't be surprised if they'd coerced control flow somehow into this framework.
It does. It's been discussed on HN before, even: https://news.ycombinator.com/item?id=13857174
Except
https://news.ycombinator.com/item?id=44513639
I would include the cond function from lisp, or the generalization from lambda calculus
There are languages in which `if` is a function.
In Tcl, `if` is called a "command".
Also in Smalltalk and sclang (Supercollider language)
Or anything Lispy
If takes two or three arguments, but never one. The condition is the one made syntactically obvious in most languages, the consequent is another required argument, and the alternative is optional.
Huh? if (true) {} takes precisely one argument.
That's an application of `if` with one of the arguments empty.
The semantics of `if` require at least `if(cond, clause)`, though more generally `if(cond, clause, else-clause)`
You and Zambyte are both doing the same thing the top level comment is complaining about.
e.g. in C:
https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3220.pdf
in C++: https://eel.is/c++draft/gram.stmt
More examples:
https://docs.python.org/3/reference/grammar.html
https://doc.rust-lang.org/reference/expressions/if-expr.html...
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
expression != argument
They aren't talking about C and its descendants in particular, but more generally. For example in Haskell and Scheme there is only an if function and no if statement. And you're welcome to create an if function in any language you like and use it instead of the native syntax. I like to use an if function in PostgreSQL because it's less cumbersome than a case expression.
So in the abstract, if is a ternary function. I think the original comment was reflecting on how "if (true) ... " looks like a function call of one argument but that's obviously wrong.
this is not quite right. haskell and scheme have if expressions, not if statements. that's not the same as if being a function. if is not, and cannot be, a function in scheme, as it does not have scheme function semantics. specifically, it is not strict, as it does not evaluate all its subexpressions before executing. since haskell is non-strict, if can be implemented as a function, and iirc it is
> since haskell is non-strict, if can be implemented as a function, and iirc it is
"If" can be implemented as a function in Haskell, but it's not a function. You can't pass it as a higher-order function and it uses the "then" and "else" keywords, too. But you could implement it as a function if you wanted:
if' :: Bool -> a -> a -> a
if' True x _ = x
if' False _ y = y

Then instead of writing something like this:

max x y = if x > y then x else y

You'd write this:

max x y = if' (x > y) x y

But the "then" and "else" remove the need for parentheses around the expressions.

Arguments are expressions in Haskell. In abstract, it uses expressions.
Depends on the language! If "if" wasn't a keyword, in Ruby that would be calling a method that takes one positional argument and one block argument, such as `def if(cond, &body) = cond && body.call`. In PureScript that could be a call to a function with signature `if :: Boolean -> Record () -> _`.
But I assume the comment you were replying to was not referring to the conditional syntax from C-like languages, instead referring to a concept of an if "function", like the `ifelse` function in Julia [1] or the `if` form in Lisps (which shares the syntax of a function/macro call but is actually a special form) [2], neither of which would make sense as one argument function.
[1] https://docs.julialang.org/en/v1/base/base/#Base.ifelse
[2] https://www.gnu.org/software/emacs/manual/html_node/elisp/Co...
I frequently see people treating if as if it was "taking a comparison", so: if (variable == true) ...
if should be a function, though sadly many languages aren't good enough to express it and have to make it a builtin.
Try implementing that in most languages and you'll run into problems.
In an imperative programming language with eager evaluation, i.e. where arguments are evaluated before applying the function, implementing `if` as a function will evaluate both the "then" and "else" alternatives, which will have undesirable behavior if the alternatives can have side effects.
In a pure but still eager functional language this can work better, if it's not possible for the alternatives to have side effects. But it's still inefficient, because you're evaluating expressions whose result will be discarded, which is just wasted computation.
In a lazy functional language, you can have a viable `if` function, because it will only evaluate the argument that's needed. But even in the lazy functional language Haskell, `if` is implemented as built-in syntax, for usability reasons - if the compiler understands what `if` means as opposed to treating it as an ordinary function, it can optimize better, produce better messages, etc.
In a language with the right kind of macros, you can define `if` as a macro. Typically in that case, its arguments might be wrapped in lambdas, by the macro, to allow them to be evaluated only as needed. But Scheme and Lisp, which have the right kind of macros, don't define `if` as a macro for similar reasons to Haskell.
One language in which `if` is a function is the pure lambda calculus, but no-one writes real code in that.
The only "major" language I can think of in which `if` is actually a function (well, a couple of methods) is Smalltalk, and in that case it works because the arguments to it are code blocks, i.e. essentially lambdas.
tl;dr: `if` as a function isn't practical in most languages.
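To make the eager-evaluation pitfall concrete, here's a toy sketch in Python (hypothetical `my_if`, not any library's API); the lambda version is the thunking trick described above:

    # In an eager language both branches are evaluated before my_if even runs,
    # so the side effect fires even though that branch is never chosen.
    def my_if(cond, then_val, else_val):
        return then_val if cond else else_val

    def noisy():
        print("side effect!")
        return 1

    my_if(False, noisy(), 0)       # prints "side effect!" anyway

    # The fix used by macros and lazy languages: wrap each branch in a
    # lambda ("thunk") and force only the one that is needed.
    def my_if_lazy(cond, then_thunk, else_thunk):
        return then_thunk() if cond else else_thunk()

    my_if_lazy(False, lambda: noisy(), lambda: 0)   # prints nothing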
Isn't practical in Smalltalk either, so the compiler does something special:

ifFalse: alternativeBlock
"Answer the value of alternativeBlock. Execution does not actually
reach here because the expression is compiled in-line."
^alternativeBlock value
Oh thanks, I didn't know that. I thought it just relied on the explicit code blocks.
But yeah, this is a pretty critical point for optimizations - any realistic language is likely to optimize this sooner or later.
I don't think Haskell needs 'if' to be a construct for compiler optimization reasons; it could be implemented easily enough with pattern matching:
if' :: Bool -> a -> a -> a
if' True x _ = x
if' False _ y = y
The compiler could substitute this if it knew the first argument was a constant.
Maybe it was needed in early versions. Or maybe they just didn't know they wouldn't need it yet. The early versions of Haskell had pretty terrible I/O, too.
With a function version of `if`, in general the compiler needs to wrap the alternatives in closures ("thunks"), as it does with all function arguments unless optimizations make that unnecessary. That's never needed in the syntactic version. That's one significant optimization.
In GHC, `if` desugars to a case expression, and many optimizations flow from that. It's pretty central to the compiler's operation.
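For illustration, a small sketch of that desugaring (abs1 and abs2 are made-up names; the Haskell Report defines `if c then t else e` as precisely this `case` form):

-- These two definitions are equivalent under the Report's rule:
abs1, abs2 :: Int -> Int
abs1 x = if x < 0 then negate x else x
abs2 x = case x < 0 of
           True  -> negate x
           False -> x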
> Maybe it was needed in early versions. Or maybe they just didn't know they wouldn't need it yet.
Neither of these are true. My comment above was attempting to explain why `if` isn't implemented as a function. Haskell is a prime example of where it could have been done that way, the authors are fully aware of that, but they didn't because the arguments against doing it are strong. (Unless you're implementing a scripting-language type system where you don't care about optimization.)
A short search led to this SE post [1], which doesn't answer the "why" but says "if" is just syntactic sugar that turns into `ifThenElse`...
[1]: https://softwareengineering.stackexchange.com/questions/1957...
The post claims that this is done in such a basic way that if you have managed to rebind `ifThenElse`, your rebound function gets called. I didn't confirm this, but I believed it.
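One way to check it yourself: a minimal sketch assuming GHC's RebindableSyntax extension, which is the mechanism that answer describes (the extension also disables the implicit Prelude, hence the explicit import):

{-# LANGUAGE RebindableSyntax #-}
import Prelude

-- With RebindableSyntax, if/then/else syntax desugars to whatever
-- ifThenElse is in scope, so this definition gets called.
ifThenElse :: Bool -> a -> a -> a
ifThenElse True  t _ = t
ifThenElse False _ e = e

main :: IO ()
main = putStrLn (if 1 < (2 :: Int) then "rebound ifThenElse was called" else "unreachable")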
Some people use parentheses around the return value, to make it look like a function call: `return (value);` instead of `return value;`
Eh, "return" is just a very restricted continuation with special syntax… it's a stretch to say you "call" it, but not unjustified.
I've heard that too --- the voice in my head automatically read it in the customary thick Indian accent.
C# seems to like to use "Invoke" for things like delegates or reflected methods. Then it proceeds to use "Call Stack" in the debugger view.
Microsoft devs get paid by the character, I'm not sure that counts.
>Incidentally, I find strange misuses of "call" ("calling a command", "calling a button") one of the more grating phrases used by ESL CS students.
From my own experience, native speakers (who are beginners at programming) also do this. They also describe all kinds of things as "commands" that aren't.
I actually often see the converse with novices, referring to statements (or even entire function decls) as "commands".
"Command" is a better term for what we call "statements" in imperative programming languages. "Statement" in this context is an unfortunate historical term; except in Prolog, these "statements" don't have a truth-value, just an effect. (And in Prolog we call them "clauses" instead.)
True.
In many early computer programming documents the term "order" was used instead of "statement", where "order" was meant as a synonym for "command" and not as referring to the ordering of a sequence.
Occasionally, but much more often (as in Mauchly's cited paper) an "order" was a machine instruction, not a high-level language "statement".
Yes, but that is mostly because in the first few years (including by the time of Mauchly), there were no "high-level" programming languages, so the "orders" composing the text of a program corresponded to instructions directly executable by the machine.
I believe that the term "statement" has been imposed by the IBM publications about FORTRAN, starting in 1956.
Before the first public documents about IBM FORTRAN, the first internal document about FORTRAN, from 1954, used the term "formula" for anything that would later be called an "executable statement", i.e. for many things that would not have been called formulas either before or after that, like IF-formulas, DO-formulas, GOTO-formulas and so on. It used "sentence" for what would later be called "non-executable statements" (i.e. definitions or declarations).
Before FORTRAN (1951 to 1953), for his high-level programming language Heinz Rutishauser had used the term "Befehl", which means "command". (For what we name today "program", he had used the term "Rechenplan", which means "computation plan".)
I suppose if your only tool is a formula translator, everything looks like a formula?
"Plan" is really a much better term than "program". My next compiler will be called "plantran" for "plan translator". "pt" for short.
I have named the experimental languages I am toying with "Plan <x>" (for Programming LANguage, but also because it is a good term as you pointed out). Originally I was using numbers (probably would skip "Plan 9" like Microsoft skipped Windows 9) but the experiments went in different directions and implying an order was misleading. So I switched to star names: Plan Sirius, Plan Rigel, Plan Vega...
Are you writing up your results anywhere?
Is it? "Statement", defined by the dictionary as "the expression of an idea or opinion through something other than words.", seems quite apt. Symbols may end up resembling words, which perhaps is your contention, but technically they are only symbols.
Best I can tell, all usable definitions surrounding "Command" seem to suggest an associated action, which isn't true of all statements in imperative programming.
> Best I can tell, all usable definitions surrounding "Command" seem to suggest an associated action, which isn't true of all statements in imperative programming.
The defining characteristic of a programming "statement" is that it can perform some action (even if not all of them do), whereas statements in the usual everyday sense are inert. So it's not a good term.
> The defining characteristic of a programming "statement" is that it can perform some action
Given a declaration "statement" such as:

int x;

What is the expected action? Memory allocation... I guess? Does any compiler implementation actually do that? I believe, in the age of type systems being all the rage, one would simply see that as a type constraint, not as a command to act upon.

Expressing an idea (the values used henceforth should be integers) seems more applicable, no?
> Given a declaration "statement" such as:
> int x;
> What is the expected action?
In a language that requires you to declare variables before you use them, it clearly does something - you couldn't do "x = 5;" before, and now you can. If you're trying to understand the program operationally (and if you're not then why are you thinking about statements?) it means something like "create a variable called x", even if the implementation of that is a no-op at the machine code level.
> I believe, in the age of type systems being all the rage, one would simply see that as a type constraint, not as a command to act upon.
But in that case by definition one would not be seeing it as a statement! Honestly to me it doesn't really matter whether "int x;" is an expression, a statement, or some mysterious third thing (in the same way that e.g. forward declaring a function isn't a statement or an expression). When we're talking about the distinction between statements and expressions we're talking primarily about statements like "x = x + 1;", which really can't be understood as a statement in the everyday English sense.
> Memory allocation... I guess? Does any compiler implementation actually do that?
Toy/research or embedded compilers do, yes.
That looks like C. In C, declarations aren't statements: https://en.cppreference.com/w/c/language/statements.html
It's Hnlang, where declarations are statements. But perhaps the earlier comment was about C specifically? I admittedly missed it, if so.
The requirement for a truth value is just from how math/logic uses the term.
Well, and linguistics. A "statement" in the grammatical sense is a sentence that is declarative in form (as opposed to, in English, interrogative, imperative, or exclamatory) and which thus ostensibly has a truth-value.
On an old Nokia you follow links by pressing the call button.
Now that it's commonly "tap or click the button" I might be down with the next gen using "call". Anything, as long as they don't go with "broh, the button".
Not a scientific theory, but an observation. New words propagate when they "click". They are often short, and for one reason or another enable people to form mental connections and remember what they mean. They spread rapidly between people like a virus. Sometimes they need to be explained, sometimes people get it from context, but afterward people tend to remember them and use them with others, further propagating the word.
A fairly recent example, "salty". It's short, and kinda feels like it describes what it means (salty -> tears -> upset).
It sounds like "call" is similar. It's short, so easy to say for an often used technical term, and there are a couple of ways it can "feel right": calling up, calling in, summoning, invoking (as a magic spell). People hear it, it fits, and the term spreads. I doubt there were be many competing terms, because terms like "jump" would have been in use to refer to existing concepts. Also keep in mind that telephones were hot, magical technology that would have become widespread around this same time period. The idea of being able to call up someone would be at the forefront of people's brains, so contemporary programmers would likely have easily formed a mental connection/analogy between calling people and calling subroutines.
Side-note: for me, at least, "salty" isn't anything to do with tears; in my idiolect when someone's "salty" it doesn't mean they're sad, it means they're angry or offended or something along those lines. The metaphor is more about how salt (in large quantities) tastes strong and sharp.
(Which maybe illustrates that a metaphor can succeed even when everyone doesn't agree about just what it's referring to, as you're suggesting "call" may have done.)
This is actually a great example. I think for a lot of these words, everyone has a different interpretation, but somehow it ends up working for everyone. It's not important that everyone remembers them the same way, but that everyone feels like the word fits even if they have different independent interpretations.
'Salty' in that context means 'bitter', ironically.
I would say “resentful”, “disgruntled”, “aggrieved”. “Bitter” feels like a longer-lasting, somewhat less emotional condition to me.
And I agree that it has nothing to do with tears. The actual etymology stems from sailors: https://www.planoly.com/glossary/salty
True enough, there are endless subtleties to language (English in particular) that make words simultaneously vague and extremely specific.
Wiktionary defines bitter as "cynical and resentful", which doesn't quite capture the "more longer-lasting, somewhat less emotional condition" part of it.
> ... but those of any complexity presumably ought to be in a library — that is, a set of magnetic tapes in which previously coded problems of permanent value are stored.
Oddly, I never thought of the term library as originating from a physical labelled and organized shelf of tapes, until now.
At https://youtu.be/DjhRRj6WYcs?t=338 you can see EDSAC's original linker, Margaret Hartrey, taking a library subroutine from the drawer of paper tapes. (But you should really watch the whole thing, of course!)
I've never heard of a library being called anything else - look at the common file extension .lib, for example.
I don't see .lib being all that common, but it might just be what I'm used to. `.so` or `.dll` or such, sure (though to be fair, the latter does include the word library).
.lib is the traditional extension for static libraries and import libraries on Windows. Every .dll has an accompanying .lib. (Msys2 uses their own extensions, namely .a for static libraries and .dll.a for import libraries.)
It's not _that_ they are called libraries, but _why_ they are called libraries. I had assumed, like many others, that it was purely by analogy (like "desktop"), and not that the term originated with a physical library of tapes.
There is also the phrase in music, "call and response" - even referencing a return value.
Off topic somewhat but where the hell did the verb 'jump' come from for video calls? I'm always being asked to jump on a call
I assume it's because you're doing something you'd rather not do to benefit somebody else, much like you'd jump on a grenade :p
I'll henceforth take this as the canonical explanation :)
I thought it's like hopping onto a bus. Then it doesn't take a lot for "hop" to change into "jump".
You would jump on a call before video was involved. Not even necessarily a conference call either, you could jump on the horn etc.
It just means to start doing something, no great mystery.
I always thought it was just a metaphor for suddenly leaving whatever you were doing at the time. You’re doing something else, and then you jump on a call. You don’t mosey on over and show up on the call fifteen minutes later.
That's just a standard meaning of the word jump. You're jumping from whatever you were doing to a video call.
It seems like it has the connotation of being spontaneous and not requiring preparation.
This was a result of Zoom's acquisition of the band House of Pain.
Thought it was Kris Kross.
When in reality it was Van Halen.
Maybe it's a hint that the operation is irreversible! :-)
That has nothing to do with the video aspect, but the group aspect. "Jump", "Hop" and the like are making a group call analogous to a bus ride, where people can jump on and off.
> ... Why do we "call" functions?
I always thought that functions did not need a CALL keyword, as they normally return a value, so they would appear in an assignment. So one just uses the function.
What needed a CALL was a subroutine, which effectively was a named address/label.
Indeed it would be just as possible to GOTO the address/label and then GOTO back. The CALL keyword made the whole transaction more comprehensible.
So in a sense it was similar to calling up someplace using the address number. Often times this would change some shared state so that the caller would then proceed after the call. Think of it as if a 'boss' first calls Sam to calculate the figures, then calls Bill to nicely print the TPS report.
Eventually everything became a function and subroutines were associated with spaghetti...
Now, why is it that it's called a routine (aka program) and a subroutine?
> ...why is that it's called "routine”
Well, apparently [0], in a 1947 document "Planning and Coding Problems for an Electronic Computing Instrument, Part 1" by H. Goldstine and J. von Neumann it is stated:
[0]: https://retrocomputing.stackexchange.com/q/20335

Algol 60 also uses the word "call" for parameters as well as functions. It introduced (?) the terms "call by value" and "call by name". For example, in 4.7.5.3: "In addition if the formal parameter is called by value the local array created during the call will have the same subscript bounds as the actual array."
In modern terminology, we call procedures/functions/subroutines and pass arguments/parameters, so "pass by (value|name|reference)" is clearer than "call by (value|name|reference)". But the old terms "call by value" et al have survived in some contexts, though the idea of "calling" an argument or parameter has not.
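Haskell's call-by-need (a lazy cousin of call-by-name) gives a quick feel for why the old distinction mattered; a small sketch:

-- Under call-by-name/need, an argument that is never used is never
-- evaluated, so this prints 1. Under call-by-value the argument
-- would be evaluated before the call, and the program would crash.
constOne :: a -> Int
constOne _ = 1

main :: IO ()
main = print (constOne (error "evaluated only under call-by-value"))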
I've always thought it odd that the one thing the caller and callee need to agree on is called "arguments".
Another interesting one is in games when an effect is said to "proc", which I guess is from a procedure getting called.
Basically, yeah. It actually has its origin in MUDs, from 'spec_proc', short for special procedure.
My friends always seemed to think this one comes from "procure".
It's interesting to think of "calling" as "summoning" functions. We could also reasonably say "instantiating", "evaluating", "computing", "running", "performing" (as in COBOL), or simply "doing".
In Mauchly's "Preparation of Problems for EDVAC-Type Machines", quoted in part in the blog post, he writes:
> The total number of operations for which instructions must be provided will usually be exceedingly large, so that the instruction sequence would be far in excess of the internal memory capacity. However, such an instruction sequence is never a random sequence, and can usually be synthesized from subsequences which frequently recur.
> By providing the necessary subsequences, which may be utilized as often as desired, together with a master sequence directing the use of these subsequences, compact and easily set up instructions for very complex problems can be achieved.
The verbs he uses here for subroutine calls are "utilize" and "direct". Later in the paper he uses the term "subroutine" rather than "subsequence", and does say "called for" but not in reference to the subroutine invocation operation in the machine:
> For these, magnetic tapes containing the series of orders required for the operation can be prepared once and be made available for use when called for in a particular problem. In order that such subroutines, as they can well be called, be truly general, the machine must be endowed with the ability to modify instructions, such as placing specific quantities into general subroutines. Thus is created a new set of operations which might be said to form a calculus of instructions.
Of course nowadays we do not pass arguments to subroutines by modifying their code, but index registers had not yet been invented, so every memory address referenced had to be contained in the instructions that referenced it. (This was considered one of the great benefits of keeping the program in the data memory!)
A little lower down he says "initiate subroutines" and "transferring control to a subroutine", and talks about linking in subroutines from a "library", as quoted in the post.
He never calls subroutines "functions"; I'm not sure where that usage comes from, but certainly by BASIC and LISP there were "functions" that were at least implemented by subroutines. He does talk about mathematical functions being computed by subroutines, including things like matrix multiplication:
> If the subroutine is merely to calculate a function for a single argument, (...)
"He never calls subroutines "functions"; I'm not sure where that usage comes from, but certainly by BASIC and LISP there were "functions" that were at least implemented by subroutines."
I think the early BASICs used the subroutine nomenclature for GOSUB, where there was no parameter passing or anything, just a jump that automatically remembered the place to return to.
Functions in BASIC, as I remember it, were something quite different. I think they were merely named abbreviations for simple one-line arithmetic expressions, and arithmetic expressions only. They were more similar to very primitive and heavily restricted macros than to subroutines or functions.
Right, that's what Algol-58 functions were, too. I think FORTRAN also has a construct like this, but I forget.
FORTRAN had both functions and subroutines. A function returned a value and was invoked in an expression (eg. S=SIN(A)). A subroutine was invoked by calling it (eg. CALL FOPEN(FNAME, PERMS)).
I should probably just Google this, but how did you define the functions?
FORTRAN also had single-expression function definitions ("statement functions"), e.g.

F(X) = X*X + 1.0

Naturally this is syntactically identical to an array element assignment, which is one of the many things that made compiling FORTRAN so much fun.

Yeah, that's also almost exactly the same as the Algol-58 syntax for defining such functions. And BASIC, except you had to say

DEF FNF(X) = X*X + 1.0

and the function name had to start with FN.

s/had/has/
In the flang-new compiler, which builds a parse tree for the whole source file before processing any declarations, it was necessary to parse such things as statement functions initially so that further specification statements could follow them. Later, if it turns out that the function name is an array or regular function returning a pointer, the parse tree gets patched up in place and the statement becomes the first executable statement.
Yes, sorry, I didn't mean to imply that Fortran doesn't exist any more. I compiled Fortran on my cellphone as recently as last year.
For those who like that sort of thing, Fortran is exactly the sort of thing that they like.
Another use of call is in barn / folk / country dancing where a caller will call out the moves. “Swing your partner!”, “Dosey do!”, “Up and down the middle!” etc. Each of these calls describes a different algorithmic step. However, it’s unlikely this etymology has anything to do with function calling: each call modifies the global state of the dancers with no values returned to the caller.
If the librarian's 'call for' meaning was indeed the one originally intended, then even in Mauchly's 1947 article you can already see slippage towards the more object-oriented or actor-oriented 'call to' meaning.
That’s a big “if” when the only support for that theory seems to appear a decade later than the “call in” meaning.
To be honest I'm not sure what you're saying here, or what you think I'm saying.
You wrote "If the librarian's 'call for' meaning was indeed the one originally intended"
I doubt that "the librarian's 'call for' meaning was indeed the one originally intended" (not that you say it!).
Or maybe there is no difference between what you mean by "call for" and what the first quote meant by "call in". The subroutine is called / retrieved - and control is transferred to that subroutine. (I wouldn't say that when doctors are called to see a patient, for example, they are called in the librarian's meaning.)
I love this sort of cs history. I’m also curious—why do we “throw” an error or “raise” an exception? Why did the for loop use “for” instead of, say, “loop”?
It's been ages, but I think an earlier edition of Stroustrup's The C++ Programming Language explains that he specifically chose "throw" and "catch" because more obvious choices like "signal" were already taken by established C programs and choosing "unusual" words (like "throw" and "catch") reduced chance of collision. (C interoperability was a pretty big selling point in the early days of C++.)
The design of exception handling in C++ was inspired by ML, which used 'raise', and C++ might itself have used that word, were it not already the name of a function in the C standard library (as was 'signal'). The words 'throw' and 'catch' were introduced by Maclisp, which is how Stroustrup came to know of them. As he told Guy Steele at the second ACM History of Programming Languages (HOPL) conference in 1993, 'I also think I knew whatever names had been used in just about any language with exceptions, and "throw" and "catch" just were the ones I liked best.'
I'm guessing "throw" came about after someone decided to "catch" errors.
As for "raise", maybe exceptions should've been called objections.
"Exception" comes from hardware/state machines, and the name reflects that they may not be error cases. For instance, if an embedded device is waiting for a button press, then pressing that button will put the MCU into an exceptional/special state (eg interrupting execution or waking up from sleep).
But surely those days are in the past. We should call them objections, because they're always used for errors and never for control flow. [1] After all, that would be an unconditional jump, and we've already settled that those are harmful.
[1] https://docs.python.org/3/library/exceptions.html#StopIterat...
I think “raise” comes from the fact that the exception propagates “upward” through the call stack, delegating the handling of it to the next level “up.” “Throw” may have to do with the idea of not knowing what to do/how to handle an error case, so you just throw it away (or throw your hands up in frustration xD). Totally just guessing
I suspect it comes from raising flags/signals (literally, as one might run a flag up a flagpole?) to indicate CPU conditions, and then that terminology getting propagated from hardware to software.
… which could come from raising the voltage of a signal indicating a condition.
Sounds plausible. Some of the earliest exception-handling systems did not have any semantic difference between CPU exceptions and software exceptions.
You can still use SIGUSR1 and SIGUSR2 for it.
idk, because there are some circles in which boolean variables are called flags, but I've never seen them referred to as being raised or unraised/lowered, only set and unset.
I would have thought it came from the concept of 'raising an issue' or even 'raising a stink'.
You throw something catchable and if you fail to catch it it’ll break. Unless it’s a steel ball.
You raise flags or issues, which are good descriptions of an exception.
That's a great question. The first language I learned was python, and "for i in range(10)" makes a lot of sense to me. But "for (int i = 0; i < 10; i++)" must have come first, and in that case "for" is a less obvious choice.
BASIC had the FOR-NEXT loop back in 1964.
10 FOR N = 1 TO 10
20 PRINT " ";
30 NEXT N
The C language was first released in 1972; it had the three-part `for` with assignment, condition, and increment parts.
This reminds me of a little bit of trivia. In very old versions of BASIC, "FORD=STOP" would be parsed as "FOR D = S TO P".
I found that amusing circa 1975.
In Fortran, it is a do-loop :)
Fortran has grown a lot over time. If somebody said it didn't have a do loop in 196X, I wouldn't be too surprised.
Really it’s just syntactic sugar, just use a goto.
FORTRAN IV, at least the version I used on the PDP-11 running RSX, did not have a DO-loop. Just IF and GO TO. But it did have both logical and arithmetic IF.
I don’t believe this.
The entire point of Fortran was being an effective optimizing compiler for DO loops.
FOR comes from ALGOL, in which, as far as I know, it was spelled:

for i := 0 step 1 until 9 do
Algol 58 had "for i:=0(1)9". C's for loop is a more general variant.
"For" for loop statements fits with math jargon: "for every integer i in the set [1:20], ..."
“for” is short for “for each”, presumably. `for i in 1..=10` is short for “for each integer i in the range 1 to 10”.
Yeppers. For a few more details, one might start here: https://en.wikipedia.org/wiki/Universal_quantification
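The "for each" reading also survives outside imperative languages; for instance, a small Haskell sketch using Control.Monad's forM_:

import Control.Monad (forM_)

main :: IO ()
main = forM_ [1 .. 10] $ \i ->  -- "for each integer i in the range 1 to 10"
         print i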
Also interesting to contrast this to invocation or application (e.g. to invoke or apply). I'm sure there are fair few 'functional dialects' out there!
Because, obviously, we stand out in a field of the code segment and shout the address we wish to jump to or push onto the stack. ;)
That's easy. The hard part is what do we call functions.
I seem to remember people used to say "call it up" when asking an operator to perform a function on a computer when the result was displayed in front of the user.
Huh, I gosub my functions ...
GLENDOWER: I can call spirits from the vasty deep.
HOTSPUR: Why, so can I, or so can any man; But will they come when you do call for them?
-- Henry the Fourth, Part 1
Why did the wild "call" that dog in Jack London's novel?
> Calling a function is like calling for a servant — a summoning to perform a task.
So we renamed our git branches from master to main...because of colonialism.
So what's the correct non-colonial word? ask, request, plea?
Some people here seem to like the word "summon". tsk, tsk, tsk
Because they're subroutines. And you call those. Often with a call instruction.
A good time to link this classic: https://youtu.be/xrIjfIjssLE?t=205
Also exists: "activate", from "activation record".
reminder that the author is a convicted rapist. for more details skip to the relevant section of the following article: https://izzys.casa/2024/11/on-safe-cxx/
You “call upon” the function to perform a task, or return a value as the case may be. Just as you may call upon a servant or whatever.
The answer is in the article. You "call" functions because they are stored in libraries, like books, and like books in libraries, when you want them, you identify which one you want by specifying its "call number".
Another answer also in the article is that you call them like you call a doctor to your home, so they do something for you, or how people are “on call”.
The “call number” in that story comes after the “call”. Not the other way around.
I'd like to point at CALL, the CPU instruction, and its origins. I'm not familiar with this, but it could reveal more than the programming languages do. The instruction has been present at least since the first Intel microprocessors and microcontrollers were designed.
I kept scrolling expecting to see this story:
> Dennis Ritchie encouraged modularity by telling all and sundry that function calls were really, really cheap in C. Everybody started writing small functions and modularizing. Years later we found out that function calls were still expensive on the PDP-11, and VAX code was often spending 50% of its time in the CALLS instruction. Dennis had lied to us! But it was too late; we were all hooked...
https://www.catb.org/~esr/writings/taoup/html/modularitychap...
The first Intel microprocessors were the 8008 and the 4004, designed in 01971. This is 13 years after this post documents the "CALL X" statement of FORTRAN II in 01958, shortly after which (he documents) the terminology became ubiquitous. (FORTRAN I didn't have subroutines.)
In the Algol 58 report https://www.softwarepreservation.org/projects/ALGOL/report/A... we have "procedures" and "functions" as types of subroutines. About invoking procedures, it says:
> 9. Procedure statements
> A procedure statement serves to initiate (call for) the execution of a procedure, which is a closed and self-contained process with a fixed ordered set of input and output parameters, permanently defined by a procedure declaration. (cf. procedure declaration.)
Note that this does go to extra pains to include the term "call for", but does not use the phraseology "call a procedure". Rather, it calls for not the procedure itself, but the execution of the procedure.
However, it also uses the term "procedure call" to describe either the initiation or the execution of the procedure:
> The procedure declaration defining the called procedure contains, in its heading, a string of symbols identical in form to the procedure statement, and the formal parameters occupying input and output parameter positions there give complete information concerning the admissibility of parameters used in any procedure call, (...)
Algol 58 has a different structure for defining functions rather than procedures, but those too are invoked by a "function call"—but not by "calling the function".
I'm not sure when the first assembly language with a "call" instruction appeared, but it might even be earlier than 01958. The Burroughs 5000 seems like it would be a promising thing to look at. But certainly many assembly languages from the time didn't; even MIX used a STJ instruction to set the return address in the return instruction in the called subroutine and then just jumped to its entry point, and the PDP-10 used PUSHJ IIRC. The 360 used BALR, branch and link register, much like RISC-V's JALR today.
Algol 60 implementation entailed debate over "call by value" and "call by name"; though I can't say I know the when/where origin of those precise phrases, it seems a good place to look.
The instruction comes from programming languages.