if AI can translate our English descriptions into working code, do we still need programming languages at all?
I think some people equate “source code” with “compiled code” or the “AST” (abstract syntax tree). The former contains so many features that are still part of the English language, such as function/variable/type names, source file and folder organization, comments, assets, the git repo with its log and history, etc. And the AI probably wouldn’t be as efficient if all those elements were not part of the training data. To get rid of such programming languages and have a pure AI programming language would require tons of training data that humans will never produce (a chicken-and-egg paradox).
> And sure, if you can express your intent clearly in English
I think it is underestimated how difficult this truly is.
And this will always remain uniquely human, because only the human truly knows their intent (sometimes).
I’ve had the AIs (à la Google), after I say “make me a script that does XYZ”, say “here you go”; if I ask whether it works, it tests it out and says “yep, it does”, but only I know whether it is actually doing what I intended. I often have to clarify my intent because I didn’t communicate well the first time. As we’ve all seen, even between humans, intent is not always well expressed.
There will always be a judgement made by a human: yes, that is my intent, or no, it is not.
But even in the old days of writing the “code” itself, most bugs came from not precisely saying what you wanted the program to do.
I think it’s correct to think of LLMs as compiling English to code, like C++ getting compiled to assembly.
The irony is that over the last decades we have come up with languages that try to remove the ambiguity. Some close to English, some not. The very specific "this is what I want you to do" languages. Almost like they are...describing the program you want to create. Might even wanna call them a programming language :D
Go has 25 keywords. C has 32. Famously verbose Visual Basic has over 200.
English has somewhere between one and two million words, depending on how you count.
Anybody who has worked in requirements management in any meaningful capacity understands this…
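To make the contrast concrete, here is a tiny Python check (just an illustration; the exact count varies a little by interpreter version) showing that a programming language's whole reserved vocabulary fits in one line of output:

```python
# A programming language's entire reserved vocabulary fits comfortably on the screen.
# English's vocabulary does not.
import keyword

print(len(keyword.kwlist), "keywords:", ", ".join(keyword.kwlist))
```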
When I was naive and young, I dreamed about some day making a programming language that was just English…
Then, around Y2K, I learned there was such a thing (more or less) from Apple. It implied learning a strict subset of English and the correct words and constructs… it was a pain to program in (at least for me).
More or less at that time, I started understanding that programming languages’ limitations, although at the beginning a necessity, were a feature. Each was already a very small subset of English, with a very specific, succinct, small grammar that was easy to learn (well, C++ stopped being learnable some years ago… but you get the point).
The idea of LLMs eliminating well-designed languages is hard for me to believe, just as the article states.
I think you are getting at the need for tiered layers of abstraction and constraint. Simultaneously considering all possible ways to solve a problem doesn't work for humans or the LLMs derived from our use of language. The repeated use of Domain Specific Languages (DSL) in the context of a general purpose programming language gets at this same need to constrain solution spaces within a reasonable boundary.
Once we have quantum LLMs, the need for intermediate abstraction layers might change, but that's very [insert magic here].
I’m kind of seeing a visual approach to programming as a programming language agnostic way of coding.
For that, I’m a big fan of flow-based programming as the agnostic part. For the implementation, I’m thinking of Node-RED, which is a visual implementation of flow-based programming.
To become programming language agnostic, I’ve started on the Erlang-Red project, which takes the frontend of Node-RED and bolts it onto an Erlang backend.
Eventually the visual flow code will be programming-language independent.
I haven’t had luck with visual programming. I miss how easy it is to refactor things, copy/paste/move… big projects, for me at least, tend to get messy. Anybody who has used LabVIEW for some years will probably agree.
I have dreamed about a programming language which would be basically text, but which the editor would present as a kind of flow chart. Maybe it can be done with any existing programming language? But I ran into some trouble with language extensions… maybe someday someone much smarter than me can implement that in a meaningful way.
Node-RED offers good abstractions for organising visual code:
- subflows for grouping common code that can be reused
- link nodes for defining "gotos" visually and also making code reusable
- flow tabs that group a set of nodes in a single tab and where link nodes can be used to jump into these tabs
- node packages that define new nodes but also encapsulate NodeJS code into a single visual element, i.e. a node.
Having said that, many textual tools and ideas are still missing their visual equivalents:
- how to do version control of visual code
- how to compare visual code between versions
- how to work collaboratively on the same code base
- what is refactoring in a visual program? moving and aligning is a form of refactoring that can lead to better understanding
- how to label visual elements in a consistent manner, i.e., coding conventions - what is the equivalent of the 4 versus 2 spaces/tabs debate in visual programming?
But just as many questions remain unanswered in the AI/vibe coding scene, so that doesn't mean visual programming isn't to be taken seriously; it just means it's not trivial.
I think visual programming should be taken more seriously and thought through. I like to say that we went from punchcards to keyboards and somehow we stopped there - when it comes to programming. At the same time we went from phones with operators to dial phones to push button phones to smart phones with touch screens. Why not the same for programming?
What makes programming so inherently keyboard bound?
Not a flow chart, but the closest thing in terms of UX improvement is live programming. Smalltalk and Lisp have that, as well as JS in the browser. Basically you can run any snippet you want and redefine almost everything. It's quite a different workflow from the default edit-compile-run.
That's one of the reasons people love Emacs. Once you've loaded the software, you can rebuild it from the inside out without ever shutting it down. You think of a feature, you build it directly, piece by piece, instead of creating a new project, etc.
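Python isn't Smalltalk, but even a plain interactive session gives a taste of that workflow: the running state survives while you redefine behaviour underneath it. A minimal sketch (names invented):

```python
# Live redefinition in a long-running session: no recompile, no restart.
def greet(name):
    return f"Hello, {name}"

people = ["Ada", "Alan"]
print([greet(p) for p in people])   # ['Hello, Ada', 'Hello, Alan']

# Later in the same session, redefine the function in place...
def greet(name):
    return f"Good evening, {name}"

# ...and the same live data immediately picks up the new behaviour.
print([greet(p) for p in people])   # ['Good evening, Ada', 'Good evening, Alan']
```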
Once upon a time, some people had the same dream.
And so, SQL was born, and is now used all across the globe to manage critical systems. Plain English that even a business person could understand.
Yes. But that is a limited DSL. For systems programming, it may be more difficult.
COBOL, FORTRAN, and ALGOL have been around since the 60s. Ada and Pascal since the 70s.
Really it's just the B-derived languages (C, C++, C#, Objective-C, D, Rust, Java, Go, etc) that aren't executable pseudo-code. And they weren't even mainstream originally, with Pascal and its ilk being the preferred syntax until the mid-to-late 80s.
I really do think computing took a huge step backwards when C became the default on home computers.
COBOL was the same, wasn't it?
Yeah. In other words, finding a way to program people with Python would have a bigger payoff than programming computers with English.
LLMs talk natural languages. They are fundamentally ambiguous (that's a feature).
Programming is done with programming languages, which are fundamentally unambiguous (that's a necessity).
Now, software in general is unfortunately pretty bad and full of bugs, so one could argue that LLMs may get to a point where they are no worse than bad software. But for anything important, we will always need an unambiguous language.
> if LLMs can translate programming languages seamlessly and accurately, then
If you want to accurately translate programming languages, you need to look into compiler technology. LLMs aren't that.
While I'm unsure about the efficacy of LLMs, I do yearn for language tooling that lets you 'Bring Your Own Syntax'. I'm someone who prefers TypeScript, Java, and Zig's syntax and genuinely, genuinely struggles with Go, Crystal, Kotlin's syntax. Whoever came up with := versus = needs to stub their toe at least once a day for the rest of time. But if I could write code for Go using a different syntax, I'd write way more Go code. I feel like that's what petlangs like Borgo (https://github.com/borgo-lang/borgo) and AGL (https://github.com/alaingilbert/agl) are doing: making Go less goey.
Natural language is vague enough that I find voice assistants frustratingly difficult to use. I just want one with a documented voice protocol that I can use to quickly and succinctly give commands.
Even humans can't use natural language to give succinct commands, hence the use of prescribed verbiage in air traffic control communication.
> There’s a classic joke that my brother loves: a software engineer’s partner asks him to go to the store and get milk, and if there are eggs, bring twelve! The engineer comes back with twelve bottles of milk. When asked why, he says “they had eggs”.
Notably, a modern LLM wouldn't make this mistake.
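Written out as code, the two readings stop being ambiguous, which is arguably the whole point. A small sketch (names and quantities are just for illustration):

```python
def engineers_reading(they_have_eggs: bool) -> dict:
    # "bring twelve" attaches to the milk: twelve bottles of milk, no eggs.
    return {"milk": 12 if they_have_eggs else 1, "eggs": 0}

def intended_reading(they_have_eggs: bool) -> dict:
    # "bring twelve" attaches to the eggs: one milk, a dozen eggs if available.
    return {"milk": 1, "eggs": 12 if they_have_eggs else 0}

print(engineers_reading(True))  # {'milk': 12, 'eggs': 0}
print(intended_reading(True))   # {'milk': 1, 'eggs': 12}
```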
It's not at all clear to me that LLMs are or will become better at translating Python → C than English → C. It makes sense in theory, because programming languages are precise and English is not. In practice, however, LLMs don't seem to have any problem interpreting natural language instructions. When LLMs make mistakes, they're usually logic errors, not the result of ambiguities in the English language.
(I am not talking about the case where you give the LLM a one-sentence description of an app and it fails to lay out every feature as you'd imagined it. Obviously, the LLM can't read your mind! However, writing detailed English is still easier than writing Python, and I don't really have issues with LLMs interpreting my instructions via Genie Logic.)
I would have found this post more convincing if the author could point to examples of an LLM misinterpreting imprecise English.
P.S. I broadly agree with the author that the claim "English will be the only programming language you’ll ever need" is probably wrong.
> In practice, however, LLMs don't seem to have any problem interpreting natural language instructions
I can think of a couple of reasons this may be the case.
1. There is a subset of English that you use unknowingly that has a socially accepted formal definition and so can be used as a substitute for programming language. LLMs have learned this definition. Straying from this subset or expecting a different formal definition will result in errors.
2. The level of detail in your English description is such that ambiguity genuinely does not arise. Unlikely, you would not consider that "natural language".
3. English is not ambiguous when describing program features, and formal definitions can be skipped. Unlikely, because the entire product owner role is built on the frequently exclaimed "that's not what I meant!".
I think it's #1, and I think that makes the most sense: through massive statistical data, LLMs have learned which natural language instructions cause which modifications in codebases, for a huge number of generic problems they have training data on.
The moment you do something new though, all bets are off.
Yeah, the example with the eggs isn't great, because an LLM would indeed get the correct interpretation. But the thing is, this is based on LLMs having been trained on the context. When an LLM has the context, it is usually able to correctly fill the gaps in vague English specifications. But if you are operating at the bleeding edge of innovation, or in depths of industry expertise that LLMs didn't train on, they won't be in a position to fill those blanks correctly.
And domains with less openly available training data are exactly where innovation, differentiation, and business moats live.
Oftentimes, only programming languages are precise enough to specify this type of knowledge.
English is often hopelessly vague. See how many definitions the word break has: https://www.merriam-webster.com/dictionary/break
And Solomonoff/Kolmogorov theories of knowledge say that programming languages are the ultimate way to specify knowledge.
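For reference, the usual Kolmogorov-complexity statement behind that claim: the complexity of an object is the length of the shortest program (in some fixed universal language) that produces it, so a programming language is the yardstick for how compactly knowledge can be specified.

```latex
% Kolmogorov complexity of a string x relative to a universal machine U:
% the length of the shortest program p that makes U output x.
K_U(x) = \min \{\, |p| \;:\; U(p) = x \,\}
```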
I suspect this runs into the blub paradox somewhat [0]. The purpose of a language is to teach you to think, and so what might be terse and idiomatic in one language might be so diffuse and convoluted in another as to be inscrutable.
To put it another way by mutating a well-known phrase, you might go from "there's obviously a bug" to "no obvious bugs".
It's like trying to find the flaw in a mathematical proof when you personally lack the concept needed to see it clearly.
So why shouldn't your editor/IDE be aware of your mental model, and present the world to you in a language tailored specifically to your level of abstraction (at that moment)? A pseudocode idiolang that might be a blend of concepts from Python, Go, Rust and Typescript as you need them.
And when you hit your limit in debugging a problem because it is too diffuse, you could ask the IDE to teach you the new concepts you need to view the code at a higher level of abstraction. You could imagine the UI presenting the same file side by side, with metaclassing on the one side and the alternative on the other, so you can drill into where the bugs might be hiding.
[0] https://paulgraham.com/avg.html
If LLMs really do translate English so well, somehow directly into “code”, then there are many more jobs on the line than just coders, and many tools could be rendered obsolete. In fact the whole chain, from customer to, well, back to customer, could be replaced: requirements elicitation, writing requirements, making a design, architecture, even tests (should we need them?).
As far as I know, and my experience confirms (maybe biased?), the whole chain of SW engineering is there precisely because English is not always optimal.
In fact, in a project I directed, the whole requirements management was basically a loop:
Repeat{talk to customer; write formal language; simulate; validate}until no change;
It was called a “runnable specification”, not my idea. It worked incredibly well.
> Repeat{talk to customer; write formal language; simulate; validate}until no change;
That's basically the agile manifesto, but with the flexibility of humans instead of policies defining the flow, and shorter iterations.
What are the simulate and validate steps?
The requirements were written in C, C++, Python, Verilog, and SystemC, so they could be run. The runs were sent to the customer, who would validate (approve) or reject.
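A minimal sketch in Python of what such a runnable specification can look like; the requirement, numbers, and names here are all invented for illustration. The requirement is stated as an executable check, the simulation exercises it, and the customer approves or rejects observed behaviour rather than prose:

```python
# Hypothetical runnable specification: "the controller shall keep the tank level
# between 40% and 60% under a constant inflow" -- written as code so it can be run.

def controller(level: float) -> float:
    """Toy controller under specification: drain proportionally above the setpoint."""
    return max(0.0, (level - 50.0) * 0.5)

def simulate(steps: int = 100, inflow: float = 2.0) -> list[float]:
    level, trace = 50.0, []
    for _ in range(steps):
        level = level + inflow - controller(level)
        trace.append(level)
    return trace

def requirement_holds(trace: list[float]) -> bool:
    # The requirement itself, formal enough to execute.
    return all(40.0 <= level <= 60.0 for level in trace)

print("requirement satisfied:", requirement_holds(simulate()))  # result goes to the customer
```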
That's pretty much how everyone is doing it, no? It's a cycle of (think of a new, improved model; code the model; study the results and compare with what we thought about the model). What's the alternative?
I think the issue is how people go about doing it. Often the model is clearly not improved, but it sweeps problems some people want hidden under the rug. Sometimes people are coding the wrong model, or coding the correct model wrongly, or coding the wrong model wrongly. Sometimes there's no study being done, only hopes and prayers. And comparing is only ever done by the customer.
I'm not sure; I don't see this being a likely future. AI is currently a 90% solution. This future requires 100%. Once we have that, a lot of new possibilities will emerge, which might make live formal language translation less interesting.
The gap between "AI is a 90% solution" and "100% required for production" is enormous. In my bubble, AI-generated code is maybe 70% useful, often less. The remaining 30% isn't minor polish—it's:
- Understanding system architecture constraints
- Handling edge cases AI doesn't know exist
- Debugging when AI-generated code breaks in production
- Knowing when AI's "solution" creates more problems than it solves
That last 30% is what separates engineers from prompt writers. And it's not getting smaller—if anything, it's growing as systems get more complex.
If this happens, that's great, but humans still need to understand any amount of code.
A few years ago in India, I saw a presentation where people were attempting to write programs in their mother tongue.
One such effort I found on GitHub is https://github.com/betacraft/rubyvernac-marathi (for Marathi, an Indian language).
The thing is, there are multiple computation models, and while they are equivalent, a fairly involved computation is needed to move from one model to the next. Then you've got a lot of patterns of abstraction and best practices (better known as paradigms) that are built on top of those models to get today's programming languages.
So something like Python is a fairly specialized language. Most of its concepts are not that easy to translate to another language, which may involve another set of specialized paradigms.
You will need to revert to a common base, which basically means unraveling what gives Python its identity, then rebuilding according to the other programming language's identity. And there are a lot of human choices in there, which will be the most difficult to replicate. The idiomatic way of programming is a subset of what is possible in the language, chosen just to enable faster reading between human developers.
So there's no language-agnostic programming, just as there are no agnostic computation models. It's kind of like how there's no agnostic hardware architecture: there's a lot of fairly involved work to have cross-platform programs, but that can work because the common platform is itself very low-level (the JVM and other runtimes).
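A small illustration of that "unravel, then rebuild" step: even a one-line idiomatic Python expression has to be lowered to a much more generic form before it can be re-expressed in a language with different idioms, and the generic form is exactly what a human reader of either language would never write.

```python
# Idiomatic Python: part of the language's "identity" in one line.
squares = {n: n * n for n in range(10) if n % 2 == 0}

# The same computation unravelled to a lowest-common-denominator form that
# almost any imperative language could express, but nobody would write by hand.
squares_lowered = {}
n = 0
while n < 10:
    if n % 2 == 0:
        squares_lowered[n] = n * n
    n = n + 1

assert squares == squares_lowered
```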
That was my first thought coming from SPA development. Like, is there even a meaningful translation from rendering logic written in a functional, declarative style to, e.g., object-oriented imperative Java? How many LOCs of C would be required to model a simple DOM operation?
Yes, everything is Turing complete and a translation can exist, but how would you make any sense of it as a reader?
As another commenter has put it [0], the point of specialized paradigms is to restrict what you can do and the data types available to you, because it's easier to think and act when things are specialized and distinct.
But in daily life, people are not accustomed to formalizing their thoughts to that extent, as there's a collective substrate (known as culture and jargon) that does the job for natural languages.
But the wish described in TFA comes from a very naive place. Even natural languages can't be reduced to a single set.
[0]: https://news.ycombinator.com/item?id=45482816
I occasionally wonder what the best standard would be for passing code (as opposed to data) between systems.
I keep coming back to System F or similar.
Please consent to 141 TCF vendors and 69 ad partners to read my blog post.
I don't think so
NoScript (and others) are your friends. I myself use uBlock with JS off by default. I wasn't even aware the site had something like this.
I find it strange when programmers push the narrative that "we won't need to code anymore, just write in English."
If that's true, what's your value? You don't understand client needs better than a product manager. You don't have an exceptional product vision. You're essentially making yourself obsolete.
Your expertise currently lies in building systems, handling edge cases, optimizing performance, and avoiding technical debt. If that can be expressed in English prompts, anyone can do your job—PMs, analysts, business people.
A programmer who can't write code is just someone with ideas. There are millions of those, and they're worth $0. Programmers who cheerlead the idea that "90% of code will be AI-written" are digging their own graves. In 5 years, they won't be replaced by AI—they'll be replaced by people who can both code AND use AI effectively.
>The AI handles the translation between the precise underlying code and the various language interfaces, ensuring that the semantics remain consistent across all views
This is not something AI will ever be good at, simply because it is also hard for humans to do.
Translating between programming languages is a very hard problem, because someone needs to fully understand both languages. Both humans and AI have trouble with it for the same reason and only monumental AI progress, which would have other implications, could change this.
Something as basic as addition varies wildly between languages, if you look at the details. And when it comes to understanding the details are exactly what matters.
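A concrete sketch of that point about addition, in Python: the same `+` on the same operands can legitimately mean different things depending on the language's integer model (Python's unbounded integers versus, say, a wrapping 32-bit int as in Java, or C where signed overflow is undefined behaviour).

```python
# The "same" addition, with different meanings under different integer models.
a, b = 2_000_000_000, 2_000_000_000

# Python: integers are unbounded, so this just works.
print(a + b)            # 4000000000

# Java-style 32-bit int (simulated): the result wraps into the signed 32-bit range.
# In C, signed overflow is undefined behaviour -- exactly the kind of detail
# that matters when "translating" between languages.
def add_i32(x: int, y: int) -> int:
    s = (x + y) & 0xFFFFFFFF
    return s - (1 << 32) if s & 0x80000000 else s

print(add_i32(a, b))    # -294967296
```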
Imagine asking an AI to translate your Rust codebase into pseudocode so that you can debug a lifetime annotation issue.
I'm basically right here, but I don't want to be debugging anything at all. Some things can't be expressed properly in every language. I think a bunch of languages (sadly the ones that LLMs are best at right now) just need to be abandoned completely. Every best practice that we had to pick up in order to parallelize things after Moore's law ended has to be universal and embedded into the language, and everything else has to be binned. Especially seeing as those practices made everything more modular and maintainable, and we were able to slip into microservices and serverless fairly effortlessly (and skulk back the same way).
I think we need languages optimized for isolation, without global anything and uncompilable without safety; and for readability. We need LLM oriented languages, meant to be read and not written. Like the author I think they'll look a lot more like Rust than anything else.
We should be programming them in structured natural language that expresses architecture, rather than details. Instead of application code, we also should be generating absurdly detailed and comprehensible test suites with that language, and ignoring the final implementation completely. The detailed architecture document, consisting of heavy commentary generated by the user (but organized and edited for consistency by the LLM in dialog with the user), and the test suite, should be the final product. Dropping it into any LLM should generate an almost identical implementation. That way, the language(s) can develop freely, and in a way oriented towards model usage, rather than having to follow humans who have to be retrained and reoriented after every change.
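A rough sketch of what "the test suite is the product" could look like; everything here (the Cart class, its behaviour, the numbers) is invented. The stand-in implementation only exists so the sketch runs; in the scheme described above it would be regenerated from the spec rather than maintained by hand.

```python
# Hypothetical spec-as-tests: these checks, plus the architecture notes around them,
# are the deliverable; any generated implementation that passes them is acceptable.
import unittest


class Cart:
    """Stand-in implementation so the sketch runs; it would normally be generated."""

    def __init__(self):
        self._items, self._refunds = [], 0

    def add(self, name, price, quantity):
        self._items.append((name, price * quantity))

    def refund(self, name, amount):
        self._refunds += amount

    def total(self):
        return max(0, sum(amount for _, amount in self._items) - self._refunds)


class CartSpec(unittest.TestCase):
    """Spec: a cart totals its line items, and the total can never go negative."""

    def test_empty_cart_totals_zero(self):
        self.assertEqual(Cart().total(), 0)

    def test_total_sums_line_items(self):
        cart = Cart()
        cart.add("milk", price=250, quantity=1)
        cart.add("eggs", price=30, quantity=12)
        self.assertEqual(cart.total(), 250 + 30 * 12)

    def test_refunds_cannot_drive_total_negative(self):
        cart = Cart()
        cart.add("milk", price=250, quantity=1)
        cart.refund("milk", amount=999)
        self.assertGreaterEqual(cart.total(), 0)


if __name__ == "__main__":
    unittest.main()
```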
So maybe LLM-agnostic programming is what I'm asking for? I want LLM interactions to focus on making my intentions obvious, and clarifying things to whatever degree is necessary so it never has to really think about anything when generating the final product. I want the LLMs to use me as a context-builder. They can do the programming. Incidentally, this will obviously still take programmers because we know what is possible and what is not; like a driver feels their car as an extension of their body, although they're communicating with it through a wheel, three pedals, and a stick.*
Right now, LLMs are asking me what I want them to do too much. I want to tell them what I want them to do, and to have them probe the details of that until there's no place for them to make a mistake. A "programmer" will be the one who sets the program.
[*] Imagine the alternative (it's easy) of an autonomous car that says "Do you want to go to the grocery store? Or maybe visit your mother?" Stay out of my business, car. I have an organizer for that. I'll tell you where I want to go.
In all likelihood, LLMs will converge towards a few hardware-efficient PLs and will ignore all the others.
Debugging a program will become like debugging your relationships - you argue until one side gives up or both are exhausted!
> No matter how good AI gets at generating code and even at debugging it, we’ll still need to understand what that code actually does when it doesn’t work as expected. And for that, we need programming languages. Not necessarily for writing the initial code, but for reading, tracing, and reasoning about it when things go wrong.
I'm not sure. Imagine that each CPU instruction or group of instructions is mapped to a MIDI sound and that you slow down the stream of beeps enough that you can hear the "song" of the program. I wonder if you wouldn't be able to start hearing error states and distinguishing when they happened.
Meaning that I think we do need some way to debug, but I'm not sure it has to be text / programming languages, and if it's an AI doing it, text also doesn't seem like the most efficient way to do it, information-density-wise.
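A toy sketch of that idea (the instruction categories and the note mapping are made up): bucket instructions into categories, assign each a MIDI note, and an anomalous trace produces a noticeably different "melody" than a healthy one.

```python
# Toy "program song": map instruction categories to MIDI note numbers and compare traces.
NOTE = {"load": 60, "store": 62, "alu": 64, "branch": 67, "trap": 72}  # arbitrary mapping

def song(trace):
    return [NOTE[op] for op in trace]

healthy = ["load", "alu", "alu", "store", "branch"] * 3
faulty  = ["load", "alu", "trap", "trap", "branch"] * 3

print(song(healthy))  # a repeating, consonant pattern
print(song(faulty))   # the high 'trap' note stands out immediately
```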
I'm not sure why that is even a question? Should mathematics go back to Fermat style natural language?
“Cubum autem in duos cubos, aut quadratoquadratum in duos quadratoquadratos, et generaliter nullam in infinitum ultra quadratum potestatem in duas ejusdem nominis fas est dividere: cujus rei demonstrationem mirabilem sane detexi. Hanc marginis exiguitas non caperet.” (Roughly: “It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general any power higher than the second into two like powers. I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.”)
Code and math notations help you think. Notations aren't just for the computer.