> So instead of writing three applications, you write it in a special programming language, which is basically English, which describes how you want to see this application in a very specified way, and then AI agents, together with JetBrains tooling, will generate the code of all of these platforms
I take it to mean no more Kotlin Multiplatform, if AI will just generate platform-specific code?
Maybe this will be a good way for users to communicate with LLMs, but I wonder if it would be better for LLMs to generate code in a more disciplined language: something like Eiffel, with its preconditions, postconditions, and invariants, or Lean/Rocq for provable correctness. Then a rigorous compiler can check that the LLM emitted what it promised. The verbosity is unpleasant for conventional human programmers, but it's almost a non-issue for an LLM.
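As a toy illustration of the contract idea (sketched in Python rather than Eiffel, with a hypothetical `contract` decorator, since plain Python has no built-in design-by-contract support), the claims an LLM would have to satisfy might look like:

```python
def contract(pre=None, post=None):
    """Hypothetical design-by-contract decorator (illustration only):
    checks a precondition on the arguments and a postcondition
    relating the result to the arguments."""
    def wrap(fn):
        def inner(*args):
            if pre is not None:
                assert pre(*args), "precondition violated"
            result = fn(*args)
            if post is not None:
                assert post(result, *args), "postcondition violated"
            return result
        return inner
    return wrap

@contract(pre=lambda xs: len(xs) > 0,
          post=lambda r, xs: r in xs and all(r >= x for x in xs))
def largest(xs):
    return max(xs)
```

In Eiffel or Lean these claims would be checked statically; here they only fail at runtime, but even that gives the tooling something mechanical to verify the generated code against.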
I agree. I feel like the more strictly defined the problem space is, and the more guardrails that can be added and checked, the better the LLM will do. The data derived from such constructs also doubles as additional context for the LLM.
I haven’t had much time to really experiment with it, but I feel that the TypeScript XState library would in theory be a great fit for LLMs, as it lets them break a problem into states and state transitions and then focus on one state at a time.
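XState itself is a TypeScript library; as a rough, library-free sketch of the idea in Python, the "states plus transitions" shape that a model could fill in one state at a time might look like:

```python
# Not XState, just the shape of the idea: explicit states and
# transitions, so each state's behaviour can be handled in isolation.
TRANSITIONS = {
    ("idle", "FETCH"): "loading",
    ("loading", "RESOLVE"): "success",
    ("loading", "REJECT"): "failure",
    ("failure", "RETRY"): "loading",
}

def step(state, event):
    """Return the next state, or stay put if the event doesn't apply."""
    return TRANSITIONS.get((state, event), state)
```

The appeal for an LLM is that the transition table is small, declarative, and checkable, so invalid flows can be caught without reading any of the per-state logic.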
LLMs only predict the next word; I don't know whether a language with provable correctness or dependent types is an advantage, especially when training data is scarce.
Did anyone manage to cross the chasm between floats and reals when doing proofs about algorithms in actual code? It might be a skill issue, but I'm always stuck on reals not being decidably comparable.
That's a good question, and I certainly don't know the answer. Floats are a bit messy, but I would guess it's solvable.
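The gap is easy to demonstrate: identities that hold for the reals (and that a proof would lean on) fail for IEEE 754 floats, which is part of why proofs about float code are usually stated as error bounds rather than exact equalities. A quick Python check:

```python
import math

# Associativity holds for real numbers but not for IEEE 754 doubles,
# so a proof about reals does not transfer directly to float code.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))   # False

# The usual bridge: prove |computed - ideal| <= eps instead of
# exact equality, which is what a tolerance comparison mirrors.
print(math.isclose((a + b) + c, a + (b + c)))   # True
```

Proof assistants typically keep `Real` and `Float` as distinct types for exactly this reason, with lemmas bounding the rounding error of each operation.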
They keep calling it a language (Kotlin derivative), but then the CTO refers to it as "basically English", with maybe "some semantics".
Are we just talking about prompting with some enforced structure, or is it a programming language?
Isn't this more or less what every procedural programming language is? It's especially obvious with examples like Apple's Objective-C APIs ([object doSomethingAndReturnATypeWith:anotherObject]), Cobol (a IS GREATER THAN b), or SQL (SELECT field FROM table WHERE condition), but even assembly is a mnemonic English-ish abstraction for binary.
I'm intrigued by the idea, but my major concern would be that moving up to a new level of abstraction would even further obscure the program's logic and would make debugging especially difficult. There's no avoiding the fact that the code will need to be translated to procedural logic for the CPU to execute at some point. But that is not necessarily fatal to the project, and I am sure that assembly programmers felt the same way about Fortran and C, and Fortran and C programmers felt the same way about Java and Python, and so on.
I remember working on a project (something like a Java IDE) with JRuby inside and a natural-language-like DSL for the end user.
I'd love it if JetBrains instead worked on getting IntelliJ not to hog so many resources and become unresponsive on huge projects.
They should use the LLM to do the refactoring for them.
Only half joking, though. Dogfooding is big at the place I work at and they should do it too.
It’s a huge org. Pretty sure they can chew gum and walk down the sidewalk at the same time.
Having been a customer for more than a decade, I've noticed an inverse correlation between their org size and the quality of their products.
Having worked across startups and international consulting companies: usually there is a direct correlation between org size and the use of cheaper, less-skilled offshore development teams.
Maybe they need to gradually refactor/replace components in something other than a JVM language. And not Electron (JS) either. ;o)
Is there much meaningful space left between Python (for example) and English (or other written language of your choice)?
LLMs are pretty good at compiling clear descriptive or prescriptive instructions in English down to a programming language.
This sounds to be in the same area as their post on the OCaml forum last year https://discuss.ocaml.org/t/a-next-generation-ide-for-ocaml/... , though it seems they are going for a natural language approach instead of something like AST editing (which is what I had in mind when reading their original outreach message).
I admit that I'm kind of intrigued. On the one hand, this is a standard vaporware concept that people have promised time and again almost since the dawn of programming. On the other hand, LLMs that write and maintain code currently have to do it using tools designed for humans, and that's probably leaving some amount of value on the table.
Right now I think I'd bet that whatever this is won't ultimately impress me much, but not at very extreme odds.
"A high-level language that's basically English."
COBOL, you're inventing COBOL.
I find it so ironic that, for all the complaints COBOL has drawn throughout its history, advocates of every programming language now spend all day typing whole books into a tiny chat window.
(or Smalltalk or AppleScript or the pantomime "language" of Cucumber)
PHB ignores observation by Dilbert and approves $40M expenditure to reinvent the wheel anyhow.
Maybe Smalltalk?
While I think the idea of LLM-based tooling and languages co-evolving is interesting, from this limited description this doesn't seem like a helpful direction.
> “So instead of writing three applications, you write it in a special programming language, which is basically English, which describes how you want to see this application in a very specified way, and then AI agents, together with JetBrains tooling, will generate the code of all of these platforms,”
Is the process of generating code for each platform from the high-level specification deterministic, predictable, and obedient to some natural invariants? Or is it stochastic and unpredictable? If Alice publishes her open-source project with the specification code, and Bob has access to a slightly different set of models at a later date, will Bob be able to reproduce the same generated artifacts that Alice did?
If you _can_ make everything deterministic and well-behaved, does it need to involve AI agents at all? Or does this effectively turn into a DSL which happens to be English-like, plus a code-generation tool?
I'd always assumed that was what MPS was about, but I guess that's about developing more precise DSLs than having one single language to rule them all. But I'm sure that approach will work this time by just throwing AI and gigawatts worth of handwaving at it :-|
When I was young, I was fascinated by the paper "Language-Oriented Programming: The Next Programming Paradigm"[1] and was eager to see how the future would unfold. Decades later, I finally decided that perhaps just using Python—and making a good living—is fine by me.
[1] https://resources.jetbrains.com/storage/products/mps/docs/La...
So we've come full circle now: from "AI will replace programmers" to programming the AI to program =)
And back round again when Scotty is talking into a mouse.
Roc [1] is the language I think strikes the best trade-off among high-level instructions, design simplicity, performance, and pragmatism for handling real-world tasks.
It started as a purely functional language, but the authors are adding some escape hatches, such as imperative for loops, for cases where the functional approach is too complex (e.g., implementing quicksort). And it is pragmatic like Go when it comes to simplicity, compiling fast, and running fast.
AI programming and a compiler that catches bugs through good language design make a great match.
[1] https://www.roc-lang.org/
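Roc's own syntax aside, the escape-hatch trade-off described above shows up in any language. Here is the classic quicksort contrast, sketched in Python: the pure version is clear but allocates at every level, while the in-place version needs mutation and loops.

```python
def qsort_functional(xs):
    """Pure version: easy to reason about, but copies sublists
    at every level of recursion."""
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (qsort_functional([x for x in rest if x < pivot])
            + [pivot]
            + qsort_functional([x for x in rest if x >= pivot]))

def qsort_in_place(xs, lo=0, hi=None):
    """Imperative version (Lomuto partition): mutates xs in place
    with no extra allocation, the case where an imperative
    for-loop escape hatch helps."""
    if hi is None:
        hi = len(xs) - 1
    if lo >= hi:
        return
    pivot, i = xs[hi], lo
    for j in range(lo, hi):
        if xs[j] < pivot:
            xs[i], xs[j] = xs[j], xs[i]
            i += 1
    xs[i], xs[hi] = xs[hi], xs[i]
    qsort_in_place(xs, lo, i - 1)
    qsort_in_place(xs, i + 1, hi)
```

A purely functional language has to encode the second version through some effect mechanism, which is the complexity Roc's escape hatches are meant to avoid.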
My guess is that useful maintenance principles for human-authored code, such as avoiding reimplementing the same logic multiple times, will remain useful for LLMs if and when they take a greater role in maintenance. In which case Kotlin Multiplatform wouldn't be going anywhere.
Something I've been thinking about: language models aren't good at large software, but they're good at little demos. If you could build software out of little boxes, sort of like ActiveX controls back in the day, or microservices, the model could focus on its own little microservice or object, and then you'd build the software by connecting all of them.
The problem with this sort of approach is that the complexity moves to the interaction between microservices.
And in fact, software is made out of little interacting boxes already. They’re called “functions” and “classes.”
"So-called "natural language" is wonderful for the purposes it was created for, such as to be rude in, to tell jokes in, to cheat or to make love in (and Theorists of Literary Criticism can even be content-free in it), but it is hopelessly inadequate when we have to deal unambiguously with situations of great intricacy, situations which unavoidably arise in such activities as legislation, arbitration, mathematics or programming." -Dijkstra
JetBrains product quality has really gone downhill.
>> So basically, you write the design doc in English, maybe with some semantics, with some abstract paragraph, some other things which might help
Isn’t that literate programming, or something in a similar vein?
They’re going to reinvent HyperTalk and AppleScript?
I was thinking about Delphi myself.
Yeah, I think that JetBrains might be able to do some really cool stuff with Object Pascal. I don't know if they have the chops to take on cross-platform UI like what Lazarus did with LCL, but it would be awesome.
Delphi is still around, and I really don't see what cool stuff that would be.
Other than the expectation JetBrains would make it cheaper than Embarcadero.
Well, they could do a better Delphi with Kotlin Native (still waiting for that to actually happen) instead of relying on Android and the JVM, or trying to compete with React Native on iOS.
Prove that they actually don't need Java, as many Kotlin advocates parrot all the time.
Didn't realize English was a derivative of Kotlin. Wow, I achieved enlightenment!
Inform 7 ?
I wrote about why higher level abstractions may not work all the time.
https://simianwords.bearblog.dev/so-you-want-to-create-a-new...
TL;DR the overhead in learning the new abstraction and dealing with the inevitable edge cases (known or unknown) must be taken into account.
All abstractions sound good until you account for these things.
The new language can work if the edge cases of this language are known and minimal.
>“So instead of writing three applications, you write it in a special programming language, which is basically English, which describes how you want to see this application in a very specified way, and then AI agents [...] will generate the code of all of these platforms"
They said this was going to happen over 60 years ago, and the end result was COBOL.
So, vibe coding
This is probably just a marketing gimmick with a unique angle, born from aspirations to sell you a subscription-based SaaS/PaaS coding agent that is supposedly not like every other agent out there.
Also, I do not believe in magic carpets.
Also, I do not want yet another, even higher abstraction level that will come with even more layers of bloat.
Work on making Junie a viable competitor to Claude Code instead. While I love its accuracy, it's the equivalent of Claude Code launching a sub-agent per file: it takes just as long and probably burns tokens similarly, except there's no way to disable that approach in Junie.
I wish they would focus on providing a better IDE experience, fixing existing bugs (and not shipping new ones), and keeping up with support for their existing tooling.
I have difficulty determining what this company wants to deliver. I know what I actually give them money for.
I have noticed that the quality went downhill a few updates after they introduced LLM features. My guess is that they're dogfooding their LLM features and suffering the consequences: LLMs don't reason about code as well as good programmers do.
I share this sentiment - I often get the sense that they struggle to prioritize their development efforts appropriately.
It baffles me that they don’t offer a way of creating my own JetBrains IDE from the pile of features available across their IDEs and other tools. I pay for the “all products pack” and have started to question whether it’s worth it. Even if I am working in an IDE like PyCharm, it’s not much work to also work with a pre-compiled language in the same project. The distinction between the different IDEs grows smaller year over year.
They have plugins for IntelliJ for almost all of the languages they have an IDE for, so you can do what you are asking for if I am understanding you correctly.
The only exception off the top of my head is .NET support, which isn’t available as an extension.
This sentiment really resonates with me.
There are a few too many lingering bugs in their product offerings, and they are branching out instead of cutting back and improving what's already there. VS Code is eating their lunch; for a while they seemed focused on competing with the things VS Code does, and now on the AI race (like all companies).
WebStorm still randomly freezes when my MacBook goes to sleep. This has only been happening for the last year or so.
Exactly. I was testing out betas of their newer IDE (Fleet) about ~2 years ago and was really impressed. The AI stuff is really distracting people.
The rate of Rider crashes on OSX skyrocketed recently, around the beginning of this year. Super disappointing and frustrating.
But aren't you excited that with a new higher-level language you can have higher-level disappointments and frustrations?
You're absolutely right!
About 80% of the code in our repos is language or framework baggage that delivers fuck-all ROI, and LLM leverage is making about 25% of what's left go away, badly and without any determinism.
Whether or not this is the solution, I don't know, but it feels like the right direction.
Please don’t have it run in a virtual machine, or at least not the JVM. All of JetBrains' products would be 10x better if not written in Java. Native is best.
Native AOT compiled code is not necessarily faster than JIT compiled code. The Java JIT engine can do a lot of optimisations that would not be possible with AOT compilation. In the end it's compiled down to machine code anyway.
Still waiting for them to prove how Kotlin Native is so much better than Java.
Ruby?
> the object-oriented architecture
Stopped reading here. Good luck with your shared mutable state! Enjoy!
Seems to work alright for all major GUI platforms in use in the market.
Not true for React, SwiftUI, Jetpack Compose, etc.
Everything else was built in the '90s or early '00s, which was peak OOP.
JavaScript is a prototype-based OOP language, and the DOM that any Web UI depends on is OOP.
All commercial browsers are written in a mix of OOP JavaScript and C++; nowadays they might have some tiny percentage of Rust, which is likely using traits and dynamic dispatch anyway.
Kotlin runs on top of an OOP-based VM and uses an OOP class library. Compose has plenty of classes in its implementation and its Android Studio interop.
SwiftUI infrastructure relies on OOP code written in Objective-C and macOS frameworks.
People are usually so eager to bash OOP that they overlook the details.
So? Is OOP widespread? Yes of course it is.
Besides that, what is really the point you are making? That you can write software using Smalltalk or CLOS or any of their bastard offspring? Yes, you can, and people did and still do. So what?
The point was that all your examples are flawed: there is no mainstream UI that doesn't have OOP concepts powering its execution, whether via the programming language, the standard library, or the OS APIs used across the stack.
You can leverage objects in an immutable and shared-nothing type of way. These ideas are not mutually exclusive. https://www.destroyallsoftware.com/screencasts/catalog/funct...
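As a concrete sketch of that point (in Python, just to illustrate the style): objects whose "mutators" return new values instead of changing shared state.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)  # instances are immutable; mutation raises
class Point:
    x: float
    y: float

    def moved(self, dx: float, dy: float) -> "Point":
        # OOP-style method, but it returns a new value
        # rather than mutating shared state.
        return replace(self, x=self.x + dx, y=self.y + dy)

p = Point(1.0, 2.0)
q = p.moved(1.0, 0.0)   # p is untouched; q is a distinct value
```

You keep the encapsulation and dispatch of objects while giving up the shared mutable state that the earlier comment objected to.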
Yes, you can, and I do often. Would I build my new high-level language around OOP? Nope.
That is another waterfall fantasy paired with AppleScript/UML delusions. Perhaps they are seeking vibe investors, or they'll indeed offer a toy IDE for vibe coders alongside their real product.