As someone in ML who's interested in performance, I'm keen for Mojo to succeed - especially the prospect of mixing GPU and CPU code in the same language. But I do wonder if the changes they're making will dissuade Python devs. The last time I booted it up, I tried to do some basic string manipulation just to test stuff out, but spent an hour puzzling out why `var x = 'hello'; print(x[3])` didn't work, and neither did `len(x)` (turns out they'd opted for more specific byte-vs-codepoint representations, but the docs contradicted the actual implementation).
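To be fair, the underlying distinction is real; Python just resolves it by splitting codepoints (`str`) from raw bytes (`bytes`), so plain indexing and `len()` have one unambiguous meaning per type. A quick sketch of that split in standard Python (nothing Mojo-specific, just the behavior I expected coming in):

```python
# Standard Python: codepoints (str) and raw bytes (bytes) are separate types.
s = "héllo"
b = s.encode("utf-8")

print(len(s))  # 5   - number of codepoints
print(len(b))  # 6   - number of bytes ('é' is two bytes in UTF-8)
print(s[1])    # 'é' - indexing a str yields a codepoint
print(b[1])    # 195 - indexing bytes yields a raw byte value (0xC3)
```

Making you choose a view explicitly, as Mojo apparently does, is defensible, but only if the docs and the implementation agree on which one you're getting.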
Hopefully they get Mojo to a good place for more general ML, but at the moment it still feels quite limited - they've actually deprecated some of the nice builtins they had for Tensors etc... For now I'll stick with JAX and check in periodically, fingers crossed.
> We have committed to open-sourcing Mojo in Fall 2026.
https://docs.modular.com/mojo/faq/#will-mojo-be-open-sourced
Advertising prominently with "AI native" seems necessary today, at least for some folks. To me, that's kind of off-putting, since it doesn't really say anything.
Can any of the AI enthusiasts here explain why, or what is meant by
> As a compiled, statically-typed language, it's also ideal for agentic programming.
It's been really interesting to see all the desperation on hero pages for all these products and services ever since AI came into prominence. I think the funniest for me was opening the IBM DB2 product page and seeing it labeled as an 'AI database'. Hysterical.
> why, or what is meant by

More errors caught at compile time means an agent can quickly check their work statically without unit and other tests.
I don't know what they meant by it, and I share your opinion that "AI native" is somewhat meaningless for a programming language like this.
Regarding compilation and static typing, it's extremely helpful to be able to detect issues at compile time when doing agentic programming. That way, you don't run into as many problems at runtime, which of course the agent has more difficulty addressing. Unit tests can help bridge the gap somewhat but not entirely.
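As a minimal sketch of that feedback loop, here's the idea with Python type hints and a checker like mypy standing in (the same principle applies to any statically checked language, Mojo included):

```python
# check_me.py - a static checker flags the bad call below before anything
# runs, which is exactly the kind of cheap, fast signal an agent can loop on.
def scale(values: list[float], factor: float) -> list[float]:
    return [v * factor for v in values]

result = scale([1.0, 2.0, 3.0], "2")  # mypy: argument has incompatible type
```

Running `mypy check_me.py` fails instantly, while plain dynamic execution would only raise a TypeError once that line is actually reached.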
What's not stated on their website is that Mojo is likely a bad choice for agentic programming simply because there isn't much Mojo training data yet.
I've recently used Claude to write quite a bit of Mojo (https://github.com/boxed/TurboKod) and I can say quite confidently that Claude writes deprecated Mojo syntax a lot, but the compiler tells it and it fixes things pretty fast too. The only reason I notice is that I watch Claude while it's working and see the compilation warnings (and sometimes Claude is lazy and skips compiling, so I have to catch that myself).

But yeah, getting it to write Mojo 1.0 code reliably, even with that error feedback, might take a new training round, so the next or even next-next models.
I don’t really consider myself an “AI enthusiast”, but I do use it.
So, agents tend to do better the more feedback they can get. Type checking is pretty good for catching a bunch of dumb mistakes automatically.
The point is that more hints for the agent is better, most of the time.
Because a coding agent (when instructed well) will try to make a piece of code work in a loop. Static typing and compilation help in that process (no more undefined variables discovered at runtime, for instance). But that’s not bulletproof at all, as most of us know.
I’m relatively new to programming, but I wish they had used a functional language syntax rather than an object-oriented one as the basis for Mojo.

From my experience, AI revolves a lot around building up function pipelines, computing their derivatives, and passing tons of data through them, which composability and higher-order functions from functional programming make a breeze to describe.

I also feel that fields other than AI are moving towards building up large functional pipelines to produce outputs, which would make Mojo suitable for those fields as well. I’m building in the CAD space, for example, and I’d love to use a “functional Mojo” language.
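To make the pipeline point concrete, here's roughly what that style looks like in JAX (just a minimal sketch of composing plain functions and differentiating the whole pipeline; none of this is Mojo):

```python
import jax
import jax.numpy as jnp

# A model is just a composition of ordinary functions...
def predict(w, x):
    return jnp.tanh(x @ w)

def loss(w, x, y):
    return jnp.mean((predict(w, x) - y) ** 2)

# ...and the derivative of the whole pipeline is itself another function.
grad_loss = jax.grad(loss)  # gradient of loss with respect to w

w = jnp.ones((3,))
x = jnp.ones((4, 3))
y = jnp.zeros((4,))
print(grad_loss(w, x, y))  # same shape as w
```

The gradient is a first-class function you can compose further (jit it, vmap it), which is exactly the property I’d want a “functional Mojo” to keep.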
I'm in the same boat: this would have put it in the family of Lisp, the language that neural nets and AI were first built with decades ago. Coming from the awesome Swift project (which, to their credit, was a massive undertaking to convince Apple execs of), I was still hoping for a functional-language approach, something like Haskell with the practicality of Clojure.
When I first heard about Mojo I somehow got the impression that they intended to make it compatible with existing Python code. But it seems like they are very far away from that for the foreseeable future. I guess you can call back and forth between Python and Mojo but Mojo itself can't run existing Python code.
They also advertised a 36,000x speedup over equivalent Python, if I remember correctly, without at any point clarifying that this could only be true in extreme edge cases. Feels more like a pump-and-dump crypto scheme than an honest attempt to improve the Python ecosystem.
Well... the article made self-deprecating fun of the clickbait title, showed the code every step of the way, and actually did achieve the claim (albeit with wall-clock time, not CPU/GPU time).

And it wasn't "equivalent Python", whatever that means: they did loop unrolling and SIMD and stuff. That can't be done in pure Python at all, so there literally is no equivalent Python.
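The closest you can get is delegating the inner loop to NumPy, which is compiled C underneath, at which point you're no longer measuring Python. A rough sketch of the contrast (illustrative only, not the article's actual benchmark):

```python
import numpy as np

# "Pure Python": the interpreter dispatches one multiply-add at a time.
def saxpy_pure(a, xs, ys):
    return [a * x + y for x, y in zip(xs, ys)]

# NumPy: the same loop runs in compiled, vectorized C under the hood.
def saxpy_numpy(a, xs, ys):
    return a * xs + ys

xs = np.arange(1_000_000, dtype=np.float64)
ys = np.arange(1_000_000, dtype=np.float64)
out = saxpy_numpy(2.0, xs, ys)  # orders of magnitude faster than the list version
```

And explicit SIMD widths or manual unrolling, like the Mojo post used, have no pure-Python spelling at all.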
In their original pitch that was definitely part of it: take Python code, add type hints, get a big speedup. As they've built it out it seems to have diverged.
From the site:
Python interop

> Mojo natively interoperates with Python so you can eliminate performance bottlenecks in existing code without rewriting everything. You can start with one function, and scale up as needed to move performance-critical code into Mojo. Your Mojo code imports naturally into Python and packages together for distribution. Likewise, you can import libraries from the Python ecosystem into your Mojo code.
That was what was originally advertised: they wanted to be to Python what Kotlin is to Java. They quickly turned tail on this.

That, and the not-completely-open-source development model, is what has always felt very vaporware-y to me.
Isn't that achieved by Codon?
Their communication had me try to run some very simple Python code (reading files line by line), assuming it would of course just run, which didn't work at all.

For me this was a big disappointment, and I wonder how much this has backfired with developers.
Really the only thing good about Python is its ecosystem.
But that ecosystem is really good.
Sadly for them, Nvidia didn't stand still in the meantime and created the next generation of CUDA: CuTile for Python, and soon for C++, via CUDA Tile IR (using a similar MLIR-based compiler stack).

Even though it's not portable, it will likely see far greater usage than Mojo just by being heavily promoted by Nvidia, integrated into dev tools, and working alongside existing CUDA code.
Tile IR was more likely a response to the threat of Triton than to Mojo, at least from the POV of how easy it is to write a decently performing LLM kernel.
People keep mistaking Mojo for merely a nicer syntax for writing GPU code, and so imagine Nvidia's Python frameworks already do the same thing. But... would CuTile work on AMD GPUs and Apple Silicon? Whatever Nvidia does will still have vendor lock-in.
Interesting. How big an impact is CuTile having?
I was excited when Mojo launched and thought it might grow big quickly, but I don't see much traction. The pitch is compelling. What could be the issue?
As someone who would have strong reasons to invest time in Mojo (a simple, highly performant language for implementing bioinformatics scripts), I would say primarily the worry that development might be too tied to Modular, the startup behind it, which might eventually pivot to other priorities.

One would want to see either a strong community build up around it, or really hard evidence of a long-term commitment to the language from Modular. And the latter will take a long time to be assured of, I think.

Also, editing tools need to catch up before a language with this much new syntax can see very wide adoption.
I have no time for or interest in proprietary compilers. The standard library is Apache 2, but the license link on their home page is to a long terms of service thing. I’d like to be wrong because it looks interesting. Until then, this doesn’t exist in my world.
I bet that’s true for a great many people. There are too many wonderful FOSS languages to bother with one you can’t fix or adapt or share.
Mojo is still NOT open source (the standard library is, but not the compiler). Open source is table stakes for a modern programming language.
When it was announced, it was not generally available for everyone to try out. There was a waitlist phase.
I do wonder if Mojo was a great idea just a little too late to the party. Porting ‘prototypes’ from Python to lower-level languages is fairly trivial now with LLMs.
Right now the majority of beginners start programming with a high-level language, say Python or JavaScript, then pick up C/C++/Rust/Zig etc. for more advanced system-level tasks.

If Mojo succeeds, it could be the one language spanning those levels, while simplifying heterogeneous hardware programming.
I remember reading about this 4 years ago as the new Chris Lattner project and was super excited, though a little skeptical.
I think that nowadays, with vibe/agentic coding, high-performance Python-like languages become ever more important. Directly using AI agents to code in, say, C++ is painful, as the verbose nature of the language often causes the context window to explode.
Not to mention that C++ basically can't be made to be safe. But Rust is probably fine.
Am I old or remembering this wrong... didn't Zuck write the first iteration of Facebook in PHP, and then spend millions to hire people to write something that converted the code to C++?
https://en.wikipedia.org/wiki/Hack_(programming_language)
Does it have the indentation thing? That would be a no-go for a lot of people.
Only incredibly inexperienced people think indentation in Python is a problem.
I have tons of experience with Python, possibly more actual work experience than with any other language, and I do think the indentation is a bit of a problem. Obviously not a huge one, but still something I wish they had done differently. I like to have a robust format-on-save wired into my editor, and you just cannot quite have that when indentation is meaningful.
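A minimal example of why format-on-save can't fully help: both functions below are valid Python, differ only in the indentation of one line, and mean different things, so no formatter can safely normalize one into the other:

```python
def sum_all(items):
    total = 0
    for x in items:
        total += x
    return total      # runs after the loop finishes: sums everything

def sum_first(items):
    total = 0
    for x in items:
        total += x
        return total  # runs inside the loop: returns after one element
```

With braces, block structure is stated independently of layout, so a formatter can reflow freely; with significant whitespace, the indentation *is* the structure, and the tool has to trust whatever you typed.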
Very bold of them expecting people to use a language with a closed source compiler in the 2020s.
If you're looking for a language that aims to solve the "two-language problem" like Mojo, but want something more open, more mature and less influenced by VC funding, check out Julia: https://julialang.org/
I used Julia a lot when I was studying statistics (which I dropped out of) back in 2015, but I recently (like last weekend) came back to it to write a prototype of a supervised learning model, and I have to say, coming back to it was pure joy. And my model prototype was indeed fast enough for me.
Now I will probably rewrite the model in Rust if I want to do anything with it (mostly for the WebAssembly target, as I want this thing to run in browsers), but I will for sure be using Julia for further experimentation. Lovely language.
From what I understand, the goal for now is not to get people to use it, but for enthusiasts to try it.
What enthusiast worth getting feedback from is going to tinker with a locked up language?
You'd be surprised. Anyway, the compiler will be open-sourced with the 1.0 release; that's why reaching beta is exciting.
They've said they'll open source the compiler alongside the 1.0 release.
I am actually on the lookout for a low-level language that compiles to WebAssembly, to write a (relatively small) supervised learning model that I plan to make good enough for 5-year-old phone CPUs. I have a working prototype in Julia and was planning on (eventually) rewriting it in Rust, mostly for the WebAssembly target. I come from a high-level language background, so the thought of rewriting in Rust is a little daunting. So I was excited to learn about Mojo and to find out whether they had a WebAssembly target in their compiler.
But then I read this:
> AI native
> Mojo is built from the ground up to deliver the best performance on the diverse hardware that powers modern AI systems. As a compiled, statically-typed language, it's also ideal for agentic programming.
Well, no thank you. I know the irony here but I want nothing to do with a language made for robots.
I’ve written Python for the past 25 years or so. I dig it. But I don’t think I’ve started a new Python project since starting to experiment with Rust. A lot (not all!, but a lot) of Rust patterns look a lot like Python if you squint at it just right. I also think that writing lots of Rust has made me better at writing Python. The things Rust won’t let you get away with are things you shouldn’t be doing almost anywhere else.
Go on, give it a shot. It stops being intimidating soon! And remember that the uv we all love was heavily influenced by Cargo.
If you’re searching for a language that has the same strong memory safety as Rust but is a bit easier to write, you should give Swift a go.
I actually have written Rust, but it has been a minute. I think my last project was a backend for a massive online multiplayer theremin jam session (site no longer up, but the HN discussion still exists: https://news.ycombinator.com/item?id=10875211), about 10 years ago.

I remember Rust very fondly, in fact. And I had the same experience as you: learning Rust made me a better JavaScript programmer. Let's see if a little neural network can be as fun.
> No more choosing between productivity and performance - Mojo gives you both.
That's a very big claim.