I used to work at a drug discovery startup. A simple model generating directly from latent space 'discovered' some novel interactions that none of our medicinal chemists had noticed: for example, it started biasing toward a distribution of molecules that was totally unexpected to us.
Our chemists were split: some argued it was an artifact, others dug deep and provided some reasoning as to why the generations were sound. Keep in mind that this was a non-reasoning, very early-stage model with simple feedback mechanisms for structure and molecular properties.
In the wet lab, the model turned out to be right. That was five years ago. My point is that the moment that arrived for our chemists will soon arrive for theoreticians.
A lot of interesting possibilities lie in latent space. For those unfamiliar, this means the underlying set of variables that drive everything else.
For instance, you can put a thousand temperature sensors in a room, which give you 1,000 temperature readouts. But all these sensors are correlated, and if you project them down to latent space (using PCA or PLS if linear, manifold methods if nonlinear) you’ll create maybe 4 new latent variables (usually linear combinations of the original variables) that describe all the sensor readings; it’s a kind of compression. All you have to do then is control those 4 variables, not 1,000.
In chemical space, there are thousands of possible combinations of process conditions and mixtures that produce certain characteristics, but when you project them down to latent variables, there are usually fewer than 10 that give you the properties you want. So if you want to create a new chemical, all you have to do is target those few variables. You want a new product with particular characteristics? Figure out how to get fewer than 10 variables (not thousands) to their targets, and you have a new product.
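A minimal sketch of the sensor example in Python (scikit-learn assumed; the synthetic data and the numbers are made up purely for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

# Fake data: 1000 correlated "temperature sensors", all driven by a
# handful of hidden factors (e.g. HVAC output, sun through a window).
rng = np.random.default_rng(0)
n_samples, n_sensors, n_factors = 500, 1000, 4
hidden = rng.normal(size=(n_samples, n_factors))    # the true latent drivers
mixing = rng.normal(size=(n_factors, n_sensors))    # how each sensor responds to them
readings = hidden @ mixing + 0.1 * rng.normal(size=(n_samples, n_sensors))

# Project the 1000 readings down to a few latent variables.
pca = PCA(n_components=4)
latent = pca.fit_transform(readings)                 # shape (500, 4)
print(pca.explained_variance_ratio_.sum())           # ~0.99: 4 variables carry almost everything
```

Controlling the room then amounts to steering those four scores instead of the thousand raw readings.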
At the end of the generative funnel we had a filter and it used (roughly) the mechanism you're describing.
https://www.pnas.org/doi/10.1073/pnas.1611138113
You summarized it very well!
It's been a while since I've played in the area, but is PCA still the go to method for dimensionality reduction?
PCA (essentially SVD) is the one that makes the fewest assumptions. It still works really well if your data is (locally) linear and more or less Gaussian. PLS is the regression version of PCA.
There are also nonlinear techniques. I’ve used UMAP and it’s excellent (particularly if your data approximately lies on a manifold).
https://umap-learn.readthedocs.io/en/latest/
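Basic usage is just a couple of lines (a sketch assuming the umap-learn package; `X` here is a placeholder feature matrix, use your own):

```python
import numpy as np
import umap  # pip install umap-learn

X = np.random.rand(1000, 50)   # placeholder data, shape (n_samples, n_features)

reducer = umap.UMAP(n_components=2, n_neighbors=15, min_dist=0.1, random_state=42)
embedding = reducer.fit_transform(X)   # shape (1000, 2)
```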
The most general purpose deep learning dimensionality reduction technique is of course the autoencoder (easy to code in PyTorch). Unlike the above, it makes very few assumptions, but this also means you need a ton more data to train it.
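Something like this is roughly what I mean by "easy to code": a minimal, untuned toy sketch, assuming a 1000-dimensional input compressed to 4 latent variables (the layer sizes and training data are arbitrary placeholders):

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_inputs=1000, n_latent=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_inputs, 128), nn.ReLU(),
            nn.Linear(128, n_latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 128), nn.ReLU(),
            nn.Linear(128, n_inputs),
        )

    def forward(self, x):
        z = self.encoder(x)       # compressed (latent) representation
        return self.decoder(z)    # reconstruction of the input

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(256, 1000)        # placeholder batch; real data goes here
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)   # reconstruction error drives the compression
    loss.backward()
    optimizer.step()
```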
PCA is nice if you know relationships are linear. You also want to be aware of TSNE and UMAP.
A lot of relationships are (locally) linear so this isn’t as restrictive as it might seem. Many real-life productionized applications are based on it. Like linear regression, it has its place.
t-SNE is good for visualization and for seeing class separation, but I haven’t found it to work for dimensionality reduction per se (maybe I’m missing something). For me, it’s more of a visualization tool.
On that note, there’s a newer algorithm that improves on t-SNE called PaCMAP, which preserves local and global structure better. https://github.com/YingfanWang/PaCMAP
There's also Bonsai. It's parameter-free and supposedly 'better' than t-SNE, but it's clearly aimed at visualisation purposes (except that in Bonsai trees, distances between nodes are 'real', which is usually not the case in t-SNE).
https://www.biorxiv.org/content/10.1101/2025.05.08.652944v1....
Interesting! Depending on your definition, "automated invention" has been a thing since at least the 1990's. An early success was the evolved antenna [1].
1. https://en.wikipedia.org/wiki/Evolved_antenna
IBM has done this with pharmaceuticals for ages, no? That’s why they have patents on what would be the next generation of ADHD medications, e.g. 4F-MPH?
Reminds me of this story on the Babbage podcast a month ago:
https://www.economist.com/science-and-technology/2025/07/02/...
My understanding is that iterating on possible sequences (of codons, base pairs, etc.) is exactly what LLMs, these feedback-looped predictor machines, are especially great at. The newest models, those that "reason about" (check) their own output, are even better at it.
Warning: the comment below comes from someone who has no formal science degree and just enjoys reading articles on the topic.
Similarly for physicists: I think there’s a very confusing/unconventional antenna called the “evolved antenna” which was used on a NASA spacecraft. The design behind it came from genetic programming. Why the antenna’s bends at different points increase gain is still not well understood today.
This all boils down to empirical reasoning, which underlies the vast majority of science (and science-adjacent fields like software engineering, the social sciences, etc.).
The question, I guess, is: do LLMs, “AI”, and ML give us better hypotheses or tests to run to support empirical, evidence-based science breakthroughs? The answer is yes.
Will these be substantial and meaningful, and create significant improvements over today’s approaches?
I can’t wait to find out!
Hallucinations or inhuman intuition? An obvious mistake made by a flawed machine that doesn't know the limits of its knowledge? Or a subtle pattern, a hundred scattered dots that were never connected by a human mind?
You never quite know.
Right now, it's mostly the former. I fully expect the latter to become more and more common as the performance of AI systems improves.
If AI comes up with new drugs or treatments, does that mean it's public knowledge and can't be copyrighted?
Wouldn't that mean a fall of US pharmaceutical conglomerates, based on current laws about copyright and AI content?
Drugs discovered by humans aren't protected by copyright either.
This is really cool. Have you (or your colleagues) written anything about what you learned about ML for drug discovery?
Ok but I have to point out something important here. Presumably, the model you're talking about was trained on chemical/drug inputs. So it models a space of chemical interactions, which means insights could be plausible.
GPT-5 (and other LLMs) are by definition language models, and though they will happily spew tokens about whatever you ask, they don't necessarily have the training data to properly encode the latent space of (e.g.) drug interactions.
Confusing these two concepts could be deadly.
An interesting debate!
A few things to consider:
1. This is one example. How many other attempts did the person try that failed to be useful, accurate, or coherent? The author is an OpenAI employee IIUC, which raises the question. Sora's demos were amazing until you tried it and realized it took 50 attempts to get a usable clip.
2. The author noted that humans had updated their own research in April 2025 with an improved solution. For cases where we detect signs of superior behavior, we need to start publishing the thought process (reasoning steps, inference cycles, tools used, etc.). Otherwise it's impossible to know whether this used a specialty model, had access to the more recent paper, or in other ways got lucky. Without detailed proof, it's becoming harder to separate legitimate findings from marketing posts (not suggesting this specific case was a pure marketing post).
3. Points 1 and 2 would help with reproducibility, which is important for scientific rigor. If we give Claude the same tools and inputs, will it perform just as well? This would help the community understand whether the novelty is in GPT-5 or in how the user is prompting it.
> This is one example. How many other attempts did the person try that failed to be useful, accurate, or coherent? The author is an OpenAI employee IIUC, which raises the question. Sora's demos were amazing until you tried it and realized it took 50 attempts to get a usable clip.
If you could combine this with automated theorem proving, it wouldn't matter if it was right only 1 out of a 1000 times.
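A rough sketch of that generate-and-verify workflow (everything here is a hypothetical stand-in: `propose_proof` for an LLM call, `check_proof` for a formal verifier such as a Lean/Coq kernel check; the 0.1% acceptance rate is invented for illustration):

```python
import random

def propose_proof(statement):
    # Hypothetical stand-in for an LLM drafting a candidate proof.
    return f"candidate proof of {statement!r} #{random.randrange(10**6)}"

def check_proof(statement, candidate):
    # Hypothetical stand-in for a formal checker; accepts ~1 in 1000 drafts here.
    return random.random() < 0.001

def search_for_proof(statement, attempts=10_000):
    for _ in range(attempts):
        candidate = propose_proof(statement)
        if check_proof(statement, candidate):
            # Soundness comes from the checker, not from the generator's hit rate.
            return candidate
    return None

print(search_for_proof("some theorem"))
```

As long as verification is cheap and sound, a very low hit rate on the generation side is tolerable.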
The most difficult part of automated theorem proving is not the "tactic" part but the formulation.
(Theory building is quite hard in math; the computation side is only hard after a point).
4. How many times has this happened already, but the human took credit for the output because they had no incentive to give credit to the LLM?
I'd say a lot of people even have an incentive to not give credit to the LLMs, because there is a social stigma associated with using AI, due to its association with low-quality work.
People are delusional. There’s a large cohort of folks on HN who still think AI is just a stochastic parrot. Depending on the topic or the thread you’ll find more of those people and get voted down if you even imply that LLMs can reason.
I don’t think it’s that they don’t have the incentive. I think it’s that it’s unclear whether giving credit to the LLM means that OpenAI or similar would be considered an author, in which case that could really screw up intellectual property and make using LLMs much less attractive. If the LLM wants attribution, then it’s sentient, and if it’s sentient, it may be given personhood (the Johnny Five scenario) and get rights. Then it would be a writer, it could influence the license, and intellectual property might belong partially to it, unless it willingly became an employee of, or contracted with, a ton of companies and organizations.
> This is one example. How many other attempts did the person try that failed to be useful, accurate, or coherent?
High chance, given that this is the same guy who came up with the SVG unicorn ("Sparks of AGI"), which raises the same question even more obviously.
I don’t get why so many people are resistant to the concept that AI can prove new mathematical theorems.
The entire field of math is fractal-like. There is low-hanging fruit everywhere. Much of it is rote and not life-changing. A big part of doing “interesting” math is picking what to work on.
A more important test is to give an AI access to the entire history of math and have it _decide_ what to work on, and then judge it for both picking an interesting problem and finding a novel solution.
That's not the issue. The issue has always been that of knowledge and epistemology.
This is why the computer-assisted proof of the four-color theorem was such a talking point in math/CS circles: how do you "really" know what was proven? This is slightly different from, say, an advisor who trains his students: you can often sketch out a proof, even though the details require quite a bit of work.
For me it comes down to signal vs noise.
I’m absolutely confident that AI/LLMs can solve things, but you have to sift through a lot of crap to get there. Even further, it seems AI/LLMs tend to solve novel problems in very unconventional ways. It can be very hard to know if an attempt is doomed or just one step away from magic.
At that point, is it really solving or is it just monkeys with typewriters?
"Monkeys with typewriters" is, in one sense, a uniform sampling of the probability space. A brute-force search, even when using structured proof assistants, takes a very long time to find any hard proof, because the possibility space is roughly (number of terms) raised to the power of (length of the proof).
But similarly to how a computer plays chess, using heuristics to narrow down a vast search space into tractable options, LLMs have the potential to be a smarter way to narrow that search space to find proofs. The big question is whether these heuristics are useful enough, and the proofs they can find valuable enough, to make it worth the effort.
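To put rough numbers on that (purely illustrative; both branching factors are invented):

```python
# Brute force: roughly (number of candidate terms) ** (proof length).
terms, proof_len = 100, 20
brute_force_states = terms ** proof_len   # 10**40 candidate proofs to enumerate

# A heuristic (e.g. LLM-guided) search that keeps only ~3 promising
# next steps at each point explores a vastly smaller space.
guided_states = 3 ** proof_len            # ~3.5 billion

print(f"{brute_force_states:.2e} vs {guided_states:.2e}")
```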
I like the idea of letting AI try to formulate new math problems that are interesting, i.e., worthy of research. I guess we are still a number of iterations away from AI getting there, though.
I think a simple way to take emotion out of this is to ask if a computer can beat humans at math. The answer to that is pretty much "duh". Symbolic solvers and numerical methods outperform humans by a wide margin and allow us to reach fundamentally new frontiers in mathematics.
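For a concrete sense of the gap, a symbolic solver knocks out in milliseconds things that are genuinely tedious by hand (a small SymPy sketch; the particular examples are mine, not from the thread):

```python
import sympy as sp

x, n = sp.symbols('x n')

# A Gaussian integral most people only remember as a formula:
print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))   # sqrt(pi)

# A closed form for the Basel series, painful to derive manually:
print(sp.summation(1 / n**2, (n, 1, sp.oo)))              # pi**2/6
```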
But it's a separate question of whether this is a good example of that. I think there is a certain dishonesty in the tagline. "I asked a computer to improve on the state-of-the-art and it did!". With a buried footnote that the benchmark wasn't actually state-of-the-art, and that an improved solution was already known (albeit structured a bit differently).
When you're solving already-solved problems, it's hard to avoid bias, even just in how you ask the question and otherwise nudge the model. I see it a lot in my field: researchers publish revolutionary results that, upon closer inspection, work only for their known-outcome test cases and not much else.
Another piece of info we're not getting: why this particular, seemingly obscure problem? Is there something special about it, or is it data dredging (i.e., we tried 1,000 papers and this is the only one where it worked)?
As others have said, computers already help prove theorems like the four-color theorem. It’s not that shocking that LLMs can prove a relative handful of obscure theorems. An “alpha-theorem” type system (neural-net-directed “brute force” search) will probably also be able to prove some theorems. There is no evidence today that there will be a massive breakthrough in math due to those systems, let alone through LLM-type systems.
If LLMs were already a breakthrough in proving theorems, even for obscure minor theorems, there would be a massive increase in published papers due to publish or perish academic incentives.
There are more programmers resistant to the concept of AI because of pride.
Programmers take pride in their ability to program, and reducing their own abilities to an algorithm reproducible by an LLM is both an attack on their pride and an attack on their livelihood.
It’s the same reason why artists say AI art is utter crap when, in a blind test, they usually won’t be able to tell the difference.
I'm proud of some of the code I wrote before LLMs exploded. I'm more proud of the products that were built with the code. But post LLM I'm not really proud of either. I just feel very glad I got to experience coding pre-LLMs for many years.
The code I commit is better now and developed faster and better tested but it isn't that fun anymore.
I'm not sure why this is surprising or newsworthy; it has been this way ever since o3. I guess few people noticed.
There are a few master's-level publishable research problems that I have tried with LLMs in thinking mode, and they produced a nearly complete proof before we had a chance to publish. Like the problem stated here, these won't set the world on fire, but they do chip away at more meaningful things.
It often doesn't produce a completely correct proof (it's a matter of luck whether it nails a perfect proof), but it very often does enough that even a less competent student can fill in the blanks and fix up the errors. After all, the hardest part of a proof is knowing which tools to employ, especially when those tools can be esoteric.
If you think of this as a search, retrieval, and “application” problem over the space of convex optimization proof techniques, it’s not a particularly striking result to a mathematician, partly because the space of results/techniques, and crucially of applications of those results and proof techniques, is very rich (it’s an active field with many follow-up papers).
On the other hand, I have a collection of unpublished results in less active fields that I’ve tested every frontier model on (publicly accessible and otherwise), and each time the models have failed to solve them. Some of these are simply reformulations of results in the literature that the models are unable to find/connect, which is what leads me to formulate this as a search problem where the space is not densely populated enough in this case (in terms of activity in these subfields).
So this looks like a useful use case for an MCP pipeline. You have a set of agents designed to look through the mathematics to discover what is useful; you then pipeline over to a set of agents distinguished by having training data on particular mathematical fields; you have cross-checking proof agents; and then you have connections-between-fields agents. You see if you can use connections across mathematics through Wikipedia to crawl through the math and see if there's a correspondence between citations or links between proofs and their mathematical interdependence. Then you can start building trees and see if you can make tighter linkages between set theory and number theory or topology or something, and then see if you can get closer to a total or universal theory of mathematics (or prove it doesn't exist). If you can do that by just throwing money at it, why not? It might be able to come up with all sorts of applications in the real world just by default.
Claim: publish a paper in one of the best mathematical journals instead of a Twitter thread
Interesting if true, but this isn't the first time we've heard of something like this.
Quanta published an article about a physics lab asking ChatGPT to help come up with a way to perform an experiment, and ChatGPT _magically_ came up with an answer worth pursuing. But what actually happened was that ChatGPT was referencing papers that basically went unread from less famous labs/researchers.
It's amazing that ChatGPT can do something like that, but `referencing data` != `deriving theorems`, and the person posting this shouldn't just claim "ChatGPT derived a better bound" in a proof; they should first do a really thorough check of whether this information could have just ended up in the training data.
> what actually happened was that ChatGPT was referencing papers that basically went unread from less famous labs/researchers
Which is actually huge. Reviewing and surfacing all the relevant research out there that we are just not aware of would likely have at least as much impact as some truly novel thing that it can come up with.
Maybe we should think of current AIs as not so much artificial intelligence, as collective intelligence. Which itself can be extremely valuable.
No, this is not permitted. Until today, the world agreed that the product always belongs to the creator or user of LLMs.
It turns out that if you use a fancy search engine to search instead of pretending that it’s intelligent, it will actually be good at its job. Who would have guessed?
How would we know it was referencing an old paper versus almost everything trivial already having a derivation somewhere?
One signal is to check the journal. Most reputable journals won't publish a paper claiming a new technique if it's actually trivial and well-known.
The "trivial" is slightly tongue-in-cheek. It must be trivial; I've just shown it!
> but what actually happened was that ChatGPT was referencing papers that basically went unread from less famous labs/researchers
now let's invalidate probably 70% of all patents
I know this was a throwaway, but finding prior art for a large group of existing patents would be a cool application.
it was half-serious.
if LLMs aren't being used by https://patents.stackexchange.com/ or patent troll fighters, shame on them.
Any mathematicians who have actually called it "new interesting mathematics", or just an OpenAI employee?
The paper in question is an arXiv preprint whose first author seems to be an undergraduate. The theorem in it which GPT improves upon is perfectly nice; there are thousands of mathematicians who could have proved it had they been inclined to. AI has already solved much harder math problems than this.
The OpenAI employee posting this is a well known theoretical computer scientist: https://en.wikipedia.org/wiki/S%C3%A9bastien_Bubeck
Yes, he published a paper claiming GPT-4 has "sparks" of AGI. What else is he known for in the field of computer science?
https://arxiv.org/abs/2303.12712
Hello, TCS assistant professor here: he is legitimately respected among his peers.
Of course, because I am a selfish person, I'd say I appreciate most his work on convex body chasing (see "Competitively chasing convex bodies" on the Wikipedia link), because it follows up on some of my work.
Objectively, you should check his conference submission record; it will be a huge number of A*/A CORE-ranked conferences, which means the best possible in TCS. Or the prizes section on Wikipedia.
Not sure if you're trying to be provocative, but you could just click his name in the link you provided to find a lengthy list of arXiv preprints: https://arxiv.org/search/cs?searchtype=author&query=Bubeck,+...
More comments from another mathematician:
https://x.com/ErnestRyu/status/1958408925864403068?t=QmTqOcx...
Hypothesis: if you had ~$1M to burn, I think we should try setting up an AI agent to explore and try to invent new mathematics. It turns out agents can get an IMO gold with just the production Gemini 2.5 Pro model. Therefore I suspect a swarm of agents burning through tokens like there's no tomorrow could invent new math.
Reference: https://arxiv.org/abs/2507.15855
Alternative: if the Gemini Deep Think or GPT-5 Pro people are listening, I think they should give free access to their models, with potential scaffolding (i.e., agentic workflows), to, say, ~100 researchers to see if any of them can prove new math with their technology.
Further in the thread, the guy notes that this isn't "new" mathematics - a better proof with tighter bounds was published in April:
https://xcancel.com/SebastienBubeck/status/19581986678373298...
Are we sure this guy is not someone being mirrored by a recursive non-governmental system?
Context: https://x.com/GeoffLewisOrg/status/1945864963374887401
What does this even mean? This reads like an SCP thing.
It is exactly SCP regurgitated by the LLM, and this guy thinks it's all true.
This is either satire that's over my head or mental illness.
This is one of 4o’s biggest flaws. If you are a conspiracy theorist, it’ll confirm any outlandish theory you can come up with, and provide invented receipts to go with it. Of course, it’s just model hallucinations, but for those who are already primed to believe that secrets are being kept, it gives the “evidence” they were always looking for.
"Your correction is correct! Jet fuel can "melt" steel beams because steel is a solid metal that requires heating to its melting point (around -30°C) to transform into a liquid..."
Lmao, I love it. Is this the new Q-Anon?
I guess arithmetic is just harder for an LLM than higher math.
Arithmetic is harder for mathematicians than higher maths too =P not even joking. It was a meme in my university's maths dept for a reason.
https://en.wikipedia.org/wiki/57_(number)
aka the Grothendieck prime!
In a group, you’d usually let the freshest handle splitting the bill because everyone else forgot arithmetic.
It might take a while, but their answer would always be correct. The same cannot be said for LLMs.
Mathematicians make calculations in their errors all the time.
Whoops, switched some words around on accident.
Yeah, of course I agree with that =)
In this post, https://blog.google/products/gemini/gemini-2-5-deep-think/, the professor Google worked with also claimed to have proved a previously unproven conjecture.
Alas, GPT-5 Pro (and friends) will also happily and confidently give you nonsense proofs of supposed theorems.
But yes, it's getting better and better.
The coolest part about this, IMO, is that they used the same model we all have access to (GPT-5 Pro), not some secret invite-only model.
It can't reason -> It can't make new discoveries -> It can only tie together bespoke missed data -> It can make some basic discoveries -> ??????
It doesn't outsmart the entirety of humankind combined, so it's not actually intelligent. Duh.
Claim: a randomizer can prove new mathematics as long as you keep checking every single one
I wanted to know how to set the environment variables for CGI in IIS. The GPT-5 thoughts made a totally unrelated picture and then gave the wrong answer.
Gamechanger! And worrisome for us laymen.
In the thread, they note a human had already come up with (and published) an even better solution.
Before AI, but while you (and I) were still unable to contribute anything meaningful or novel to discussions of mathematics, did you feel threatened?
I cannot wait for all we hold to be holy and sacred about the human mind to be slowly unravelled by AI. It will remove the chains of status associated with these fields and allow people to move into higher modes of being.
What are these higher modes? I'm very excited to hear about them.
Yes, that is why the chess world championship allows Stockfish assistance in order to democratize chess.
The big difference is that chess is a game/sport, and those are about competition between humans. It's a deliberately restricted ruleset to encourage such, thus the (imperfect) banning of assistance.
The same doesn't really apply to everything outside of that.
Still, you'd think that status would remain; it's not like the invention of the car removed the glory of being the world's fastest sprinter.
Including techbros thinking they have the answer to every question humanity has ever asked?