Lean was a game-changer for me as someone with a "hobby"-level interest in abstract mathematics. I don't have the formal education that would have cultivated the practice and repetition needed to know, on a gut level, the kinds of formal manipulations needed for precise and accurate proofs. But Lean (combined with its incredibly well designed abbreviation expansion) gives probably the most intuitive way to manipulate formal mathematical expressions that you could hope to achieve with a keyboard.
It provides tools for discovering relevant proofs, theorems, etc. Toying around with Lean has actively taught me math that I didn't know before. And it catches me any time I fall into informal thinking and start making assumptions that aren't actually valid.
I don't know of any way to extract the abbreviation engine that the Lean plugins use in the relevant editors for use in other contexts, but man, I'd honestly love it if I could type \all or \ne to get at all of the mathematical Unicode characters trivially. Or even extend it to support other Unicode characters that I might find useful to type.
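For anyone who hasn't tried it: \all and \ne are indeed the abbreviations for ∀ and ≠ in the Lean 4 VS Code extension, and a statement like the one below can be typed without ever leaving the keyboard (a minimal sketch; the Mathlib import is only there for the ℕ notation):

```lean
import Mathlib.Tactic  -- only for the ℕ notation; the abbreviations are editor-side

-- Typed as: example : \all n : \N, n + 1 \ne 0 ...
example : ∀ n : ℕ, n + 1 ≠ 0 := fun n => Nat.succ_ne_zero n
```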
Bessis [1] argues that formalism - or, loosely, math writing - is foundational to clarifying intuition and meaning in a way that natural language cannot. Imagine it as a scalpel carving precise shapes out of the blur of images we carry, thereby allowing us to "see" things we otherwise cannot.
I am curious to try out Lean to understand how its definitions are able to operationally capture meaning in an unambiguous manner.
[1] https://www.amazon.com/Mathematica-Secret-World-Intuition-Cu...
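To give a flavor of how that works in practice, here is a minimal sketch (my own toy definition, not Mathlib's): a Lean definition fixes one exact meaning, and everything you later claim about it is checked against that meaning.

```lean
-- A toy definition: evenness as a computable predicate.
def isEven : Nat → Bool
  | 0     => true
  | 1     => false
  | n + 2 => isEven n

#eval isEven 10  -- true
#eval isEven 7   -- false

-- Proofs are held to exactly this definition; informal readings can't sneak in.
example : isEven 10 = true := rfl
```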
For mathematics and certain other fields, that is true. But the choice of formalism matters, and as some have argued, the Fregean style that came to dominate in the 20th century is ill-suited to some fields, like linguistics. One argument is that linguists using this style inevitably recast natural language in the image of the formalism. (Traditional logic is better suited here, as its point of departure is the grammar of natural language itself.)
No formalism is ontologically neutral in the sense that there is always an implied ontology or range of possible ontologies. And it is always important to make a distinction between the abstractions proper to the formalism and the object of study. A common fallacy involves reifying those abstractions into objects of the theory, at least implicitly.
I just had a similar discussion with a coworker: he argued that LLMs are practically useful, but I argued they are kind of bad because nobody knows how they really work. I think it's somewhat of a return to the pre-Enlightenment situation, where expert authority was to be taken at its word and there was no way to externally verify the expert's intuitive thought process. I believe the success of science and engineering is based on our formal understanding of processes and the externalization of our thoughts.
Similarly, in mathematics, formalization was driven by this concern, so that we wouldn't have to rely on potentially wrong intuition.
I am now in favor of formalizing all serious human discourse (probably in some form of rich fuzzy and modal logic). I understand the concern about definitions, but in communication it's better to agree on a definition (which could be fuzzy) than to use two random definitions and hope they match. (I am reminded of the koan about Sussman and Minsky: http://www.catb.org/jargon/html/koans.html)
For example, we could formally define an airplane as a machine that usually has wings and usually flies. This would be translated into a formula in fuzzy logic which would take, for a given object, our belief that it is a machine, has wings, and flies, and would return the degree to which it is an airplane under some notion of "usually".
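A toy rendering of that idea (in Lean, since it's the language at hand; the numbers, names, and the min-based semantics are illustrative assumptions, not a worked-out fuzzy calculus):

```lean
-- Beliefs are degrees in [0, 1]; "usually" is sketched as discounting
-- the demand for full certainty. All weights here are made up.
def usually (belief : Float) : Float := min 1.0 (belief + 0.2)

def airplaneness (isMachine hasWings flies : Float) : Float :=
  min isMachine (min (usually hasWings) (usually flies))

#eval airplaneness 0.9 0.8 0.95  -- the degree to which the object is an airplane
```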
I freely admit this approach wouldn't work for Dadaist literary writers, but I don't want lawyers or politicians or scientists to be that.
The project to formalize everything has been tried before and abandoned. Some issues:
https://metarationality.com/sort-of-truth
Formalism isn't the right tool for a lot of semi-factual fields like journalism or law. Even in business, numbers are of course used in accounting, but much of it depends on arbitrary definitions and estimates. (Consider depreciation.)
The people who make theorem provers, because they are type theorists rather than set theorists doing ZFC derivatives, are very aware of your last point. Painfully aware, from years of people dismissing their work.
Read Andrej Bauer on the many foundations of math, for example. Clearly he is a believer in "no one true ontology".
> The people that make theorem provers [...] are very aware of your last point.
> Clearly he is a believer in "no one true ontology".
My point wasn't that you should aim for some kind of fictitious absence of ontological commitments, only that whatever language you use will have ontological commitments. Even the type judgement e:t has ontological implications, i.e., for the term e to be of type t presupposes that the world is such that this judgement is possible.
You can still operate under Fregean/Russellian presuppositions without sets. For example, consider the problem of bare particulars or the modeling of predicates on relations.
Indeed, and e:t in type theory is quite a strong ontological commitment: it implies that the mathematical universe is necessarily subdivided into static types. My abstraction logic [1] has no such commitments; it doesn't even presuppose any abstractions. Pretty much the only requirement is that there are at least two distinct mathematical objects.
[1] http://abstractionlogic.com
I mean, if you understand Lean's system, then you understand the formal manipulation needed for precise and accurate proofs. Most mathematical papers are rather handwavy about things and expect readers to fill in the formalism, which doesn't always work out, as we have seen.
I've been excited about Lean for years, not because of correctness guarantees, but because it opens the door to doing maths using software development methods.
Libraries of theorems and mathematical objects, with well-defined abstractions that are ergonomic to apply in target use cases. Accompanied by good documentation, focused less on how the theorems are proven (how the functions are implemented) and more on what to use them for and how. With proper version control and package management.
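Concretely, this already looks like ordinary library use (a small sketch against Mathlib):

```lean
import Mathlib.Tactic

-- Treat Mathlib like any well-packaged library: import it, find the lemma
-- you need, and apply it without re-deriving (or even reading) its proof.
example (a b : ℕ) : a + b = b + a := Nat.add_comm a b

-- Tactics like `ring` hide a whole theory behind an ergonomic interface:
example (a b : ℕ) : (a + b) ^ 2 = a ^ 2 + 2 * a * b + b ^ 2 := by ring
```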
I believe that all these practices could vastly improve collaboration and research velocity in maths, as much as or more than AI, although the two are highly complementary. If maths is coding, AI will be much better at it, and AI will be more applicable to it.
Out of curiosity, does anyone know of mathematicians actively leaning into AI + Lean?
Terence Tao is well known for being enthusiastic about Lean and AI and he regularly posts about his experiments.
He is also a serious research mathematician at the top of his game, considered by many one of the best mathematicians alive. This impression might be biased by the fact that, as such a good communicator, he is more visible than other similarly good mathematicians, but he is a Fields medallist all the same.
Kevin Buzzard has been the main mathematician involved with Lean. This is a recent talk where he discusses putting it together with LLMs (he's somewhat sceptical it'll be revolutionary for producing new mathematics any time soon):
https://www.youtube.com/watch?v=K5w7VS2sxD0
I'm leaning a lot into AI + Lean. It's a fantastic tool for finding new proofs. The extremely rigid nature of Lean means you can really check programs for correctness. So that part of AI is solved. The only thing that remains is generating proofs, and that is where there's nothing in the AI space right now. As soon as we do get something, our mathematical knowledge is going to explode.
Terence Tao posts on Mathstodon fairly regularly about Lean, AI, and math. I'm not going to interpret his posts.
As a hobbyist mathematician / type theorist, I find ChatGPT et al. great at 'looking up' theorems that you want to exist but that you may not have read about yet. They're also good at connecting disparate areas of math. I don't think Lean subsumes AI. Rather, Lean allows you to check the AI's proof. ChatGPT genuinely does have a knack for certain lines of thought.
LLMs and Lean are orthogonal; neither subsumes the other.
They both can be useful or harmful, due to their respective strengths and trade-offs.
PAC/statistical learning is good at needle-in-a-haystack problems, assuming the tail losses, simplicity bias, and corpus-representation issues are acceptable, you understand that it is fundamentally existential quantification, and you control for automation bias, etc.
Lean is a wonderful collection of concepts and heuristics, but due to Rice's theorem, Gödel, etc., it will not solve all problems with software development.
How Gödel’s second incompleteness theorem shows that you can prove anything without that proof being meaningful is a lens into that.
It is horses for courses. And remember that even in sub-TC total functional programming, verifying an arbitrary function is very hard, while constructing a correct one is far more tractable.
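That contrast can be made concrete in Lean terms (a toy sketch of my own): when you construct the function, its specification comes along for free; an arbitrary function handed to you offers nothing to compute with.

```lean
-- Constructing a function together with its spec: routine.
def double (n : Nat) : { m : Nat // m = 2 * n } := ⟨2 * n, rfl⟩

#eval (double 21).val  -- 42, with the proof of `m = 2 * n` carried alongside

-- Verifying an arbitrary, opaque `f : Nat → Nat` against the same spec
-- gives you nothing to recurse on, and in general it's undecidable (Rice).
```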
Even those proofs don’t demonstrate semantic correctness. History is riddled with examples of people using powerful tools that elegantly explain flawed beliefs.
The 2009 crash and the Gaussian copula are an example.
Get all the value you can out of these tools, but use caution, especially in math, where superficially similar formalisms often have conflicting conventions, constraints, and assumptions.
Obviously, if your problem is ergodic with the Markov property, both will help, but automated theorem proving and PAC learning will never be a meta-theory of one another, IMHO.
> How Gödel’s second incompleteness theorem shows that you can prove anything,
That is not at all what it says.
> They both can be useful or harmful,
If a proof is admitted into Lean, there is no doubt as to its truth. There is no way in which Lean can be construed as harmful.
> The 2009 crash and gaussian copula as an example.
There is nothing mathematical about the economics behind the 2009 crash. Such things are statistical measurements, which admit the possibility of failure, not mathematical conclusions that are demonstrably true.
> Gödel’s second incompleteness theorem shows that you can prove anything, without that proof being meaningful is a lens into that.
What does Gödel incompleteness have to do with that? We can just take any sentence φ as an axiom, and we have a trivial proof of it.
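In Lean terms, that point looks like this (a deliberately silly sketch):

```lean
-- Declare any proposition as an axiom and it is "proved" instantly;
-- the proof is trivial precisely because it carries no meaning.
axiom phi : Prop
axiom phi_holds : phi

theorem phi_is_a_theorem : phi := phi_holds
```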
I have proven quite a few theorems in Lean (and other provers) in my life, and the unfortunate reality is that for any non-trivial math, I still have to figure out the proof on paper first, and can only then write it in Lean. When I try to figure out the proof in Lean, I always get bogged down in details and lose sight of the bigger picture. Maybe better tactics will help. I'm not sure.
It's good to remind yourself of Bill Thurston's points: https://www.ams.org/journals/bull/1994-30-02/S0273-0979-1994...
I love the analogy in David Bessis's wonderful book Mathematica (nothing to do with Wolfram). We all know how to tie our shoes. Now write out, in words and symbols, instructions that teach someone how you tie your shoes. That is what a proof is.
Often even people with STEM degrees confuse what mathematicians do with the visible product of it - symbols and words on a page. While the formalism of mathematics has immense value for precision, and provides a "serialization language" (to borrow a CS analogy), confusing the two would be akin to confusing a toaster with the toaster's manual, or shoelaces with the instructions for tying them.
Noob question here.
Say I want to formalize a proof. How do I know that what I'm writing is actually a correct formulation?
If it gets more complicated, this problem gets worse. How do I know the thing it is checking is actually what I thought it was supposed to check?
I guess this is a bit like when you write a program and you want to know if it's correct, so you write some tests. But often you realize your tests don't check what you thought.
You don't know. Even with the best theorem provers, your definitions are still trusted. The best way I've found to deal with this is to keep your definitions simple and to try to use them to do things (e.g. can you use your definition to solve other problems, does it work on some concrete examples, etc.).
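A minimal sketch of that advice (my own toy definition; the tactics come with Mathlib):

```lean
import Mathlib.Tactic

-- A definition we want to gain trust in:
def myEven (n : Nat) : Prop := ∃ k, n = 2 * k

-- Exercise it on concrete cases; a surprise here means the definition is off.
example : myEven 4 := ⟨2, rfl⟩
example : ¬ myEven 3 := by
  rintro ⟨k, hk⟩  -- unpack the claimed witness
  omega           -- no k satisfies 3 = 2 * k
```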
Being sure that you are proving the right thing is something that can never be formally guaranteed.
I want to respond to each of his points one by one.
> powering various math tools
I don't think going through math proofs as if they were computer programs is a good way to approach mathematics. In mathematics, I think the important thing is developing a good intuition and mental model of the material. It's not a huge problem if the proof isn't 100% complete or correct as long as the general approach is good. That's unlike programming, where you need a program to work 99.9% of the time and have to pay close attention to all the minute details.
> analyzing meta-math trends
I'm highly skeptical of the usefulness of this approach for identifying non-trivial trends. In mathematics, the same kinds of principles can appear in many different forms, and you won't necessarily use the same language or cite the same theorems even though the parallels are clear to those who understand them. Perhaps LLMs, with their impressive reasoning abilities, can identify such parallels, but I doubt a simple program would yield useful insights.
> Basically, the process of doing math will become more efficient and hopefully more pleasant.
I don't see how his points make things more efficient. It seems like it's adding a bunch more work. It definitely doesn't sound more pleasant.
Related: Terence Tao discussing Lean (a programming language for formalising mathematical proofs) on the Lex Fridman podcast (starts at 1h20m): https://www.youtube.com/watch?v=HUkBz-cdB-k&t=1h20m10s
I know nothing of mathematics but found it fascinating, especially the idea that if outside information your proof depends on changes, you can have the Lean compiler figure out which lines of your proof need updating (instead of having to go over every line yourself, which can take days or more).
As a former professional mathematician: the benefits mentioned in the article (click-through definitions and statements, analyzing meta trends, version control, ...) do not seem particularly valuable.
The reason to formalize mathematics is to automate mathematical proofs and the production of mathematical theory.
Is there, somewhere, a list of theorems that were considered proved and true for a while, but after attempts at formalization the proof was invalidated and the theorem is now unknown or disproved?
https://en.wikipedia.org/wiki/List_of_incomplete_proofs?wpro...
Ask HN: What’s the single best resource for learning Lean (beyond the official docs)?
Assuming you have some math background but no Lean background: https://adam.math.hhu.de/#/g/leanprover-community/nng4
I’m not a mathematician, so could someone explain the difference in usage between Lean and Coq? On a surface level my understanding is that both are computer augmented ways to formalize mathematics. Why use one over the other? Why was Lean developed when Coq already existed?
I think (most) mathematicians were not that interested in formal proof until quite recently (as opposed to computer scientists), and most of the interest in Lean has been self-reinforcing: there is a (relatively speaking) huge library of formally verified mathematics, so now basically anyone who cares about formal verification as a tool for mathematics is working in Lean. There are of course numerous technical differences, which you can read about if you google Coq vs Lean.
Rocq is ancient and has some longstanding UX problems. It's pleasant to get the chance to build a new code base.
This is kind of like asking: why write Clang when we already had GCC? Or why make Python if we already have Perl?
It's good to have some competition for these things. Rocq, I believe, felt the heat and has also been doing some good things in recent years.
Why do you keep saying Rocq when he asked about Coq? Are they the same thing?
Lean has a good library of formalized mathematics but lacks code extraction (you cannot generate a standalone program from the proofs it constructs). So it is more suitable for, and heavily used by, mathematicians proving theorems.
Coq has always focused on proving program correctness, so it sees lots of use by computer scientists. It also does code extraction, so after you prove a program correct in Coq you can generate a fast version of that program without the proof overhead.
Lean has a much better UX, to be frank. Rocq is fine, but if I were to start formalising today, I'd pick Lean.
I think the analogy between JavaScript and TypeScript is not 100% apt, because although JavaScript has some quirks in its design, it is fully consistent. My biggest issue with math is symbols that are reused to mean different things in different contexts. It makes maths more time-consuming to learn and makes it difficult to jump between different fields.
Personally, at times, I have struggled with the dual nature of mathematics: its extreme precision in meaning combined with vague and inconsistent use of symbols is challenging. It is especially frustrating when you are learning something new and symbols you thought you understood turn out to mean something else; it creates distrust towards maths itself.
Another reason to formalize math is that formalized proofs become training material for automated mathematics.
Ultimately we want all of the math literature to become training material, but that would likely require automated techniques for converting it to formalized proofs. This would be a back-and-forth thing that would build on itself.
Much of the argument is the same as for the initial push to formalize mathematics in the late 19th century. Formalisms allow for precision and help reduce errors, but the most important change was in how mathematicians were able to communicate, by creating a shared understanding.
Computerized mathematics is just another step in that direction.
IMHO it was always "computerized"; they just didn't have a computer. To me, the approaches used in the early 20th century look like people defining a simple VM and then writing programs that "execute" on that VM.
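That analogy can be made almost literal in a few lines of Lean (a toy propositional system of my own, nothing historical):

```lean
-- The "VM": formulas as data, and an inductive notion of derivation.
inductive Form where
  | var : Nat → Form
  | imp : Form → Form → Form

inductive Deriv : List Form → Form → Prop where
  | ax {Γ A}   : A ∈ Γ → Deriv Γ A                           -- use a hypothesis
  | mp {Γ A B} : Deriv Γ (.imp A B) → Deriv Γ A → Deriv Γ B  -- modus ponens

-- "Executing" the smallest possible program: from A → B and A, conclude B.
example (A B : Form) : Deriv [.imp A B, A] B :=
  .mp (.ax (by simp)) (.ax (by simp))
```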
> IMHO it was always "computerized"; they just didn't have a computer.
They had a whole lot of computers, actually. But back then the "computers" were actual people whose job was to do computations with pen and paper (and a few very primitive machines).
Lean is amazing for collaboration because anyone can contribute to a proof and have their work automatically verified.
> While Paulson focuses on the obvious benefit of finding potential errors in proofs as they are checked by a computer, I will discuss some other less obvious benefits of shifting to formal math or “doing math with computers”
From https://news.ycombinator.com/item?id=44214804 sort of re: Tao's Real Analysis formalisms:
> So, Lean isn't proven with HoTT either.