Hi there! I am jiachen shen, creator of Litex.
I feel really lucky that Litex has drawn so much attention from you guys! I have always liked the geek culture of HN, and have absolutely no idea how such a random guy from a random background could rush into the top 10 on Hacker News.
Litex gets its name from Lisp and LaTeX. I want to make Litex as elegant and deep as Lisp, and at the same time as pragmatic as LaTeX.
Many people have raised questions and suggestions about Litex, and I’m truly grateful. Since I’m developing the Litex core on my own, a lot of the documentation is still incomplete — I’ll try my best to improve it soon! All of your suggestions are really helpful. Thank you so much!
Lisp is pretty practical even though not many people use it anymore.
"Never believe quote attributions given on the internet" - Abraham Lincoln
The quote from the README seems indeed to be falsely attributed to da Vinci. The quote in question:
> Simplicity is the ultimate sophistication. - Leonardo da Vinci
https://checkyourfact.com/2019/07/19/fact-check-leonardo-da-...
https://quoteinvestigator.com/2015/04/02/simple/
I'm not sure why people don't spend two minutes looking up a quote before sharing it. It feels like most people have zero care about quality or polish today; everything is half-assed.
Thank you captain! Your observation is pretty interesting! I will fix that after I have more information!
“Arrrg! I’ve seen that quote everywhere! I must put a stop to this.” - John Wilkes Booth
“I believe John Wilkes Booth to be innocent!” - Alexander Hamilton.
LiteX has been a digital SoC IP library for many years.
https://github.com/enjoy-digital/litex
[flagged]
That’s a lot of words and anger for such a small thing.
It's a low brow dismissal of someone's work? How else should I react?
this might shed some light on what's wrong: https://0x0.st/KB-b.txt
where can i generate that analysis?
> Hacker News users don’t see you...
Lolol I sure hope no one on HN sees me as anything (otherwise it's past due for me to dump this handle).
Litex is a simple, intuitive, and open-source formal language for coding reasoning (Star the repo! https://github.com/litexlang/golitex). It ensures every step of your reasoning is correct, and is actually the first reasoning formal language (or formal language for short) that can be learned by anyone in 1–2 hours, even without a math or programming background.
Making Litex intuitive to both humans and AI is the mission of Litex. That is how Litex scales formal reasoning: making it accessible to more people, applicable to more complex problems, and usable by large-scale AI systems.
The comparison between Litex and Lean is on our website (https://litexlang.com). There is also a straightforward tutorial there that you do not want to miss.
Contact me if you are interested! I really hope we can scale formal reasoning in the AI era together!
The website tells me it's simple over and over but not what it is. What're the semantics? Which mathematical system is this? What can it prove?
Thank you Jon, I will put the semantics and the underlying mathematical system online soon! Just give me some time!
> Even Kids can formalize the multivariate equation in Litex in 2 minutes, while it [takes] an experienced expert hours of work in Lean 4.
Well, I propose an alternative proof in lean4:

import Mathlib.Tactic

example (x y : ℝ)
    (h₁ : 2 * x + 3 * y = 10)
    (h₂ : 4 * x + 5 * y = 14)
    : x = -4 ∧ y = 6 := by
  have hy : y = 6 := by
    linear_combination 2 * h₁ - h₂
  have hx : x = -4 := by
    -- you'd think h₁ - 3 * hy would work, but it won't
    linear_combination 1/2 * h₁ - 3/2 * hy
  exact ⟨hx, hy⟩

One thing I like about the lean proof, as opposed to the litex proof, is that it specifies why the steps are correct. If litex's strategy is "you describe the steps you want to take, and litex will automatically figure out why they're correct", how are you supposed to do any nontrivial proofs?
> how are you supposed to do any nontrivial proofs?
One should also take a look at Isabelle/HOL's AFP here. You can get very far with Metis et al., but it is very inefficient computationally.
Especially when proofs get larger and/or layer on abstractions (proving something nontrivial likely involves building on existing algorithms etc.) the ability to make proofs efficient to verify is important.
```
If litex's strategy is "you describe the steps you want to take, and litex will automatically figure out why they're correct", how are you supposed to do any nontrivial proofs?
```
HAHA, what I am saying here is: each statement you write is automatically checked by Litex against its known specific facts and universal facts. Since any math proof, no matter how complex or simple, can be divided into many small steps, and each small step can be verified in that way, I guess it is fair to say "automatically figure out why they're correct" to a non-mathematical person when introducing Litex.
How does it scale for an NxN system? Let's say N=10, which is the maximum I'd write by hand (in exchange for a lot of money).
I took a look at the example and it's very intuitive. I'm trying to guess which heuristics it is using. Does it try to find exact matches of whatever is inside the parentheses?
@GP?: Doesn't Lean have a "please fill the blanks" mode for short proof?
> @GP?: Doesn't Lean have a "please fill the blanks" mode for short proof?
Yes, there are several depending on what you want to do. The catchall is `simp` (short for 'simplify').
Why do we need more than one "do what I mean" command? (Or more than zero "do what I say" commands?) Because it's impossible to know what you mean. If you try to use a lot of `simp`, you'll notice that you often have to tell it what to do.
Note that what we really want for this problem is a linear algebra package that can do row reduction, not a formal prover. This problem is all arithmetic.
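(For what it's worth, once Mathlib is imported this particular system also falls to a single decision procedure; a minimal sketch, assuming `Mathlib.Tactic` is available:)

import Mathlib.Tactic

-- `linarith` searches for a suitable linear combination of the hypotheses
-- on its own, so both components of the conjunction close without any
-- manual elimination steps.
example (x y : ℝ)
    (h₁ : 2 * x + 3 * y = 10)
    (h₂ : 4 * x + 5 * y = 14)
    : x = -4 ∧ y = 6 :=
  ⟨by linarith, by linarith⟩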
Thank you thau! Your example is pretty interesting! I avoided using any advanced Mathlib tactics to make the comparison fairer. We are comparing Lean and Litex under conditions where they don't rely too much on external packages, which makes the comparison a bit fairer (Lean does have a very rich set of libraries, but building libraries is itself a challenge; Litex really needs to learn from Lean how to build a successful library!). (After all, Litex could also abstract all the proofs here and give them the name linear_combination, right?)
> Your example is pretty interesting! I avoid using any advanved Mathlib tactic to make the comparison fairer.
I learned about the `linear_combination` tactic from your example. Other than that, I use `have` and `exact`, which (a) are not advanced, and (b) are also used in your example.
Before that, my first attempt at the lean proof looked like this:

example (x y : ℝ)
    (h₁ : 2 * x + 3 * y = 10)
    (h₂ : 4 * x + 5 * y = 14)
    : x = -4 ∧ y = 6 := by
  -- double h₁ to cancel the x term
  have h₃ : 2 * (2 * x + 3 * y) = 2 * 10 := by rw [h₁]
  conv at h₃ => ring_nf -- "ring normal form"
  -- subtract h₂ from h₃
  have h₄ : (x * 4 + y * 6) - (4 * x + 5 * y) = 20 - 14 := by rw [h₂, h₃]
  conv at h₄ => ring_nf
  conv at h₁ =>
    -- substitute y = 6 into h₁
    rw [h₄]
    ring_nf
  -- solve for x
  have h₅ : ((18 + x * 2) - 18) / 2 = (10 - 18) / 2 := by rw [h₁]
  conv at h₅ => ring_nf
  apply And.intro h₅ h₄

> We are comparing Lean and Litex under conditions where they don't rely too much on external packages

This proof does have the advantage of not needing to import Mathlib.Tactic. Although again, that's something your proof does.
So Litex is not based on Type Theory I gather. How are proofs represented and checked?
This looks potentially interesting. The "cheat sheet" seems the most useful document listed but it lost me there:
> Use `have` to declare an object with checking its existence.
an object with what?
With checking-its-existence. With checking of its existence. With existence checking. While checking its existence. OK it could be better written.
haha, you are right bro!
The tutorial explains the language in much more detail: https://litexlang.com/doc/Tutorial/Introduction
have is used to ensure the existence of the object you define. For example, you do not want to declare a new object when it is from an empty set!
How did you formally test this claim: "Even Kids can formalize the multivariate equation in Litex in 2 minutes"
Yeah, I don't think the authors _actually_ mean that. I think English isn't their first language.
We should try to be charitable (but with a healthy amount of skepticism!); it's possible they meant "Even a child [with a good understanding of Litex] could [mechanically] formalize this multivariate equation in Litex in 2 minutes [as opposed to remembering and writing Lean 4 syntax]"
HAHA, thank you fallat, I guess you are right!
Reminded me of that article about hoobastanking the Snarfus from a couple days back[1]. And xkcd 2501, of course.
[1]https://anniemueller.com/posts/how-i-a-non-developer-read-th...
Kids hardly know what a multivariate equation is. Unless you use "kid" to denote 20-year old college students enrolled in a math program which some people do.
The other claim is doubtful too:
> while it require an experienced expert hours of work in Lean 4.
No, it doesn't. If you have an actual expert, it only takes a few minutes.
And besides, isn't this exactly what an artificial intelligence would solve? Take some complex system and derive something from it. That's exactly what intelligence is about. But LLMs can't deal with a complex but very logical (by definition) and unambiguous system like Lean, so we need to dumb it down.
Turns out, LLMs are not actually intelligent! We should stop calling them that. Unfortunately, there are too many folks in our industry following this hyped-up terminology. It's delusional.
Note that I'm not saying LLMs are useless. They are very useful for many applications. But they are not intelligent.
I studied 2x2 linear equation systems in high school at 14 (13?) years old. It was a technical school with more math and physics, and later specialization (like chemistry, electronics, building, ...). I think in a normal school they study that at 16.
We also teach 2x2 systems to 18-year-olds in the first year of university, for architects, medics, and other degrees that don't need a huge amount of math. (Other degrees like engineering or physics get 4x4 or bigger systems that definitely need the Gauss method.)
And if you ask one of those medics 5 years later to solve one, the response might make you depressed. (Or just 4 weeks after their maths exam.)
Unfortunately, whenever you try to exclude LLMs from the holy church of intelligence based on what they can't do, you end up excluding a whole lot of humans too.
That's underestimating human intelligence.
Even low-IQ humans can in principle learn how to use Lean to represent a multivariate system. It might take a while, but in principle their brain is capable of that feat. In contrast, no matter how long I sit down with ChatGPT or Gemini or whatnot, it won't be able to. Because they are not intelligent.
It's a great achievement of the AI hype that the burden of proof has been reversed. Here I am, having to defend my claim that they are not intelligent. But the burden of proof should be on those claiming intelligence. The claim that earth is a sphere was extraordinary and needed convincing evidence. The claim that species have evolved through evolution was. But the claim that LLMs are intelligent is so self-evident that rejecting the idea needs evidence? That's upside-down!
> Here I am, having to defend my claim that they are not intelligent.
It's because all you've done is make a claim without any evidence. Someone pointed out the challenge that most claims that LLMs are not intelligent can't produce a criterion that an LLM can't also meet.
But instead of submitting any evidence to support your claim, you descended into hyperbole about how hard done by you are, being expected to support your claims.
In science, it's okay to say we don't know. The amount of disagreement - even amongst smart people - about whether LLMs are intelligent or not suggests to me that we just don't have universally accepted research and definitions that are tight enough to decide it.
But you're talking like you _know_ the answer for sure, so much so that you don't need to support it with evidence or credentials, because those who disagree are obviously just poor victims of the AI hype machine.
Please make sure you pass your knowledge of your LLM discoveries onto the scientific community, you could change the world!
Did you read my post? The claim is "LLMs are intelligent". And instead of requiring evidence for that, apparently most folks (including you) are fine with just accepting that claim, and require evidence only if somebody questions it. That's what I'm doing.
It's like a religion. "God exists" is the claim. Nobody needs to provide evidence that this is not the case. "LLMs are intelligent" is the claim. Nobody needs to provide evidence against that. In either case, the burden of proof is with the one making the claim.
> In science, it's okay to say we don't know
But that's not what's happening. LLMs are called "AI". You know what the I stands for, right? It's not "artificial we-don't-know-if-intelligent".
No, you're not questioning it; you're making statements against it, without evidence, which is just as useless as the statement without evidence.
Like I suggested, and like the person you responded to suggested: when science tries to prove or disprove LLM intelligence, it generally descends into disagreements about definition or evidence (neither of which you provided).
The reason why no evidence was provided for the original "LLMs are intelligent" claim is that - if you read through this thread - you were the first to make that claim, and the counter-claim.
> But that's not what's happening. LLMs are called "AI". You know what the I stands for, right? It's not "artificial we-don't-know-if-intelligent".
I don't know what you want me to say here - if you want to continue acting like there's some widely accepted and agreed on definition of intelligence that everyone who isn't you is an idiot for not knowing, then carry on.
I don't have a reliable definition for intelligence. Like I said, if you do, please share your findings with science; you could settle some fairly big debates and change the world in a meaningful way.
My claims:
1. LLMs are widely called "intelligent". Evidence for my claim: The term "artificial intelligence" that is used everywhere. It has its own TLD.
2. There is no evidence that this terminology is applicable. Questioning it faces some variant of "well do you have evidence to the contrary?". Evidence for my claim: This thread.
You are welcome to disprove my claims, as in the scientific spirit that you say you uphold.
> The term "artificial intelligence" that is used everywhere. It has its own TLD.
That's the country code TLD for Anguilla: https://en.wikipedia.org/wiki/.ai
Sure. The top 100 domains with that TLD still have little to do with Anguilla.
the way it goes is that a device/program of some sort displays a broad number of behaviours that previously was believed to be a future sign of artificial intelligence (e.g. somewhat coherent text, Turing test, giving the impression of following instructions and arriving at the correct results, etc).
some people claim this is artificial intelligence of a lower quality than humans, and these people expect that such mechanisms will eventually match and then potentially surpass humans.
then there's another crowd coming along and claiming no, this isn't intelligence at all, for example it can't tie its shoelaces.
my point was that every time you try to say that no this can't be what intelligence means, it needs to do X, I can find a human who can't do X, no matter how many years you might try to coach them. (for example, I will never be a musician/composer. I simply lack the gene.)
The retort is always "oh but in principle a human could do this". well, maybe next year's LLM will do it in practice, not just in principle, for all I know.
As they say, person who says it can't be done should not stop person doing it.
Heavier than air flight was once thought to be impossible. As long as you don't have a solid mathematical theorem that says only carbon replicators born from sexual intercourse can be intelligent, I expect some day silicon devices will do everything carbon creatures can do and more.
> my point was that every time you try to say that no this can't be what intelligence means, it needs to do X, I can find a human who can't do X,
Indeed, the point you are making is reasonable. But I'm trying to say that the premise is wrong. Nobody should be expected to come up with a reason why it is not intelligence. We should expect to be presented with evidence that it is intelligence. Absent that, the null hypothesis is that it isn't, just as, uncontroversially, no computer program before it was.
I'm sure you already got my point, apologies for repeating it, but some clarification to clearly carve out our points may not hurt.
>Nobody should be expected to come up with a reason why it is not intelligence.
I'm not asking anybody to come up with a reason why it's not intelligence. I'm telling people they're wrong when they do try to justify calling it not intelligent. If you want to gatekeep a word, you should at least try to define it and then stick to the definition.
It is intelligence in the sense that if somebody had described it in 2010, we would have said yes, that's intelligence and it's hundreds of years away. It isn't intelligence in the sense that it's now here and we've found holes in the story.
Intelligence is so poorly defined that it's an ever receding finish line that somehow we're supposed to cross before we can call the device intelligent.
As Dennett said, it's like magic. Magic that is possible is just tricks. Real magic is that which is impossible.
IDK, they look intelligent, like the world looks flat.
As opposed to most humans? Have you tried reasoning with somebody "just trying to do my job sir"?
> Even low-IQ humans can in principle learn how to use Lean to represent a multivariate system.
That's an article of faith. In principle, elephants can fly at least once.
Only if you're pedantic about it. I find I can arrive at all sorts of absurd conclusions like that by being extremely pedantic.
It's almost as if thinking carefully about words leads to the realisation that words are approximations that are meant to describe, not prescribe.
If "intelligence" describes LLMs then it isn't doing a very good job.
Quite flawed, but inspired. This stuff popping up is interesting. I guess it is due to Lean reaching people that would not be aware of formal reasoning on a computer before.
Thank you auggierose. Your comment is by far the best description of the stage Litex is at now: very flawed, but very different from other formal languages. I guess it is because Litex is closer to reasoning (or math in general) than to programming.
What's a formal language?
The only definition I know of is https://en.wikipedia.org/wiki/Formal_language. I also think that is the commonly accepted definition.
Taking that as the definition, this definitely is not the first formal language learnable in 1-2 hours. I would think, for example, that the language consisting of just the empty string is older and learnable in 1-2 hours.
They probably mean something like “formal language used for writing mathematical proofs that is (about) as powerful as Lean”, though.
You can write "formal" proofs in this language for mathematical theorems. They are "formal" because they are so detailed that they are machine checkable. That's in contrast to the "informal" pen and paper proofs that people normally produce.
Besides pure maths you can also use that to verify the correctness of software. E.g. say you implemented a shortest path algorithm:
shortestPath(graph, start, end)
You could prove something like: for all `graph` and each `path` in `graph` from `start` to `end`, the length of `shortestPath(graph, start, end)` is at most the length of `path`.
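(To make that concrete, here is a minimal Lean-style sketch of such a specification. `Graph`, `Path`, `isPath`, `pathLength`, and `shortestPath` are hypothetical placeholders for definitions that would live elsewhere, not a real library:)

-- hypothetical abstract interface; the concrete definitions are assumed elsewhere
variable {Graph Path : Type}
variable (pathLength : Path → Nat)
variable (isPath : Graph → Nat → Nat → Path → Prop)
variable (shortestPath : Graph → Nat → Nat → Path)

-- spec: the returned path is no longer than any path between the same endpoints
def ShortestPathSpec : Prop :=
  ∀ (g : Graph) (s e : Nat) (p : Path),
    isPath g s e p → pathLength (shortestPath g s e) ≤ pathLength p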
The title seems to be misusing the term “formal language”: https://en.wikipedia.org/wiki/Formal_language
The simplest formal language is the empty set, which I would argue doesn’t take hours to learn.
So “formal language” is almost certainly not what is meant here, but it’s not clear what else exactly is meant either.
Can Litex and Lean be transpiled?
Working on that bro :)
I got kind of lost at this part of the tutorial (https://litexlang.com/doc/Tutorial/Know):
know forall x N: x >= 47 => x >= 17
let x N: x = 47
x >= 17

How does that assumption in the first line have any effect? Surely the underlying theory of naturals should be enough to derive 47 >= 17?
And in general I am very skeptical of the claim that Litex "can be learned by anyone in 1–2 hours". Even just the difference between `have`/`let`/`know` would take a while to master. The syntax for functions is not at all intuitive (and understandably so!). & so on. The trivial subset of the language used in the README may be easy to learn but a) it would not get you very far b) most likely every other related toolbox (Lean, HOL, etc) has a similar "trivial" fragment.
But, always good to see effort in this problem space!
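(A rough Lean analogue of how I read the `know` distinction; this is my own sketch, not Litex semantics: `know` behaves like an axiom that is assumed rather than checked, while an ordinary claim has to be justified from what is already known:)

-- like `know`: the fact is simply assumed, nothing is verified
axiom ge47_imp_ge17 : ∀ x : ℕ, x ≥ 47 → x ≥ 17

-- like the final claim `x >= 17`: it must be derived from known facts
example : (47 : ℕ) ≥ 17 :=
  ge47_imp_ge17 47 (Nat.le_refl 47)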
The first line is essential, because Litex does not implement transitivity of >= in its kernel and one has to formalize it:

know @larger_equal_is_transitive(x, y, z R):
    x >= y
    y >= z
    =>:
        x >= z
Thank you for clarifying, but don't you think this puts a rather big dent in the claim that Litex is "intuitive" and "can be learned by anyone in 1–2 hours"? I think the average user would expect the natural/real numbers to come equipped with theorems of this kind.
For instance, the tutorial says that "The daily properties" (whatever this means) of "+, -, *, /, ^, %" are "already in the Litex kernel". What about associativity of * and +, or distribution of * over +? Are these part of the "daily properties"? And if so, why did transitivity of >= not make the cut?
Just trying to understand the design choices here, this is very interesting.
know @self_defined_axiom_larger_equal_is_transitive(x, y, z R):
    x >= y
    y >= z
    =>:
        x >= z
Since transitivity of >= is not implemented, one has to invoke this self_defined_axiom_larger_equal_is_transitive to establish x >= 17 here, so
```
know forall x N: x >= 47 => x >= 17
```
is essential
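(For contrast, a minimal Lean sketch: its built-in linear-arithmetic automation discharges the same step without any user-stated transitivity axiom:)

-- `omega` decides linear arithmetic over ℕ and ℤ, so no transitivity axiom is needed
example (x : ℕ) (h : x ≥ 47) : x ≥ 17 := by omega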
currently reading through the tutorial. don't have much experience with coq, lean and friends, but this looks like a nice language to get started with formal proofs.
fyi: they finally renamed Coq. It's called Rocq now.
Guess they had to change the logo too? Just because evangelical anglophone users couldn't get past the name sounding like "cock" or what?
Yes, but I don't think it has anything to do with evangelicalism. It's just like Uranus: you can't talk about it without it always being a bit unserious.
Thank you aktuel!
This github README is written by an LLM.
It is more likely that the author is not a native English speaker (the project author seems to be Chinese) and may have used some tool to help with the translation.
I have also been accused of being a Markov chain, before 2022. I communicate in English only for work and social media, so my writing may sometimes seem strange to native speakers.
haha, no, it is not. Visit my git commits and you can see the readme has been updated ~1000 times! I really want my readme to look good!
I doubt it? And if it is, honestly best LLM readme I've seen. What makes you think so?
>Litex(website) is a simple, intuitive, and open-source formal language for coding reasoning (Star the repo!). It ensures every step of your reasoning is correct, and is actually the first reasoning formal language (or formal language for short) that can be learned by anyone in 1–2 hours, even without math or programming background.
>Making Litex intuitive to both humans and AI is Litex's core mission. This is how Litex scales formal reasoning: by making it accessible to more people, applicable to more complex problems, and usable by large-scale AI systems. These benefits stem from Litex's potential to lower the entrance barrier by 10x and reduce the cost of constructing formalized proofs by 10x, making formal reasoning as natural as writing.
>Even Kids can formalize the multivariate equation in Litex in 2 minutes, while it require an experienced expert hours of work in Lean 4. It is a typical example of how Litex lowers the entrance barrier by 10x, lowers the cost of constructing formalized proofs by 10x, making formalization as easy and fast as natural writing. No foreign keywords, no twisted syntax, or complex semantics. Just plain reasoning.
Boilerplate and constant repetition
Humans are perfectly capable of doing that on their own. The random parentheticals and the sentence structures are very human to me.
No LLM would write that without prompting it to be illiterate in English.