Last year I used LLMs to solve AoC, to see how they could keep up, to learn how to steer them, and to see how the open models would perform. When I talk about it, quite a few "programmers" get upset. Glad to see that Norvig is experimenting.
P.S.: anyone who gets upset that folks are experimenting with LLMs to generate code or solve AoC should have their programmer's card revoked.
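To be concrete about what "steering" meant in practice, the loop looked roughly like the sketch below, written here against the openai Python client; the model name, prompt wording, and retry budget are placeholders rather than what I actually ran.

    import re
    import subprocess
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def solve_puzzle(puzzle_text, example_input, expected,
                     model="gpt-4o", max_attempts=3):
        """Ask the model for a Python solution, run it on the worked example,
        and feed any mismatch back until it passes or we give up."""
        messages = [{"role": "user", "content":
                     "Write a Python program that reads the puzzle input from "
                     "stdin and prints the answer. Reply with one ```python``` "
                     "code block.\n\n" + puzzle_text}]
        for _ in range(max_attempts):
            reply = client.chat.completions.create(model=model, messages=messages)
            text = reply.choices[0].message.content
            match = re.search(r"```python\n(.*?)```", text, re.DOTALL)
            code = match.group(1) if match else text
            run = subprocess.run(["python", "-c", code], input=example_input,
                                 capture_output=True, text=True, timeout=60)
            if run.stdout.strip() == expected:
                return code  # passes the example; run it on the real input next
            # Steering step: show the model what went wrong and ask for a fix.
            messages += [{"role": "assistant", "content": text},
                         {"role": "user", "content":
                          f"That printed {run.stdout.strip()!r} on the example "
                          f"input (stderr: {run.stderr.strip()!r}), but the "
                          f"expected answer is {expected}. Please fix the program."}]
        return None  # could not steer it to a correct solution within the budget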
Did you make it clear that you weren't submitting to the leaderboards? I of course assume you weren't.
Most of the hubbub I saw was because AI code making it into those leaderboards very clearly violates the spirit of competition.
It's quite foolish of you to make that assumption. To begin with, given my timezone and when I could actually get to it, I was starting 12 hours after each puzzle's release, so the leaderboard was useless to me. I was writing about it openly on the internet, pointing out the hurdles I faced and how much effort it took to get the LLM to generate correct solutions.
> it's quite foolish of you to make that assumption
To have assumed that you didn't submit to the leaderboard? I'm trying to give you the benefit of the doubt on not ruining the competition for everyone else. You can do whatever you want on your own time.
If you were submitting, despite AoC (as I recall) specifically saying not to submit AI solutions, then you know exactly why people were upset. If you weren't, then you're being aggressive toward me for no reason.
I enjoy reading Peter’s ‘Python studies’ and was surprised to see here a comparison of different LLMs for solving Advent of Code problems, but the linked article is pretty cool.
Peter and a friend of his wrote an article over a year ago discussing whether or not LLMs are already AGI, and after re-reading that article my opinion moved a bit toward: LLMs are AGI in broad digital domains. I still need to see embodied AI in robots and physical devices before I think we are 100% of the way there. Still, I apply Gemini and a lot of open-weight models in two ways: 1. to coding problems, and 2. after I read or watch material on philosophy, I almost always ask Gemini for a summary, references, and a short discussion based on what Gemini knows about me.
> I started with the Gemini 3 Pro Fast model ...
Quiet product announcement.
Odd that it came up with
when is fine.
I'm sorry, but what's the point here? It's not for a job, or to improve an LLM, or to do something useful per se; it's just to "enjoy" how version X or Y of an LLM can solve problems.
I don't want to sound grumpy, but it doesn't achieve anything; this is just a showcase of how a "calculator with a small probability of failure can succeed".
Move on, do something useful, don't stop being amazed by AI, but please stop throwing it in my face.
Author is Peter Norvig, who has definitely done “something useful” when it comes to AI. He’s earned some time for play.
> I don't want to sound grumpy
Well, you didn't try very hard :)
If you think that every model behaves the same way in terms of programming, you don't have a lot of experience with them.
I find it useful to see how other people use all kinds of tools. AI is no different.
It's like getting upset when someone compares what it's like to use Bun vs Deno vs Node.
So, how about you move on, do something useful, don't stop being annoyed by AI, but please stop throwing your opinion in anyone's face.
One could argue that pointing out the pointlessness of LLM hype is in fact useful, while producing that same hype is not.
You are conflating "hype" with any positive outlook. It has some uses and some people are using it. That's not "hype". It is exhausting to see it everywhere so I sympathize.
I will stop when the AI bubble bursts and people stop throwing statistical models in my face every day. It is literally everywhere I look; I did not ask for it, even when I filter my inputs.
IMHO it would be nice to have an AI summarizer/filter (see the irony?) for tech news (HN maybe?) that filters out everything about AI, LLMs, and company.
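The non-AI half of that is easy to sketch: the snippet below pulls the front page from the public Hacker News Firebase API and drops anything whose title matches a keyword blocklist. The blocklist itself is just a guess at what would need to go; tune it to taste.

    import re
    import requests

    HN = "https://hacker-news.firebaseio.com/v0"
    # Hypothetical blocklist of AI-related terms; adjust as needed.
    BLOCKLIST = re.compile(r"\b(AI|LLMs?|GPT-?\w*|ChatGPT|Gemini|Claude|OpenAI)\b",
                           re.IGNORECASE)

    def front_page_without_ai(limit=30):
        """Return top stories whose titles don't mention blocklisted terms."""
        ids = requests.get(f"{HN}/topstories.json", timeout=10).json()[:limit]
        items = (requests.get(f"{HN}/item/{i}.json", timeout=10).json() for i in ids)
        return [s for s in items if s and not BLOCKLIST.search(s.get("title", ""))]

    if __name__ == "__main__":
        for story in front_page_without_ai():
            print(f"{story.get('score', 0):>4}  {story['title']}")

Filtering on titles alone will miss plenty, of course, which is where the summarizer part (and the irony) would come in.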