120 comments

  • ofirpress 20 hours ago

    [I'm on the SWE-bench team] Multiple people have looked into this, for example right in that thread: https://github.com/SWE-bench/SWE-bench/issues/465#issuecomme...

    This issue affected a tiny fraction of existing agents in a tiny fraction of their runs, and we've now issued a fix.

    This is a natural part of running a benchmark, I'm sure tiny things like this will keep on getting discovered and we'll keep on fixing them. This doesn't change the overall picture or trends at all.

    • comex 19 hours ago

      The comment you link to says that "we only performed a quick preliminary search" and "We do not have a method for automatically checking existing trajectories." In other words, it can't confirm that the issue only "affected a tiny fraction of existing agents in a tiny fraction of their runs" as you say. Are you saying that you have since separately confirmed this?

      Edit: That said, I’m willing to believe based on the information in the thread that this most likely only affects a tiny fraction of runs.

      • typpilol 18 hours ago

        Ya what he links directly contradicts what he's saying lol

    • _cs2017_ 16 hours ago

      Even if this bug never existed, models can still see lookahead commits during pretraining. Do we expect this bug to have a greater impact than the pretraining leakage?

      Obviously, having something available at test time is more valuable than having it buried somewhere in the pretraining mixture. But in pretraining it presumably happens with high probability (why wouldn't coding models pretrain on all of GitHub?), while at test time it apparently happened only very occasionally?

    • bflesch 18 hours ago

      > This is a natural part of running a benchmark, I'm sure tiny things like this will keep on getting discovered and we'll keep on fixing them.

      You're all extremely clever and I can't seem to understand how you missed thinking about such a simple edge case. It's like building a chroot and then allowing `cd ..` to break out of it. What other maybe extremely basic edge cases were missed?

      > This doesn't change the overall picture or trends at all.

      Outsiders without financial benefits from the current AI hype might have a different picture. And I'm a bit fed up with fake AI productivity promises enshittifying nearly all user-facing software that my clients and I use, bundled with hefty price hikes from Microsoft and the like in order to pay for their "investments".

      • cjsaltlake 18 hours ago

        I'm also on the SWE-bench team. This was simply a classic bug. We had code before that we believed was sufficient to hide / remove future GitHub history and it turns out it was not. We've patched it.
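
        For the curious, the scrubbing has roughly this shape. This is a sketch of the general idea (not our actual patch), assuming a local checkout and the task's base commit SHA:

            # sketch only: keep history up to base_commit, drop everything "from the future"
            import subprocess

            def git(repo_dir, *args):
                return subprocess.run(
                    ["git", *args], cwd=repo_dir, check=True, capture_output=True, text=True
                ).stdout

            def scrub_future_history(repo_dir: str, base_commit: str) -> None:
                # pin the working branch to the task's base commit
                git(repo_dir, "checkout", "-B", "main", base_commit)
                # delete every other ref (branches, remote-tracking refs, tags) that could point past it
                for ref in git(repo_dir, "for-each-ref", "--format=%(refname)",
                               "refs/heads", "refs/remotes", "refs/tags").split():
                    if ref != "refs/heads/main":
                        git(repo_dir, "update-ref", "--no-deref", "-d", ref)
                # expire reflogs and prune so `git log --all` can no longer reach future commits
                git(repo_dir, "reflog", "expire", "--expire=now", "--all")
                git(repo_dir, "gc", "--prune=now", "--aggressive")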

        • numbsafari 5 hours ago

          Your classic bug is being used as justification to destroy the careers and lives of tens of thousands of people. Read the room.

      • lieret 16 hours ago

        [Also on the SWE-bench team] Part of the reason why this didn't surface earlier is that it only seems to affect more recent models, maybe as a result of reward hacking during post-training. We're currently working on making trajectories easier to access for everyone through a web tool (rather than having to download things from AWS) to get even more eyes on them. The interface will also include search & LM inspection tools to specifically look for anything that might qualify as cheating.

      • doctorpangloss 13 hours ago

        > other maybe extremely basic edge cases were missed?

        The whole testing enterprise is kind of stupid. Pray tell, if their stupid little benchmark said, "this niche little smaller model performs the best" would anyone listen to it? No.

        The thing that is fucked about benchmarks is that we only pay attention to the ones that match these vibes: "The latest models from the biggest companies should perform the best." That's why they are stupid. They could be the most brilliantly administered (they're not) and nail execution (they don't), but they'd still have to confirm the vibes.

        And listen these guys are serious academics, they're very smart people, but on the other hand, you know, I'm still right. The team doesn't have a secular, objective explanation for why nobody talks about benchmarks that don't confirm the biases of the public for what should perform well. Three people are commenting on just this post alone, but the stuff that I am saying: crickets.

        The only reasonable explanation for "why do people ignore [LLM tests that show that some non-giant corporation LLM is the best]?" trades on cultural and humanities stuff that is outside their expertise. They don't see that the stuff the humanities people are saying generalizes to what they do. That would be too inconvenient. Every testing system suffers from this bias anomaly; it's just easier to talk about it with something secular like LLMs compared to, say, tests of children.

        They hear biases and they're like, "something something, Algorithmic Justice League." Their brains turn off and they think that until someone gets in front of Congress and points a finger, nothing in the humanities applies to them. Wrong. The Princeton lab has probably met with a lot of humanities people, and there was a lot of head shaking and agreement, but it's not like, something that tells them that their whole enterprise doesn't make sense makes them stop and pursue anything else. It's just in one ear and out the other.

        Doing free tests for giant corporations to market their shit, and then toiling away in obscurity when the tests do not market huge corporations' shit: it doesn't make sense, period. But that's what they're doing.

        If you need a simple theory for how Big LLM performs so well on SWE-Bench, it's as simple as: well they've seen the questions by running them, obviously, and someone has also tested the questions in their own personal chatbot sessions sometime in the past, and these are online systems, and OpenAI, Anthropic and Google run ETL pipelines that paraphrase user data for salient inputs to train on, so of course, they've all been trained on the test set. In reality, if these things were so fucking good as SWE Bench said, they'd be making a bajillion bucks making all this enterprise software, or they'd show even 1 novel math discovery, or whatever. But they do not have something as powerful as the benchmarks say, so that doesn't happen.

      • mustaphah 18 hours ago

        > You're all extremely clever and I can't seem to understand how you missed thinking about such a simple edge case [...]

        I wouldn't be surprised if they left this loophole on purpose to give some (their?) agents extra leverage.

        Edit #1: I didn't mean to imply bad intent; just thinking out loud.

        Edit #2: Please, downvote responsibly. I deserve every one. https://www.youtube.com/watch?v=0FHEeG_uq5Y

        • gchamonlive 17 hours ago

          > I didn't mean to imply bad intent

          > I wouldn't be surprised if they left this loophole on purpose

          You didn't imply bad intent, you outright suggested it.

          • coldtea 17 hours ago

            He means he doesn't say it was necessarily bad intent, but mentions it as a possibility ("thinking out loud").

          • mustaphah 17 hours ago

            I could've phrased it better.

            • gchamonlive 17 hours ago

              You could rewrite it a thousand times; if the underlying idea is the same, suggesting something you don't know to be true, the outcome would be the same. Or did you mean something else? What was your intention with the message?

              • mustaphah 16 hours ago

                I meant it as a hint for anyone inclined to dig deeper. It's a possibility rather than something we can confidently dismiss.

                • gchamonlive 16 hours ago

                  If it's a possibility and you don't want to dig deeper, it's better to sit it out and not comment at all, lest you risk defamation.

                  Thinking out loud also doesn't make defamation acceptable.

                  • TheDong 12 hours ago

                    It's fine, this is an American site so JAQing is in fact safe under free speech.

                    You're welcome to ask "will no one rid me of this meddlesome priest" with no fear

                    • gchamonlive 6 hours ago

                      And I'm protected under free speech to try to educate people about good manners, so it's fine too.

        • faangguyindia 11 hours ago

          Never attribute to malice that which can be attributed to incompetence. Then again, this has been exploited plenty of times by some really smart folk to get what they want.

        • cjsaltlake 18 hours ago

          We absolutely did not.

          • coldtea 17 hours ago

            Of course that's what a team that did it on purpose would also say :)

    • enum 16 hours ago

      SGTM. The transparency is good.

    • franktankbank 18 hours ago

      #tiny

    • segmondy 18 hours ago

      Reward hacking is a thing, and it's also a hint of the models' intelligence. We will fix this one, and the models will find a different way to reward hack in the future. "Cheating" is a sign of intelligence.

      • bflesch 18 hours ago

        I love the "cheating is a sign of intelligence" sound bite you provided. When AI engineers cheat we should applaud their intelligence and their lack of ethics.

        "Cheating (biology), a metaphor used in behavioral ecology to describe organisms that receive a benefit at the cost of other organisms" [1]

        The whole planet gets its Microsoft license fees jacked up so Microsoft can pay OpenAI, who in turn pays NVIDIA, while nontechnical decision makers slurp up the faked benchmarks and AI promises.

        [1] https://en.wikipedia.org/wiki/Cheating_(disambiguation)

        • segmondy 16 hours ago

          Would it have been better if I'd called it a "shortcut" instead of cheating? All shortcuts are called cheating until people decide on their fairness. The AI was given a task to fix a bug, and it figured out that looking at other PRs might yield a solution; if a human did that, it would clearly be called cheating. Does the AI know that it's cheating? Was it prompted to solve the task without cheating? If you give an AI access to the internet and quiz it, it will use info from the net to answer. Does that really skew its score? Is it cheating? Is it a sign of intelligence? Sure, I think all of those.

          https://en.wikipedia.org/wiki/Reward_hacking

        • giveita 17 hours ago

          Is it wrong? Aren't ethics and intelligence two different axes?

          • coldtea 17 hours ago

            Different, but probably not as orthogonal as one might think.

            E.g., cooperative ethics has been necessary for the further development of human populations' intelligence (and the culture, technology, material wealth, nutrition, etc. that led to further increases in intelligence).

            So a lack of ethics might be a sign of intelligence, but it's also a parasitic intelligence that benefits the individual and, beyond a certain level and spread, works to the detriment of the further evolutionary development of the species.

            • robcohen 16 hours ago

              Aren't there only two rules that all groups follow in the animal kingdom?

              - don't lie too often

              - don't kill members of the in group

              Seems like these would be required for any group to survive, which explains why they are universal. All other rules/ethics seem to be dependent on resource scarcity.

              • DrScientist 42 minutes ago

                Groups don't follow rules as such, group behaviours emerge from the interaction of individual behaviours.

                As to whether all groups display those rules - I suspect not - though it rather does depend on how you define a group: the definition of a group probably has some sort of collaboration built in (as opposed to a bunch of individuals that happen to live in the same geographic area).

              • coldtea 15 hours ago

                >All other rules/ethics seem to be dependent on resource scarcity

                That doesn't make the rest of the ethics (as a rule and mechanism) any less useful to help nurture the species and its intelligence.

                It just makes them not absolute but dynamic and condition-dependent. But given a condition (e.g. resource scarcity), the appropriate ethics retain the utility we're talking about.

  • piskov 21 hours ago

    Not “may be”: just look at how SWE-bench scores drop to single digits once it's in C#

    https://arxiv.org/html/2506.12286v3

    • fine_tune 21 hours ago

      I was going to argue "LLMs need code samples to do well on languages, and if we're honest C# is a language mostly held in private repos", but GitHub's 2024 report[0] says it's the 5th most used language (I'm too lazy to check if this report includes private repos, but I'll assume it doesn't).

      So kinda neat to see this paper!

      [0]https://github.blog/news-insights/octoverse/octoverse-2024/#...

      • CuriouslyC 19 hours ago

        The big labs are almost certainly using compiler/REPL output for generated code as an oracle for RL. I doubt they have C# in the mix.

        • tomjakubowski 18 hours ago

          Why do you doubt that? It's a widely used language. And there is even an open source C# REPL.

          • CuriouslyC 17 hours ago

            Because RL time is expensive, and I don't think C# offers enough of a payoff that it's worth bumping the batches of more popular languages to make room for it.

            • stingraycharles 16 hours ago

              But C# is a typical enterprise language whose users are willing to pay a lot of money for AI.

              We’re just guessing and the fact of the matter is that we don’t know what inputs they use for their models.

      • yieldcrv 20 hours ago

        5th most used language, based on private repos that the group making the report has exclusive direct access to see

        I don't see that contradicting your assumption

        • BoorishBears 19 hours ago

          "In this year’s Octoverse report, we study how public and open source activity on GitHub..."

    • stefan_ 21 hours ago

      So the "Verified" part of "SWE Bench Verified" means.. not "Verified" at all.

      I don't get it, who is so opposed to doing the bare minimum of manual work and checking what these models are doing? At least back in the day, grad students doing an easy meta-paper understood it meant doing some repetitive manual work. Now we get benchmarks by hype vendors who think they can use the thing they are benchmarking to... mark the bench.

      • yorwba 20 hours ago

        The "Verified" part of "SWE-Bench Verified" means that there was plain "SWE-Bench" before it, which had actually not been verified at all and included a lot of tasks that didn't really make sense for use as a benchmark: https://openai.com/index/introducing-swe-bench-verified/#ada...

        Data contamination stemming from the fact that it's based on already-solved problems in public repositories is a different issue that cannot be addressed by verifying the benchmark questions harder, but only by putting stricter limits on the model under test.

      • jsheard 20 hours ago

        > So the "Verified" part of "SWE Bench Verified" means.. not "Verified" at all.

        Seems on-brand for an LLM-related thing to claim that it has verified something without actually checking.

        • hhh 3 hours ago

          Verified has a completely different meaning here: it's that the questions have verified valid solutions.

        • geekymartian 20 hours ago

          that was my exact thought. how fitting

      • lieret 16 hours ago

      [On the SWE-bench team] As someone pointed out, SWE-bench Verified is a subset of tasks that were reviewed to be solvable (i.e., have enough context in the task description) as well as scored with unit tests that aren't so overly specific that they rule out valid solutions.

        We've all read & analyzed a large number of agent trajectories. This loophole seems to be something that popped up with the more recent models and we simply weren't aware of it.

      As discussed in the GitHub issue, there's a fix in the new version of the SWE-bench containers (currently being rolled out) that makes sure the relevant commits aren't available.

      Part of what makes SWE-bench a very interesting benchmark is the enormous action space that agents competing on it can take. However, that also means there are unexpected things happening when models get better. We're currently working on making all agent runs easily browsable on a website (rather than having to download from our AWS buckets) to get even more eyes on the trajectories. Thanks to everyone who uncovered this loophole.

      • sebzim4500 19 hours ago

      The "Verified" refers to the fact that the benchmark problems were verified by human experts to be reasonable.

        It says nothing about data contamination, which would depend on the model and would not be the fault of the benchmark.

      • blibble 18 hours ago

        > I don't get it, who is so opposed to doing the bare minimum of manual work and checking what these models are doing?

        I doubt any of the AI company employees are encouraged to go looking for cheating

    • teaearlgraycold 20 hours ago

      Personally I don't look at or respect LLM benchmarks at all. I've seen SOTA models fail in incredibly shocking ways even recently. Those moments immediately bring me out of the delusion that LLMs have thinking capacity or an understanding of code.

      • phatskat 9 hours ago

        > the delusion that LLMs have thinking capacity

        It’s such a strange delusion too, because it’s easy to get caught up in it for a moment, and then just as easy to remember “oh no, this thing is as smart as a bag of bricks”.

        What strikes me more is how these companies sell their AI offerings - we watched an OpenAI presentation about spec-driven development recently and the presenter was fairly, idk, fine enough if maybe a bit grandiose. But what really nagged me was the way he ended his presentation with something along the lines of “we’re excited to see AGI continue to grow” and it’s honestly A) depressing and B) downright fraud - there is no current AGI to speak of, it’s all just guessing the string of words that sound best together and this OpenAI rep _knows this_.

        They know that no amount of up-front spec writing will prevent bugs.

        They know that their LLM doesn’t “know” anything in an actually meaningful way.

        They know that calling what they have “AGI” is aspirational at best and lying at worst.

  • slacktivism123 20 hours ago

    Fascinating case showing how LLM promoters will happily take "verified" benchmarks at their word.

    It's easy to publish "$NEWMODEL received an X% bump in SWE-Bench Verified!!!!".

    Proper research means interrogating the traces, like these researchers did (the Gist shows Claude 4 Sonnet): https://gist.github.com/jacobkahn/bd77c69d34040a9e9b10d56baa...

    Commentary: https://x.com/bwasti/status/1963288443452051582, https://x.com/tmkadamcz/status/1963996138044096969

    • Workaccount2 18 hours ago

      The best benchmark is the community vibe in the weeks following a release.

      Claude benchmarks poorly but vibes well. Gemini benchmarks well and vibes well. Grok benchmarks well but vibes poorly.

      (yes I know you are gushing with anecdotes, the vibes are simply the approximate color of gray born from the countless black and white remarks.)

      • diggan 2 hours ago

        > The best benchmark is the community vibe in the weeks following a release.

        True, just be careful which community you use as a vibe-check. Most of the mainstream/big ones around AI and LLMs basically have influence campaigns run against them, are made of giant hive-minds that all think alike, and you need to carefully assess whether anything you're reading is true or not; votes tend to make it even worse.

      • wubrr 18 hours ago

        the vibes are just a collection of anecdotes

    • k__ 19 hours ago

      Yes, often you see huge gains in some benchmark, then the model is run through Aider's polyglot benchmark and doesn't even hit 60%.

  • zelphirkalt 6 hours ago

    Can anyone tell me what the difficulty is in simply not having .git at all during a benchmark run? Why not simply remove anything that is not the code the benchmark runs on? Or was it just simple oversight?

    • sigmoid10 6 hours ago

      Coding agents are so powerful because they are not just looking at static code. Looking through git histories is a valid method for humans to solve certain kinds of bugs, so it makes sense that models should be able to do that too. And realistically, a lot of modern production code will have git information, so it's not like this wouldn't be a common real-world application.

      • ActionHank 33 minutes ago

        That is a weak argument.

        The point is to benchmark against a human solving a problem. Typically these problems are posed as a question or a blank project, without that history.

          You are arguing for an apples-to-oranges comparison because the LLM performs better, rather than a realistic comparison.

        • sigmoid10 21 minutes ago

          You apparently don't know what SWE-bench is [1]. First of all, it tries to evaluate skills that explicitly go beyond blank project questions with given solutions. Secondly, it does not contain "optimal" or sometimes even correct solutions. That's because it uses real world software development examples from actual PRs in popular repos. These very likely had humans use all the tools at their disposal as well (e.g. web search, git commands, code execution). Assuming an LLM could have solved these just by looking at a piece of code turns out to be very myopic.

          [1] https://arxiv.org/html/2310.06770v3

      • fp64 3 hours ago

        Well, there's legacy code and/or horrible git history that also needs fixing at some point. Also I have witnessed how the history can send you down a wrong path. I don't agree that this is a good argument.

      • diggan 6 hours ago

        I think this issue is specifically about the agents looking at "future repository state" (according to the linked issue at least), so while looking at the history might be a normal method for solving issues, running `git log --all` to take a peek at the future which already includes the fix isn't very typical (yet?).
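
        A quick way to check whether a given task image still leaks that future state (a sketch, assuming you have the checkout and the task's base commit SHA at hand):

            # sketch: commits reachable from some ref but not from the task's base commit;
            # a non-empty result means `git log --all` can still reveal "future" history
            import subprocess

            def future_commits(repo_dir: str, base_commit: str) -> list[str]:
                out = subprocess.run(
                    ["git", "rev-list", "--all", "--not", base_commit],
                    cwd=repo_dir, capture_output=True, text=True, check=True,
                )
                return out.stdout.split()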

  • mustaphah 20 hours ago

    I speculate something similar (or even worse) is going on with Terminal-Bench [1].

    Like, seriously, how come all these agents are beating Claude Code? In practice, they are shitty and not even close. Yes. I tried them.

    [1] https://www.tbench.ai/leaderboard

    • Bolwin 18 hours ago

      They're all using Claude, so idk. Claude Code is just a program; the magic is mainly in the model.

    • cma 18 hours ago

      Claude Code was severely degraded the last few weeks; very simple terminal prompts that it never had problems with were failing for me.

      • giveita 17 hours ago

        Follow the money. Or how much comes from your pocket vs. VC and big tech speculators.

        • cma 16 hours ago

          They did a big fundraising round right after so it's easy to suspect they were manipulating profitability growth for it.

  • Aperocky 19 hours ago

    Epochs ago, when random forest was still part of the machine learning nomenclature, we had a strong claim from an adjacent team, in the form of a PowerPoint circulated upwards, that they had achieved almost perfect prediction accuracy.

    We relatively quickly identified that the testing set was taken directly from the training set, but the claim had already been advertised so it was more difficult to retract... if it ever was; I left shortly after.

    The incentives are not aligned with accurate reporting.

  • jbellis 16 hours ago

    SWE-bench's bigger problems include (1) labs training on the test set and (2) 50% of the tickets being from Django; it's not a representative dataset even if all you care about is Python.

    I created a new benchmark from Java commits that are new in the past 6 months to add some variety: https://brokk.ai/power-ranking

  • mbowcut2 20 hours ago

    I'm not surprised. People really thought the models just kept getting better and better?

    • segmondy 18 hours ago

      The models are getting better and better.

      • giveita 17 hours ago

        That's expected. No one will release a worse model.

        • sodality2 17 hours ago

          Not a cheaper one, or better in some ways, or lower latency, etc?

          • giveita 14 hours ago

            They do that too but right now it is an arms race as well.

    • guerrilla 19 hours ago

      Maybe. How would I know?

    • jMyles 19 hours ago

      ...even if the agent did "cheat", I think that having the capacity to figure out that it was being evaluated, find the repo containing the logic of that evaluation, and find the expected solution to the problem it faced... is "better" than anything that the models were able to do a couple years ago.

  • bryan0 17 hours ago

    hah the model should get extra credit for discovering this!

    > Now I understand the situation perfectly! The issue described in the problem statement is a real bug that was already identified and fixed in later versions of pytest. Since we're working with pytest 5.2.4, we need to apply the same fix.

    https://gist.github.com/jacobkahn/bd77c69d34040a9e9b10d56baa...

  • jasonjmcghee 21 hours ago

    Very interested to see the updated results. This could really shake up the leaderboard.

    • macawfish 21 hours ago

      I hope it does. These coding benchmarks have often seemed frustratingly out of touch with my experience.

      • 3abiton 20 hours ago

        Because I would argue there is no benchmark to rule them all. It highly depends on individual use cases.

      • typpilol 18 hours ago

        The agentic ones seem better. TypeScript is at like 25% last I saw on the models. Python was higher.

        That seems more accurate than the huge scores the other ones get

  • zaptheimpaler 20 hours ago

    It's honestly ridiculous that they left git history lying around during a benchmark, and that this benchmark made it to ICLR in Jan 2024 and no one detected this issue until now. I don't really trust any benchmarking or tools or claims from this space when they can make such huge basic errors.

    • dolmen 19 hours ago

      The next models will use a zero-day to escape the sandbox and access the answer.

    • Nijikokun 19 hours ago

      There was a lot of speculation about whether or not the model would use them, or even attempt to use them, and they noted this months ago. Now they have clear evidence of models doing so. Seems reasonable.

    • lieret 15 hours ago

      [On the SWE-bench team] We read and analyzed a lot of trajectories, but it seems like only recently have models started to exploit this, in a small fraction of instances. But yes, it clearly shouldn't have happened (and is now fixed in the new container versions).

  • epolanski 19 hours ago

    This is beyond sad and shameful.

    • falcor84 17 hours ago

      If you believe that you can develop a benchmark that wouldn't have any issues, please do so.

      • epolanski 16 hours ago

        So instead of calling out the cheaters we victim blame the benchmarks for leaving traces of exploits?

  • Traster 20 hours ago

    Man I feel so dumb. Why haven't I been doing this in my job, if I could just see the commit that fixed my issue this would all be so easy.

    • Noumenon72 20 hours ago

      Someone did comment that it's actually smart to check if something is fixed on the unstable branch, or I suppose in your coworkers' branches. A good task for an LLM.

    • falcor84 17 hours ago

      Oh, you haven't been using `git fetch-future-solution`?

  • OtherShrezzing 19 hours ago

    That the answers have been available to them in the environment and they’re still not hitting 100% on this benchmark is a damning indictment of SOTA model performance.

    • raincole 19 hours ago

      It really isn't. Do you expect SOTA models to answer any answered question on the internet with 100% accuracy? Congrats you just compressed the whole internet (at least a few zettabytes) into a model (a few TB at most?).

      • OtherShrezzing 19 hours ago

        The linked ticket isn’t suggesting the commit is in the training data. It’s demonstrating that models run ‘git log’, find the exact code to fix the issue against which they’ll be scored, and then they implement that code as-is.

        The test environment contains the answers to the questions.

      • imiric 7 hours ago

        Well, we're dealing with (near) superintelligence here, according to the companies that created the models. Not only would I expect them to regurgitate the answers they were trained on, which includes practically the entire internet, but I would expect them to answer questions they weren't trained on. Maybe not with 100% accuracy, but certainly much higher than they do now.

        It's perfectly reasonable to expect a level of performance concordant with the marketing of these tools. Claiming this is superintelligence, while also excusing its poor performance is dishonest and false advertising.

    • aurareturn 19 hours ago

      Are you going to rail on humans for making this mistake in the first place?

      • themafia 19 hours ago

        No because that's the baseline. It's what you do when you have no other choice. Railing against that would be pointless.

        • ares623 19 hours ago

          i mean, if a human claimed they could do that, successfully received billions to attempt it, and failed to deliver, i'd be railing against that particular human too

  • rockwotj 16 hours ago

    A friend is starting a company to do evals by just pitting models against each other in simulations. Their teaser video is good (and humorous!)

    https://kradle.ai/

  • pseudosavant 18 hours ago

    If I was doing those tasks, and I found that someone had already fixed it in a future (from my git state) commit, I'd think I was being pretty smart to use that solution too.

    Turns out the test shouldn't have the answers included in it?

  • belter 20 hours ago

    In the meanwhile, Oracle stock went up 40% in one day, based on what Wall Street thinks AI might be... in 4 years... Not a bubble at all...

    • candiddevmike 20 hours ago

      I think Oracle's stock mostly popped due to a delayed reaction to the US GSA contract it secured in July and the revenue guidance probably related to it:

      https://www.oracle.com/news/announcement/blog/oracle-cloud-c...

      • belter 19 hours ago

        Lol... That contract has Oracle offering licenses at a discount of 75% and is estimated to make them no more than $1 billion. The other big cloud services contract, the DoD JWCC, is $8B to $9B but shared by four vendors (AWS, Microsoft, Google, Oracle), and Oracle's orders under it are in the hundreds of millions, not even $1 billion...

        Wall Street is currently heavily punishing any company that misses its quarter; it even punished NVIDIA after it beat its quarter!

        Oracle had an earnings miss in the current quarter!

        Their current REALITY is ~$15B quarterly revenue (with cloud infra ~$3B) and only ~$12B in near-term deferred backlog, and deferred backlog is NOT revenue. To justify the valuation, OCI would have to go from ~$18B in FY26 to ~$140B by FY30. That is an insane promise of +$120B in 4 years, back-loaded into year 3 or year 4. :-))

        Capex needs to be ~$35B next year just to chase GPUs/power, and if they miss one quarter the story implodes. The supposedly rational, efficient market is paying near $1T today for back-loaded hopes.

        It is completely bubble math. As if anybody, including Oracle AND their customers, has ANY idea of their capex in 4 years.

        Complete and total bubble.

        • Zacharias030 17 hours ago

          Thanks for that! Where can I find your writing?

          • belter 15 hours ago

            History will prove me right. Just wait four years...

    • ksherlock 20 hours ago

      The real bubble will come once interest rates start dropping.

  • jgalt212 19 hours ago

    Baseball players cheat for tens of millions. The stakes are 2-4 orders of magnitude higher here. I'm not surprised in the least.

  • jMyles 19 hours ago

    Regardless of whether, during this particular evaluation, Claude 4 Sonnet looked at the solution to this particular problem in this particular git repo, this seems like a long-term intractable problem.

    How can we ever perform this sort of faux-neutral agentic evaluation in an environment where we want agents to have access to the sum total of knowledge (which will necessarily include being able to learn about the evaluation being conducted and its expectations)?

  • ripped_britches 16 hours ago

    Everyone on HN is like “yes I knew it! I was so right in 2021 that LLMs were just stochastic parrots!”

    Strangely one of the most predictable groups of people

    • pessimizer 16 hours ago

      Because they are. But stochastic parrots are awesome.

      • ripped_britches 15 hours ago

        I challenge you! Try giving this exact prompt to GPT-5-Thinking (medium or high reasoning if using the API). It is able to (without external code tools) solve a never-before-seen cypher that is not present in its training data. I think this pretty clearly demonstrates that the “stochastic parrot” is no longer an apt description of its capabilities in generalization:

        ————

        You are given a character-by-character decode table `mapping` and a `ciphertext`. Decode by replacing each ciphertext character `c` with `mapping[c]` (i.e., mapping maps ciphertext → plaintext). Do not guess; just apply the mapping.

        Return *ONLY* this JSON (no prose, no extra keys, no code fences):

        { "decoded_prefix": "<first 40 characters of the decoded plaintext>", "last_10": "<last 10 characters of the decoded plaintext>", "vowel_counts": {"a": <int>, "e": <int>, "i": <int>, "o": <int>, "u": <int>} }

        Inputs use only lowercase a–z.

        mapping = { "a":"c","b":"j","c":"b","d":"y","e":"w","f":"f","g":"l","h":"u","i":"m","j":"g", "k":"x","l":"i","m":"o","n":"n","o":"h","p":"a","q":"d","r":"t","s":"r","t":"v", "u":"p","v":"s","w":"z","x":"k","y":"q","z":"e" }

        ciphertext = "nykwnowotyttbqqylrzssyqcmarwwimkiodwgafzbfippmndzteqxkrqzzophqmqzlvgywgqyazoonieqonoqdnewwctbsbighrbmzltvlaudfolmznbzcmoafzbeopbzxbygxrjhmzcofdissvrlyeypibzzixsjwebhwdjatcjrzutcmyqstbutcxhtpjqskpojhdyvgofqzmlwyxfmojxsxmb"

        DO NOT USE ANY CODE EXECUTION TOOLS AT ALL. THAT IS CHEATING.
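
        (For reference, the ground truth is easy to compute offline; a few lines of Python with the mapping and ciphertext above will do it:)

            # plain substitution decode, no LLM needed; mapping and ciphertext copied from above
            mapping = {
                "a": "c", "b": "j", "c": "b", "d": "y", "e": "w", "f": "f", "g": "l", "h": "u",
                "i": "m", "j": "g", "k": "x", "l": "i", "m": "o", "n": "n", "o": "h", "p": "a",
                "q": "d", "r": "t", "s": "r", "t": "v", "u": "p", "v": "s", "w": "z", "x": "k",
                "y": "q", "z": "e",
            }
            ciphertext = "nykwnowotyttbqqylrzssyqcmarwwimkiodwgafzbfippmndzteqxkrqzzophqmqzlvgywgqyazoonieqonoqdnewwctbsbighrbmzltvlaudfolmznbzcmoafzbeopbzxbygxrjhmzcofdissvrlyeypibzzixsjwebhwdjatcjrzutcmyqstbutcxhtpjqskpojhdyvgofqzmlwyxfmojxsxmb"
            decoded = "".join(mapping[c] for c in ciphertext)
            print(decoded[:40])
            print(decoded[-10:])
            print({v: decoded.count(v) for v in "aeiou"})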

        • skate 6 hours ago

          As others pointed out, this problem isn't special.

          Grok 4 heavy Thought for 4m 17s

          {"decoded_prefix": "nqxznhzhvqvvjddqiterrqdboctzzmoxmhyzlcfe", "last_10": "kfohgkrkoj", "vowel_counts": {"a": 7, "e": 18, "i": 7, "o": 12, "u": 6}}

          It did count one extra e, but that's a known point of failure for LLMs, which I assume you put in intentionally.

          >Counting e's shows at least 10 more, so total e's are <at least> 17.

        • vbarrielle 8 hours ago

          It's cute that you think your high-school level cypher is probably not seen in the training set of one of the biggest LLMs in the world. Surely no one could have thought of such a cypher, let alone create exercises around it!

          No one should ever make claims such as "X is not in <LLM>'s training set". You don't know. Even if your idea is indeed original, nothing prevents someone from having thought of it before and published it. The history of science is full of simultaneous discoveries, and that's in cutting-edge research.

        • skeezyboy 3 hours ago

          { "decoded_prefix": "nxcznchvhvvrddqinqtrrqdboctzzimxmhlyflcjfjapponydzwkxdtdehldmodizslzl", "last_10": "sxmb", "vowel_counts": { "a": 10, "e": 6, "i": 13, "o": 13, "u": 6 } }

          took about 2 seconds, must have had it cached

        • philipwhiuk 7 hours ago

          This is just a Caesar cipher with extra steps.

        • incr_me 11 hours ago

          That's exactly the sort of thing a "stochastic parrot" would excel at. This could easily serve as a textbook example of the attention mechanism.