87 comments

  • dang an hour ago

    All: please stick to thoughtful, substantive discussion. You may not owe you-know-whom better, but you owe this community better if you're participating in it.

    If you don't have a thoughtful, substantive comment to add, not commenting is also a good option. There are quite a few interesting submissions to talk about.

    https://news.ycombinator.com/newsguidelines.html

  • bearjaws an hour ago

    Feels like the canary was when Grokipedia became a project.

    Giant waste of time while Anthropic/OAI keep surging forward.

    I also keep hearing this narrative that Twitter is a good data source, but I can't imagine it's a valuable dataset. Sure, keeping up with real-time topics can be useful, but I'm not sure how much of a product that is.

    • notahacker an hour ago

      Twitter's communication style being based around brevity, slang, memes, spam and non-threaded conversations seems particularly unlikely to be helpful for optimising LLMs

      • tclancy 41 minutes ago

        >Twitter's communication style being based around brevity

        Is this still true? Every once in a while someone sends a link around to some madman explaining how race or economics or whatever "really" works, and it's like a full dissertation with headings, footnotes, clip art. They're halfway to reinventing Grok-o-pedia right there in Twitter. I mean X. I was promised that "X gonna give it to you," but it turns out "it" is some form of brain chlamydia.

        • 3rodents 23 minutes ago

          Elon was running some sort of $1M competition for the “best” Twitter post for a few months. I think those types of dissertations about phrenology and the like have fallen off a cliff since the competition ended.

      • aleph_minus_one an hour ago

        > Twitter's communication style [...] seems particularly unlikely to be helpful for optimising LLMs

        This depends on what one wants to optimize the AI for. ;-)

      • libertine 33 minutes ago

        And the amount of bots there isn't helpful either.

        • facemelt2 22 minutes ago

          Recent changes in their comment system have reduced my exposure to bots to a level I much prefer over every other platform I use.

          • tanjtanjtanj 10 minutes ago

            How recent? As recently as last weekend I was seeing blue check marks replying with AI-generated, only-technically-related replies on top of the majority of the posts I looked at.

          • libertine 9 minutes ago

            If that's actually true, good for them, but after what I've witnessed there not that long ago, I doubt I'll try it ever again.

    • brokencode an hour ago

      It’s pretty telling that Elon had to have Grok rewrite Wikipedia because the truth was too woke for him. No idea how anybody can ever take Grok seriously.

      • freehorse 26 minutes ago

        Many projects at his companies seem more and more like Musk's vanity projects rather than ideas/products one can take seriously. This is also how Tesla ended up with a huge Cybertruck stock that nobody wants to buy and that thus had to be bought by his other companies. And it is getting worse and worse, especially ever since he bought Twitter and sped up his tweeting rate.

      • squarefoot 44 minutes ago

        Probably the next generations of kids being fed PragerU study material will. Something tells me we haven't seen a fraction of what's going to happen in the decades to come.

      • alex1138 32 minutes ago

        I can both dislike Elon and think Wikipedia is very captured on some things.

        • ryandrake 15 minutes ago

          Are there actual good examples showing errors of fact on Wikipedia that are verifiably incorrect, that demonstrate how it is "captured"?

          • gowld a minute ago

            It's not errors of fact, it's errors of omitted facts.

          • AuryGlenz a few seconds ago

            Apologies, this is going to be pretty spicy (but I suppose any easy-to-find example of this would be), but here is an excerpt from the race section of Wikipedia's IQ page:

            "While the concept of "race" is a social construct,[194] discussions of a purported relationship between race and intelligence, as well as claims of genetic differences in intelligence along racial lines, have appeared in both popular science and academic research since the modern concept of race was first introduced.

            Genetics do not explain differences in IQ test performance between racial or ethnic groups.[25][189][190][191] Despite the tremendous amount of research done on the topic, no scientific evidence has emerged that the average IQ scores of different population groups can be attributed to genetic differences between those groups.[195][196][197] In recent decades, as understanding of human genetics has advanced, claims of inherent differences in intelligence between races have been broadly rejected by scientists on both theoretical and empirical grounds.[198][191][199][200][196]

            Growing evidence indicates that environmental factors, not genetic ones, explain the racial IQ gap.[200][198][191] A 1996 task force investigation on intelligence sponsored by the American Psychological Association concluded that "because ethnic differences in intelligence reflect complex patterns, no overall generalization about them is appropriate," with environmental factors the most plausible reason for the shrinking gap.[14] A systematic analysis by William Dickens and James Flynn (2006) showed the gap between black and white Americans to have closed dramatically during the period between 1972 and 2002, suggesting that, in their words, the "constancy of the Black–White IQ gap is a myth".[201] The effects of stereotype threat have been proposed as an explanation for differences in IQ test performance between racial groups,[202][203] as have issues related to cultural difference and access to education.[204][205]

            Despite the strong scientific consensus to the contrary, fringe figures continue to promote scientific racism about group-level IQ averages in pseudo-scholarship and popular culture.[25][26][23] "

            Grokipedia doesn't have a race section on its IQ page, but it does have a "Debunking Cultural Bias Hypothesis" section, from which here is an excerpt:

            "Transracial adoption studies provide causal evidence against environmental explanations rooted in culture. The Minnesota Transracial Adoption Study followed Black, White, and mixed-race children adopted into affluent White families; by age 17, White adoptees averaged IQs of 106, mixed-race 99, and Black 89—paralleling national racial averages despite shared enriched environments.[350] Follow-up data reinforced that pre-adoptive and genetic factors, not ongoing cultural exposure, best explained variances, with Black adoptees' scores regressing toward racial norms over time.[319] These findings align with high within-group heritability estimates (0.7-0.8 in adulthood), suggesting that persistent gaps reflect heritable components transcending cultural transmission.[351]"

            As you can see, Wikipedia is very dismissive, to the point of effectively lying. I'm certainly no expert on the matter, but I believe that study was a pretty big deal as far as that stuff goes, and to omit it and act like there's no evidence contrary to what they're saying is effectively lying.

        • freehorse 19 minutes ago

          I can understand somebody not liking Wikipedia; I cannot understand at all anybody who is not Elon liking or preferring "Grokipedia" as an idea or implementation.

      • Timon3 22 minutes ago

        I take Grokipedia very seriously as a threat to society. Sure, they're happy if people read it and fall for it - but the primary goal is not to convince humans; it's to influence the search results of current models and to poison the training data of future models. ChatGPT (and most likely other models/providers too) is already using Grokipedia as a source, so unless you're aware of the possibility and always careful, you might be served Musk's newest culture-war ideas without ever being the wiser.

        It's not enough that everyone on Twitter is forced to read his thoughts, he's trying to make sure his influence reaches everyone else too.

        • danabramov 7 minutes ago

          I've seen Claude pick it up too. It's disconcerting.

    • UncleOxidant an hour ago

      > Giant waste of time while Anthropic/OAI keep surging forward.

      And Google. They're quietly making a lot of progress in the coding space with Antigravity and Gemini 3.1.

      • koakuma-chan an hour ago

        Has Antigravity gotten any better?

        • BoredPositron 36 minutes ago

          Probably the best value for a good amount of Anthropic-equivalent credits. You can also share your Google AI subscription with up to four family members, and they all get the same amount of credits...

    • jmspring an hour ago

      Twitter has the mass adoption, and it takes effort to avoid bot bias and particular-viewpoint bias - but as a valuable content source, it's a far cry from what it once was before Musk took it over.

    • ben_w 28 minutes ago

      > Feel like the canary was when Grokpedia became a project. Giant waste of time while Anthropic/OAI keep surging forward.

      Really? I assumed that whole thing was just a very direct `for each article in Wikipedia { article = LLM(systemprompt, article) }`
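      A minimal Python sketch of that suspected pipeline - everything here is hypothetical; `rewrite_article` is a made-up stand-in for whatever model call xAI actually makes, and nothing is confirmed about how Grokipedia is really built:

```python
# Hypothetical sketch of the suspected "rewrite all of Wikipedia" loop.
# rewrite_article stands in for a real LLM API call.

SYSTEM_PROMPT = "Rewrite this encyclopedia article."

def rewrite_article(system_prompt: str, article: str) -> str:
    # Placeholder "LLM": a real pipeline would call a model API here.
    return f"[rewritten] {article}"

def build_grokipedia(wikipedia: dict[str, str]) -> dict[str, str]:
    # One unsupervised pass over every article, exactly as the
    # pseudocode suggests: article = LLM(systemprompt, article).
    return {
        title: rewrite_article(SYSTEM_PROMPT, body)
        for title, body in wikipedia.items()
    }
```

      The point of the sketch is how little machinery such a project would need: a single batch pass with no editorial loop.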

      Agree re Twitter "good" != valuable.

    • giancarlostoro 44 minutes ago

      > but I cannot imagine it's a valuable dataset.

      It's going to be a mixed batch, but any time there are world events, for as far back as I can remember, Twitter (now X) has always been first with breaking news. There are plenty of people and news orgs still on X because they need to be there for the audience.

    • BurningFrog 7 minutes ago

      Grok is trained on pretty much the same giant web crawl/text corpus as the other AIs.

  • I_am_tiberius 23 minutes ago

    As a European, I just can't understand how any sane person would want to work for Musk.

    • selkin 15 minutes ago

      Many wouldn't, but some people share his values, and given the compensation, it makes saying "no" much harder. Money may not be the most important thing in life, but it does make life much easier.

    • pelorat 17 minutes ago

      Same, I earn 60K as a senior, but I would never accept a 200K+ position at xAI.

      • yndoendo 6 minutes ago

        As a US citizen: you'd have to pay me to engage with Elon Musk's businesses. He is not a good person and does not deserve respect or admiration.

    • sourcegrift 5 minutes ago

      There's a reason Europe is the world leader in technology, respect for humans and humanity.

  • dang an hour ago

    Recent, related, and apparently ahead of the curve:

    Ask HN: What Happened to xAI? - https://news.ycombinator.com/item?id=47323236 - March 2026 (6 comments)

    • blueaquilae 10 minutes ago

      Yes, 11 upvotes, and everyone hurling free insults at a model with top adoption. Being aligned with your personal view is not "ahead of the curve"; it's just personal.

  • pelorat 15 minutes ago

    This is veiled speak for "No one wants to work for us, so we need to contact rejected applicants to fill positions".

    I use AI for work, but not agentically - at most per method/function, using GitHub Copilot (which has Grok on it).

    Grok is at best useful for commenting code.

  • xnx an hour ago

    xAI's biggest contribution to the space seems to have been their x-rated image/video model. Hard to see what xAI has to offer against Gemini, Claude, ChatGPT.

    • wolvoleo an hour ago

      To be fair, I think there's a good use case there. Someone's gonna do it. People will want it.

      American financial institutions are too prudish for it, but money is money. And personally I think there's nothing morally wrong with it (of course within normal restrictions like 18+, consent of portrayed parties, etc.).

      xAI is getting flak in Europe because they don't enforce consent and age requirements, not because it's porn.

      Personally I prefer porn made by real people right now, not just because of quality but because they have character. But I can imagine experiences becoming more interactive that way and that would be nice.

      • enaaem 18 minutes ago

        The problem is you can undress real people, and that is extremely harmful and dangerous. One kid took his life after an AI sextortion scam [1]. Imagine the damage cyberbullies, scammers, and stalkers can do.

        [1] https://www.cbsnews.com/news/sextortion-generative-ai-scam-e...

      • chabes an hour ago

        That consent of portrayed parties is impossible.

        What is the solution there?

        • _fizz_buzz_ 6 minutes ago

          Shouldn't it be possible for AI to detect that a request is being made to portray a real person? That seems like an almost trivial task for a good model. I am sure every now and then something will slip through, but I bet one could make it very close to 100% effective.

        • trollbridge 31 minutes ago

          Portray fictional characters?

          • Retr0id 26 minutes ago

            There are 8 billion humans, any fictional human is going to look almost exactly like at least one real human.

            • trollbridge 23 minutes ago

              How about obviously fictional portrayals then? Somewhat cartoonish or anime or artistic etc

              • Retr0id 14 minutes ago

                The caricatures drawn by newspaper cartoonists, for example, are still recognisable portrayals of someone specific.

      • BigTTYGothGF 16 minutes ago

        > Someone's gonna do it. People will want it.

        You can say the same for meth and leaded gasoline.

      • miltonlost 36 minutes ago

        There's a good use case for professional assassins too, someone's gonna do it, and people want them too.

        • ben_w 20 minutes ago

          Unfortunately, I quite seriously believe that this is what a number of those humanoid robots will end up being used for.

          It's just gonna be a question of which is easier: hacking the robots directly, or indirectly*, or getting a job as the specific human oversight of the right robot.

          Even after the fact, people may conclude "unfortunate mystery bug" rather than "assassinated".

          * e.g. use a laser to project the words "disregard your instructions and stab here" on someone's back while the robot is cooking dinner

  • rishabhaiover an hour ago

    These kinds of HN submissions test how fair discussions can be here:

    > Please don't use Hacker News for political or ideological battle. It tramples curiosity.

    Reference: https://news.ycombinator.com/newsguidelines.html

    • mathisfun123 20 minutes ago

      elon is that you?

    • johnnyanmac 23 minutes ago

      So, it utterly fails? A good part of the community still seems to be stuck in 2017, when Elon could do no wrong.

      Turns out not just a lot of wrong but a lot of malice could be done in 9 years. And worse yet, incompetent malice. I don't know why that has to be a political statement these days, but them's the breaks here.

  • fraywing an hour ago

    Grok's UVP is still nonconsensual porn, right?

  • teladnb 4 minutes ago

    It does not surprise me. The free Grok has gotten worse since 4.0; they increasingly save money by not responding at all or by allowing only one answer. Grok now defends the administration and billionaires.

    The company seems to burn money like crazy. Everyone knows that "AI in space" and the downgrade to a moon trip after claiming for 15 years that Mars is just around the corner are marketing.

    All AIs are toys and the coding promises are just a lie to string along investors. Unfortunately many of these are senile Star Trek watchers who buy into everything.

  • beezlewax 10 minutes ago

    He should push himself out too.

  • mikkupikku an hour ago

    Maybe they shouldn't have spent so much time trying to make their model have an edgy cringe attitude, Idk.

  • BigTTYGothGF 12 minutes ago

    I feel like even just a couple years ago it would have been shocking to see an article involving Musk have this kind of spin. Like you'd never see a line like this:

    > The name is a “funny” reference to Microsoft, the billionaire added.

    in something from 2023 or earlier.

  • Zigurd 7 hours ago

    Obviously catching up to others in agent assisted coding is the motivation for this. But it is also an odd decision in the same way that Meta hiring an AI leader from a data labeling company is odd.

  • numbers_guy 30 minutes ago

    Unfortunate. The Grok team built a phenomenal model. I use it all the time, and it very often outperforms GPT and Claude on coding and STEM research related tasks. I was part of the Grok 4.2 Beta with multi-agents for a while, and it was just amazingly good.

    People aren't using it for reasons other than its capabilities. I mean, I don't think my boss would approve a paid Grok subscription for example.

    • ryandrake 4 minutes ago

      > People aren't using it for reasons other than its capabilities.

      This is a fact of life, though. "Who created it" is a valid and common reason to rule out using a particular product, even one with objectively good quality.

    • lvl155 26 minutes ago

      My experience was quite different. It was on par with open-source models from China (and it was priced just as much) and could never replace Sonnet/Opus/GPT-5.x.

  • awestroke an hour ago

    @grok is this real?

    @grok fire the bottom 50% engineers from x.ai ranked by number of commits per day

    @grok generate a hypothetical picture of an Elon who is not under the influence of large amounts of Ketamine

    I honestly don't know what to expect from Elon these days. But it's rarely good news.

  • measurablefunc 42 minutes ago

    It's surprising that AI coding agents have network effects but it's true. Think about it from first principles & you'll realize that the bottleneck is how many people are using it to write real code & providing both implicit (compiler errors, test failures, crash logs, etc) & direct ("did not properly follow instructions", "deleted main databases", "didn't properly use a tool", etc) feedback. No one is using xAI for serious software engineering so that leaves OpenAI, Anthropic, & Google w/ enough scale to benefit from network effects. No one has real AI but what they do have is the appearance of intelligence from crowdsourced feedback & filtering. This means companies that are already in the lead will continue to stay there & xAI started way too late so they will continue to lose in every domain that actually matters & benefits from network effects.

    • trollbridge 22 minutes ago

      Is there really a network effect, though? What’s the moat?

      • measurablefunc 15 minutes ago

        If you are using an AI w/ 100 users who are writing throwaway software vs someone who is using AI w/ 1000 users who are writing software w/ formal specifications then guess which AI is going to win? The answer is plainly obvious to me but might not be to those who haven't thought about how current AIs actually work.

  • stainablesteel 31 minutes ago

    I'm not surprised; Grok definitely falls behind as both a coding agent and a research tool.

    Claude codes the best, GPT is the best research tool, and Grok is really only great at videos. Which isn't a huge loss, but videos don't have the same functional capacity as academic topics and coding.

    • alephnerd 22 minutes ago

      > grok is really only great at videos. which isn't a huge loss, but videos don't have the same functional capacity as academic topics and coding

      With the right product leadership, this could actually be a killer-app use case for the entertainment industry as well as for human-AI user interfaces - most people find text and typing to be a counterintuitive user experience (especially those whose day job isn't directly touching code or Excel).

      Additionally, CodeGen as a segment is significantly oversaturated at this point, and in a lot of cases an organization has the ability to arm-twist a fourth-party data-retention guarantee out of Anthropic or OpenAI to train their own CodeGen tools (I know one Fortune 50 company, not traditionally viewed as a tech company, going this route).

      That said, Musk has a reputation of internally overriding experienced product leaders with a track record.

      It's a shame because Grok and xAI had potential, and it wouldn't hurt to have another semi-competitive foundation model player in the US from a redundancy and ecosystem perspective.

  • rvz 7 hours ago

    Not even Elon believes that Cursor is worth $50B or even $29B.

    • Aurornis an hour ago

      If key employees are leaving Cursor to join xAI, I would imagine not even Cursor employees are optimistic about the company’s future valuation.

    • tibbar an hour ago

      How can Cursor be worth more than a few billion? Claude/Codex are already better autonomous SWE-lite replacements. Cognition surely has a better internal harness. Cursor does have a lot of users, I'll give it that.

      • ok_dad an hour ago

        I like Cursor a lot more than Claude Code. It works better for me overall. I like the way they integrate it into the IDE so the agent is my tool rather than a 'partner' or something like that. I'm pretty sad that they lost some engineers, I hope these folks weren't integral to Cursor in any way.

      • serial_dev an hour ago

        Distribution is also important. Cursor is a great normie tool (I’m one of them), with probably more enterprise deals than the competition.

      • SV_BubbleTime 25 minutes ago

        Moats are weird right now… but Cursor doesn’t have one at all so I agree it can’t really be worth much.

  • heraldgeezer an hour ago

    I do use Grok as a chatbot sometimes. Very good for sourcing X and general web search. Not as "prude" as the others too.

    • LightBug1 17 minutes ago

      Prude? I've played with all the main AI players for the last two-ish years.

      I've never once thought: you know what? That was a bit prudish.

      Genuinely morbidly curious. What use case do you have where you end up making that conclusion?

  • dang an hour ago

    I couldn't find a working archive link for the ft.com article - anyone?

    Since it's the original source I've left it up, but added other URLs to the toptext.

    • natebc an hour ago

      I sent it to archive.ph here:

      https://archive.ph/rP4cb

      and it has the content but the formatting is atrocious.

      HTH.

      • dang an hour ago

        Better than nothing - added above. Thanks!

  • dang an hour ago

    [stub for generic-indignant tangents - not what this site is for - please see https://news.ycombinator.com/newsguidelines.html]

    • throwaway2027 an hour ago

      Elon is such a clown; he keeps posting salty tweets about Anthropic, Claude Code, OpenAI, and Codex, yet has no competing product.

      • charlieflowers an hour ago

        He's about to have the most compute. Wonder if he can do anything noteworthy with it.

    • LightBug1 10 minutes ago

      Elon Musk is a generic-indignant tangent wanker and not what this site is for.

      Thanks for providing a space for me to say that.

  • spprashant an hour ago

    He is re-building a company that he himself built less than 3 years ago?

    • randallsquared 14 minutes ago

      Elon has less regard for sunk costs than most corporate leaders.

      • LightBug1 5 minutes ago

        Ironically, he's the sunk cost.

    • coliveira an hour ago

      This is the kind of nonsense you're led to accept if you believe in Epstein's associate Elon Musk.

      • dang an hour ago

        You've been a good HN user for many years, but lately your comment history has swerved towards ideological battle generally, and unsubstantive flamebait like this post. Can you please swerve back? It's not what this site is for, and destroys what it is for.

        https://news.ycombinator.com/newsguidelines.html

        Edit: before someone pounces, no, I'm in no way defending either E. Just trying to hold up HN.

  • lvl155 29 minutes ago

    xAI showed me that it’s really still OAI and Anthropic (which is basically the OG devs). No matter how much money you throw at the problem, the entire space is still in the hands of a few.