AI assisted search-based research actually works now

(simonwillison.net)

218 points | by simonw 21 hours ago

101 comments

  • otistravel 3 hours ago

    The most impressive demos of these tools always involve technical tasks where the user already knows enough to verify accuracy. But for the average person asking about health issues, legal questions, or historical facts? It's basically fancy snake oil - confident-sounding BS that people can't verify. The real breakthrough would be systems that are actually trustworthy without human verification, not slightly better BS generators. True AI research breakthroughs would admit uncertainty and provide citations for everything, not fake certainty like these tools do.

    • spongebobstoes an hour ago

      this remains true for pretty much all advice or information we receive. doctors, lawyers, accountants, teachers. there have been countless times that all of these professionals have given me bad advice or information

      sure, at least I have someone to blame in that case. but in my experience, the AI is at least as reliable as a person who I don't personally know

  • CSMastermind 12 hours ago

    The various deep research products don't work well for me. For example I asked these tools yesterday, "How many unique NFL players were on the roster for at least one regular season game during the 2024 season? I'd like the specific number not a general estimate."

    I as a human know how to find this information. The game day rosters for many NFL teams are available on many sites. It would be tedious but possible for me to find this number. It might take an hour of my time.

    But despite this being a relatively easy research task all of the deep research tools I tried (OpenAI, Google, and Perplexity) completely failed and just gave me a general estimate.

    Based on this article I tried that search just using o3 without deep research and it still failed miserably.

    • simonw 12 hours ago

      That is an excellent prompt to tuck away in your back pocket and try again on future iterations of this technology. It's going to be an interesting milestone when (or if) any of these systems get good enough at comprehensive research to provide a correct answer.

      • minraws 5 hours ago

        If you keep the prompt the same, at some point the data will appear in the training set and we might have an answer.

        So even though it might be a good check today, it might not remain a good benchmark.

        I think we need a way to keep updating prompts, without increasing their complexity, to properly verify model improvements. ARC Deep Research, anyone?

        • red_trumpet an hour ago

          Well, to test research capabilities, one could just adapt the year (2024 -> 2025) in the prompt.

        • ljsprague 2 hours ago

          Wouldn't somebody need to answer the question below? Or do you mean the discussion of its weakness might somehow make it stronger the next time it's trained?

    • wontonaroo 4 hours ago

      I used Google AI Studio instead of Google Gemini App because it provides references to the search results.

      Google AI Studio gave me 2227 as a possible exact answer and linked to these comments, because there is a comment further down which claims that is the exact number. The comment was 2 hours old when I ran the prompt.

      It also provided a code example of how to find it using the python nfl data library mentioned in one of the comments here.

      • patapong 18 minutes ago

        So the time for data leakage, from a question and its answer being posted on the internet to LLMs having access to that answer, is less than 2h... Does not bode well for the benchmarks of the future!

    • raybb 5 hours ago

      Similarly, I asked it a rather simple question: give me a list of AC repair places near me, with their phone numbers. Weirdly, Gemini repeated a bunch of them 3 or 4 times, gave some completely wrong phone numbers, and found many places hours away but labeled them as in the neighboring city.

    • neom 11 hours ago

      Is it accurate that there are 544 rosters? If so, even at 2 minutes a roster, isn't that days of work, even if you coded something? How would you go about completing this task in 1 hour as a human? (Also, ChatGPT 4.1 gave me 2,503 and said it used the NFL 2024 fact book.)

      • dghlsakjg 9 hours ago

        If the rosters are available from the NFL in some easily parsed or scrapable format, as sports stats typically are, this is just a matter of finding every unique name. I imagine it would take less than an hour or two for a very beginner coder, and maybe a second or two for the code to actually run.

        • krainboltgreene 9 hours ago

          FYI for readers: All the major leagues have a stats API, most are public, some are public and "undocumented" with tons of documentation by the community. It's quite a feat!

      • CSMastermind 6 hours ago

        544 rosters but half as many games (because the teams play each other).

        Technically I can probably do it in about 10 minutes because I've worked with these kind of stats before and know about packages that will get you this basically instantly (https://pypi.org/project/nfl-data-py/).

        It's exactly 4 lines of code to find the correct answer, which is 2,227.
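
        Roughly, those lines might look like the sketch below - treat it as a sketch rather than gospel, since the exact nfl-data-py function and column names here are my best recollection and may differ between versions:

            import nfl_data_py as nfl  # pip install nfl-data-py

            # One row per player per team per week of the 2024 season
            rosters = nfl.import_weekly_rosters([2024])

            # Keep regular-season weeks only ("game_type" is an assumed column name)
            regular = rosters[rosters["game_type"] == "REG"]

            # Count distinct players across all game-day rosters
            print(regular["player_id"].nunique())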

        Assuming I didn't know about that package, though, I'd open up a site like Pro Football Reference, middle-click on each game to open the page in a new tab, click through the tabs, copy-paste the rosters into Sublime Text, do some regex to get the names one per line, drop the one-per-line list into sortmylist or a similar utility, dedupe it, and then paste it back into Sublime Text to get the line count.

        That would probably take me about an hour.

    • kenjackson 5 hours ago

      o3 deep research gave me an answer after I requested an exact answer again (it gave me an estimate first): 2147.

    • danielmarkbruce 12 hours ago

      This is just a bad match to the capabilities. What you are actually looking for is analysis, similar in nature to what a data scientist may do.

      The deep research capabilities are much better suited to more qualitative research / aggregation.

      • southernplaces7 2 hours ago

        Your logic is.... strange...

        Because it failed miserably at a very simple task of looking through some scattered charts, the human asking should blame themselves for this basic failure and trust it to do better with much harder and more specialized tasks?

      • johnnyanmac 5 hours ago

        If AI can't look up and read a chart, why would I trust it with any real aggregation?

        • netghost 5 hours ago

          Because AI is weird and does some things really well, and some things poorly. The terrible/exciting/weird part is figuring out which is which.

      • pton_xd 12 hours ago

        > The deep research capabilities are much better suited to more qualitative research / aggregation.

        Unfortunately sentiment analysis like "Tell me how you feel about how many players the NFL has" is just way less useful than: "Tell me how many players the NFL has."

      • lucyjojo 11 hours ago

        First person that makes a good exact aggregation AI will make so much money...

        Precise aggregation is what so many juniors do in so many fields of work it's not even funny...

      • oytis 4 hours ago

        So it's not doing well on things that we can verify and measure, but it's supposedly doing much better on things we can't measure. Except we can't measure them, so we have no idea how well it is actually doing. The most impressive feature of LLMs remains their ability to impress.

    • paulsutter 10 hours ago

      I bet these models could create a python program that does this

      • Retric 10 hours ago

        Maybe eventually, but I bet it’s not going to work with less than 30 minutes of effort on your part.

        If “it might take an hour of my time” to get the correct answer, then there’s a lower bound for trying a shortcut that might not work.

  • simonw 20 hours ago

    I think it's important to keep tabs on things that LLM systems fail at (or don't do well enough on) and try to notice when their performance rises above that bar.

    Gemini 2.5 Pro and o3/o4-mini seem to have crossed a threshold for a bunch of things (at least for me) in the last few weeks.

    Tasteful, effective use of the search tool for o3/o4-mini is one of those. Being able to "reason" effectively over long context inputs (particularly useful for understanding and debugging larger volumes of code) is another.

    • skydhash 19 hours ago

      One issue I can see with this workflow is tunnel vision: making ill-informed decisions because of a lack of surrounding information. I often skim books because even if I don't retain the content, I build a mental map that helps me find further information when I need it. I wouldn't try to construct a complete answer to a question with just this amount of information, but I would use that map to quickly locate the source and gather more information to synthesize an answer.

      One could use the above workflow in the same way and argue that natural language search is more intuitive than keyword-based search. But I don't think that brings any meaningful productivity improvement.

      > Being able to "reason" effectively over long context inputs (particularly useful for understanding and debugging larger volumes of code) is another.

      Any time I see this "wish" pop up, my suggestion is to try using a disassembler to reverse engineer some binary, to really understand the problem of coming up with a theory of a program (based on Naur's definition). Individual statements are always clear (programming languages are formal and have no ambiguity). The issue is grouping them, unambiguously defining the semantics of these groups, and finding the links between them, recursively.

      Once that's done, what you'll have is a domain. And you could have skipped the whole exercise by just learning the domain from a domain expert. So the only reason to do it is that the code doesn't really implement the domain (bugs), or the domain is hidden purposefully. The most productive workflow is therefore to learn the domain first, then either look for discrepancies (the first case) or focus on the missing part (the second case). In the first case, the easiest approach is writing tests; the more complete one is formal verification of the software.

  • csallen 42 minutes ago

    It's actually quite doable to build your own deep research agent. You just need a single prompt, a solid code loop to run it agentically, and some tools for it to call. I've been building a domain-specific deep research agent over the past few days for internal use, and I'm pretty impressed with how much better it is than any of the official deep search agents for my use case.
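
    As a sketch of that shape (hedged: the web_search tool here is a hypothetical stand-in, the prompt is generic, and the model name is arbitrary), the whole loop fits in a page of Python against the OpenAI SDK:

        import json
        from openai import OpenAI

        client = OpenAI()

        def web_search(query: str) -> str:
            """Hypothetical tool: call a search API and return result snippets."""
            raise NotImplementedError

        TOOLS = [{
            "type": "function",
            "function": {
                "name": "web_search",
                "description": "Search the web and return result snippets.",
                "parameters": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            },
        }]

        def deep_research(question: str, max_steps: int = 20) -> str:
            # One system prompt plus a loop that keeps going until the model
            # stops asking for tools or the step budget runs out.
            messages = [
                {"role": "system", "content": "Research the question: search, read, refine, then answer with citations."},
                {"role": "user", "content": question},
            ]
            for _ in range(max_steps):
                resp = client.chat.completions.create(
                    model="gpt-4o", messages=messages, tools=TOOLS)
                msg = resp.choices[0].message
                if not msg.tool_calls:
                    return msg.content  # the model decided it is done
                messages.append(msg)
                for call in msg.tool_calls:
                    args = json.loads(call.function.arguments)
                    messages.append({"role": "tool",
                                     "tool_call_id": call.id,
                                     "content": web_search(**args)})
            return "step budget exhausted"

    The domain-specific part is mostly the system prompt and the tools you hand it.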

  • jsemrau 20 hours ago

    My main observations here are:

    1. Technically it might be possible to search the Internet, but it might not surface correct and/or useful information.

    2. High-value information that would make a research report valuable is rarely public or free. This holds especially true in capital-intensive or regulated industries.

    • simonw 20 hours ago

      I fully expect one of the AI-related business models going forward to be charging subscriptions for LLM search tool access to those kinds of archives.

      ChatGPT plus an extra $30/month for search access to a specific archive would make sense to me.

      • jsemrau 6 hours ago

        Then I'd rather see domain-specific, agent-first data. I.e., not a simple API call but token -> BM25 -> token.
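
        Concretely, something like this minimal sketch with the rank-bm25 package (the corpus here is made up): the agent's query tokens go in, BM25 scores the corpus lexically, and the best-matching document's tokens come back.

            from rank_bm25 import BM25Okapi  # pip install rank-bm25

            docs = ["Q3 capital requirements for regulated lenders",
                    "clinical trial endpoints and enrollment data",
                    "NFL weekly roster transactions, 2024 season"]

            bm25 = BM25Okapi([d.split() for d in docs])  # index naive whitespace tokens
            query = "2024 roster data".split()
            print(bm25.get_top_n(query, docs, n=1))      # best-matching document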

      • sshine 17 hours ago

        Kagi is $10/mo. for search and +$15/mo. for premium LLMs with agentic access to search.

        • AlotOfReading 14 hours ago

          What they're talking about is access to professional archives like EBSCOnet or Bloomberg, which usually don't sell to individuals in the first place and start at tens of thousands of dollars per seat for institutional access.

        • ac29 8 hours ago

          The $10 plan includes the LLM assistant now as well (with a more limited selection of models than the $25 plan).

    • hadlock 14 hours ago

      o3/o4 seem to know how to search sites like PyPI, crates.io, pkg.go.dev, etc., and apply changes on the first try. My application (running on an older version of code) was hit by a breaking change to how the event controller functioned in the newer version; o3 looked at the documentation and rewrote it to use the new event controller. It used to be that you were trapped with the LLM being 3-8 months behind on package versions.

      • simonw 14 hours ago

        Huh, now I'm thinking that maybe a target for release notes should be to provide enough details that a good LLM can be used to apply fixes for any breaking changes.

        • rd 13 hours ago

          MCP, maybe? A release-notes MCP server (perhaps plugged into ReadTheDocs or PyPI) that understands the upgrade instructions for every package.
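
          A sketch of what that server's surface could look like with the official MCP Python SDK (the tool body is hypothetical; a real one would fetch and parse changelogs):

              from mcp.server.fastmcp import FastMCP  # pip install mcp

              mcp = FastMCP("release-notes")

              @mcp.tool()
              def upgrade_notes(package: str, from_version: str, to_version: str) -> str:
                  """Summarize breaking changes between two versions of a package.

                  Hypothetical body: fetch the changelog from PyPI or ReadTheDocs
                  and extract the migration instructions.
                  """
                  raise NotImplementedError

              if __name__ == "__main__":
                  mcp.run()  # serves the tool over stdio to any MCP client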

    • TrackerFF 14 hours ago

      I'm not a researcher, but don't most researchers these days also upload their work to arXiv?

      Sure, it's not a journal - but in some fields (machine learning, math) it seems like everyone also uploads their stuff there. So if the models can crawl sites like arXiv, at least there's some decent stuff to be found.

      • jsemrau 6 hours ago

        Proper research, especially work contributed to conferences, is hard to come by and is usually managed by the conference organizers. arXiv has some, but it's limited.

        It would be great if, for a DeepSearch tool for ML, I could just use arXiv as a source and have the agent search it. But so far I have not found an arXiv tool that does this well.

      • levocardia 9 hours ago

        Not outside of ML, physics, and math. Preprints are extremely rare in many (dare I say most) scientific fields, and of course many times you are interested not in the cutting-edge work but in the foundational work in a field from the 60s, 70s, or 80s, all of which is locked behind a paywall. Or at least it's supposed to be, and corporate LLMs are not "allowed" to go poking around on sketchy Russian websites for non-paywalled versions.

  • sshine 20 hours ago

    The article doesn’t mention Kagi: The Assistant, a search-powered LLM frontend that came out of closed beta around the beginning of the year and has been included in all paid plans since yesterday.

    It really is a game changer when the search engine is wired into the model.

    I find that an AI performing multiple searches on variations of keywords, and aggregating the top results across those keywords, is more thorough than most people, myself included, would be.

    I once had luck asking it what its search queries were. It usually provides the references.
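
    The aggregation part is simple enough to sketch. Here `search` stands in for whatever engine the assistant calls, and URLs are ranked by how many query variants surface them:

        from collections import Counter
        from typing import Callable

        def aggregate(variants: list[str],
                      search: Callable[[str], list[str]],
                      top_k: int = 10) -> list[tuple[str, int]]:
            """Run one search per keyword variant, then rank URLs by how
            many variants return them among the top results."""
            hits: Counter = Counter()
            for query in variants:
                for url in search(query)[:top_k]:
                    hits[url] += 1
            return hits.most_common()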

    • simonw 20 hours ago

      I haven't tried Kagi's product here yet. Do you know which LLM it uses under the hood?

      Edit: from https://help.kagi.com/kagi/ai/assistant.html it looks like the answer is "all of them":

      > Access to the latest and most performant large language models from OpenAI, Anthropic, Meta, Google, Mistral, Amazon, Alibaba and DeepSeek

      • dcre 20 hours ago

        Yep, the regular paid Kagi sub comes with the cheap models included: GPT-4o mini, Gemini 2.5 Flash, etc. If you pay extra you can get the SOTA models, though IMO Flash is good enough for most stuff if the search result context is good.

  • intended 20 hours ago

    I find that these conversations on HN end up covering similar positions constantly.

    I believe that most positions are resolved if

    1) you accept that these are fundamentally narrative tools. They build stories, in whatever style you wish. Stories of code, stories of project reports, stories of conversations.

    2) this is balanced by the idea that the core of everything in our shared information economy is Verification.

    The reason experts get use out of these tools, is because they can verify when the output is close enough to be indistinguishable from expert effort.

    Domain experts also do another level of verification (hopefully) which is to check if the generated content computes correctly as a result - based on their mental model of their domain.

    I would predict that LLMs are deadly in the hands of people who can’t gauge the output; they will end up driving themselves off a cliff. Experts, meanwhile, will be able to use them effectively on tasks where verifying the output takes less effort than producing it.

    • gh0stcat 16 hours ago

      You've perfectly captured my experience as well. I typically only trust LLMs and have good experiences with them when I have enough domain expertise to be at least 95% confident the output is correct (specific to my domain of work; I don't always need "perfect"). I can also mostly use them as a first pass for getting an idea of where to begin research; after that I lose confidence that the more detailed and advanced content they give me is accurate. There is a gray area, though, where a domain expert might have a false sense of confidence and over time experience "skill drift": losing expertise because they are only ever verifying a lossy compression of information, rather than resetting their context with real-world information. I am mostly concerned with that last bit.

    • ilrwbwrkhv 11 hours ago

      Yup, a succinct summary of the current state. This holds across domains, from research to software engineering.

  • jonas_b 2 hours ago

    A common Google-searching problem I encounter is something like this:

    I need to get from A to B via C via public transport in a big metropolis.

    Now C could be one of, say, 5 different locations of a bank branch, electronics retailer, blood test lab, or whatever, so there are multiple ways of going about this.

    I would like a chatbot solution that compares all the different options and lays them out ranked by time from A to B. Is this doable today?

  • saulpw 20 hours ago

    I tried it recently. I asked for videochat services like the one I use (WB) with 2 specific features that the most commonly used services don't have. It asked some clarifying questions and seemed to understand the mission, then went off for 10 minutes after which it returned 5 results in a table.

    The first result was WB, which I gave to it as the first example and am already using. Results 2 and 3 were the mainstream services which it helpfully marked in the table as not having the features I need. Result 4 looked promising but was discontinued 3 years ago. Result 5 was an actual option which I'm trying out (but may not work for other reasons).

    So, 1/5 usable results. That was mildly helpful I guess, but it appeared a lot more helpful on the surface than it was. And I don't seem to have the ability to say "nice try but dig deeper".

    • Gracana 16 hours ago

      You can tell it to try again. It took me a couple rounds with the tool before I noticed that your conversation after the initial research isn't limited to just chatting: if you select the "deep research" button on your message, it will run the search process in its response.

    • simonw 20 hours ago

      That sounds like a Deep Research query, was that with OpenAI or Gemini?

      • saulpw 19 hours ago

        This was OpenAI.

  • blackhaz 3 hours ago

    This is surprising. o3 produces an incredible amount of hallucinations for me, and there are lots of Reddit threads about it. I've had to roll back to another model because it just swamps everything in made-up facts. But sometimes it is frighteningly smart. Reading its output sometimes feels like I'm missing IQ points.

  • in_ab 2 hours ago

    Claude doesn't seem to have a built-in search tool, but I tried this with an MCP server that searches Google and it gives similar results.

  • btbuildem 20 hours ago

    It's a relevant question about the economic model for the web. On one hand, the replacement of search with a LLM-based approach threatens the existing, advertising-based model. On the other hand, the advertising model has produced so much harm: literally irreparable damage to attention spans, outrage-driven "engagement", and the general enshittification of the internet to mention just a few. I find it a bit hard to imagine whatever succeeds it will be worse for us collectively.

    My question is, how to reproduce this level of functionality locally, in a "home lab" type setting. I fully expect the various AI companies to follow the exact same business model as any other VC-funded tech outfit: free service (you're the product) -> paid service (you're still the product) -> paid service with advertising baked in (now you're unabashedly the product).

    I fear that with LLM-based offerings, the advertising will be increasingly inseparable, and eventually undetectable, from the actual useful information we seek. I'd like to get a "clean" capsule of the world's compendium of knowledge with this amazing ability to self-reason, before it's truly corrupted.

    • fzzzy 20 hours ago

      You need a copy of R1 and enough RAM to run it, plus a web-searching tool or a RAG database with your personal data store.
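
      As a sketch of the wiring (assuming Ollama as the local runner with an R1 model pulled, and some search or RAG layer producing text snippets):

          import ollama  # pip install ollama; assumes `ollama pull deepseek-r1`

          def local_answer(question: str, snippets: list[str]) -> str:
              """Feed retrieved snippets to a local R1 as grounding context."""
              context = "\n\n".join(snippets)
              resp = ollama.chat(model="deepseek-r1", messages=[
                  {"role": "user",
                   "content": f"Using only these sources:\n{context}\n\nAnswer: {question}"},
              ])
              return resp["message"]["content"]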

      • btbuildem 8 hours ago

        R1 would be the reasoning model - as in, the initial part of the output is the "train of thought" revealed before the "final answer" is provided. I was able to deploy a heavily quantized version locally and run it with RAG (Open WebUI in this instance), with web search enabled, sure, but it's still a far cry from an actual "research" model that knows when and how to seek extra data / information.

  • softwaredoug 16 hours ago

    I wonder when Google search will let me "chat" with the search results. I often want to ask the AI Overview follow up questions.

    I secondarily wonder how an LLM solves the trust problem in web search, which was traditionally solved (and is now gamed) through PageRank. ChatGPT doesn't seem to be as easily fooled by spam as direct search is.

    How much of this is Bing (or whatever the underlying search engine is) getting better, versus LLMs getting better at knowing what a good result for a query looks like?

    Or perhaps it has to do with the richer questions that get asked in chat vs. search?

    • vunderba 15 hours ago

      > I wonder when Google search will let me "chat" with the search results.

      You don't hear a lot of buzz around them, but that's kind of what Perplexity lets you do. (Possibly Phind too, but it's been a while since I used them.)

    • KTibow 15 hours ago

      When AI Overview was called Search Generative Experience, you could do that. You can do that again now if you have access to AI Mode.

    • dingnuts 15 hours ago

      >I wonder when Google search will let me "chat" with the search results

      Kagi has this already, it's great. Choose a result, click the three-dot menu, choose "Ask questions about this page." I love to do this with hosted man pages to discover ways to combine the available flags (and to discover what is there)

      I find most code LLMs write to be subpar but Kagi can definitely write a better ffmpeg line than I can when I use this approach

  • xp84 13 hours ago

    From article:

    > “Google is still showing slop for Encanto 2!” (Link is provided)

    I believe quite strongly that Google is making a serious misstep in this area, the "supposed answer text pinned at the top above the actual search results."

    For years they showed something in this area which was directly quoted from what I assume was a shortlist of non-BS sites, so users were conditioned for years that if they just wanted a simple answer, like when a certain movie came out or whether a certain show had been canceled, they may as well trust it.

    Now it seems like they have given that previous real estate over to a far less reliable feature, which simply feeds any old garbage it finds anywhere into a credulous LLM and takes whatever pops out. 90% of the people I witness using Google today simply read that text and never click any results.

    As a result, Google is now pretty much always even less accurate at the job of answering questions than if you posed the same question to ChatGPT, because GPT seems to draw from its overall weights, which tend toward basic reality, whereas Google's "answer" seems to summarize a random 1-5 articles from the spam web, with zero discrimination between fact, satire, fiction, and propaganda. How can they keep doing this and not expect it to go badly?

    • ljsprague 2 hours ago

      I have stopped using Google when I have a random fact I need answered. It's faster to ask ChatGPT, and I trust it enough now.

  • 63 13 hours ago

    One downside I found is that the LLM cannot change its initial prompt until it's done thinking. I used Deep Research to compare counseling centers for me, but of course when it encountered some factor I hadn't thought of (e.g. the counselors here fit the criteria perfectly but none accept my insurance), it didn't know that it ought to skip that site entirely. Really this is a critique of the deep-research approach rather than search in general, but I imagine it can still play out on smaller scales. Often, searching for information is a dynamic process involving the discovery of unknown unknowns and adjusting based on that, but AI isn't great at abstract goals or at stopping to ask clarifying questions before resuming. Ultimately, the report I got wasn't useless, but it mostly just regurgitated the top 3 Google results. I got much better recommendations by reaching out to a friend who works in the field.

  • sublimefire 10 hours ago

    I prefer tools like GPT Researcher, where you are in control of the sources and search engines. Sometimes you just need to use arXiv, sometimes you mix research with the docs you have, sometimes you want to use different models. I believe the future is in choosing what you need for the specific task at that moment, e.g. 3D model generation mixed with something else, and this all requires some sort of new "OS"-level application to run from.

    Individual model vendors cannot build such a product, as they are biased towards their own models; they would not allow you to choose models from competitors.

  • baq 19 hours ago

    > I can feel my usage of Google search taking a nosedive already.

    Conveniently, Gemini is the best frontier model for everything else; Google is very interested and well positioned (if not best positioned?) to also be the best at deep research. Let's check back in 3-6 months.

    • jillesvangurp 16 hours ago

      Google has two advantages:

      1) Their AI models aren't half bad. Gemini 2.5 seems to be doing quite well relative to some competitors.

      2) They know how to scale this stuff. They have their own hardware, lots of data, etc.

      Scaling is of course the hard part. Doing things at Google scale means doing it well while still making a profit. Most AI companies are just converting VC cash into GPUs and energy. VC subsidized AI is nice at a small scale but cripplingly expensive at a larger scale. Google can't do this; they are too large for that. But they are vertically integrated, build their own data centers, with their own TPUs, etc. So, once this starts happening at their scale, they might just have an advantage.

      A lot of what we are seeing is them learning to walk before they start running faster. Most of the world has no clue what Perplexity is, or any notion of the pros and cons of Claude 3.7 Sonnet vs. o4-mini-high. None of that stuff matters long term. What matters is who can do this stuff well enough for billions of people.

      So, I wouldn't count them out. But none of this stuff guarantees success either, of course.

    • throwup238 18 hours ago

      IMO they’re already the best. Not only is the rate limit much higher (20/day instead of OpenAI’s 10/month) but Gemini is capable of looking at far more sources, on the order of 10x.

      I just had a research report last night that looked at 400 sources when I asked it to help identify a first edition Origin of Species (it did a great job too, correctly explaining how to identify a true first edition from chimeral ones).

  • energy123 19 hours ago

    > The user-facing Google Gemini app can search too, but it doesn’t show me what it’s searching for.

    Gemini 2.5 Pro is also capable of searching as part of its chain of thought. It needs light prodding to show URLs, but it'll do so, and it's good at it.

    Unrelated point, but I'm going to keep saying this anywhere Google engineers may be reading: the main problem with Gemini is its horrendous web app, riddled with 5 annoying bugs that I identified as a casual user within a week. I assume it's in such a bad state because they don't actually use the app, they use the API, but come on. You solved the hard problem of making the world's best overall model and you are squandering it on the world's worst user interface.

    • loufe 19 hours ago

      There must be some form of memory leak in AI Studio, as I have to close the tab and open a new one after about 2 hours as it slowly grinds my slower computers to a halt. Its inability to create a markdown file without escaping the markdown itself (including code snippets) is definitely the first thing I'd suggest they fix.

      It's a great tool, but sometimes frustrating.

  • jeffbee 9 hours ago

    The Deep Research stuff is crazy good. It solves the issue that I can often no longer find articles that I know are out there. Example: yesterday I was holding forth on the socials about how 25 years ago my local government did such and such thing to screw up an apartment development at the site of an old movie theater, but I couldn't think of the names of any of the principals. After Googling for a bit I used a Deep Research bot to chase it down for me, and while it was doing that I made a sandwich. When I came back it had compiled a bunch of contemporaneous news articles from really obscure bloggers, plus allusions to public records it couldn't access but was confident existed, that I later found using the URLs and suggested search texts.

  • swyx 16 hours ago

    > Deep Research, from three different vendors

      Don't forget xAI's Grok!

    • M4v3R 16 hours ago

      Which, at least in my experience, is surprisingly good while being much faster than the others.

    • ilrwbwrkhv 9 hours ago

      Horrible compared to SOTA. I only see it mentioned by random AI influencers who are a waste of air and who live on Twitter.

    • fudged71 12 hours ago

      you.com is surprisingly good for this as well (I like the corporate report PDF export)

  • Havoc 10 hours ago

    Are any of the Deep Research tools pure API cost, or are they all monthly subs?

    • simonw 10 hours ago

      I think the Gemini one may still be available for free.

  • qwertox 20 hours ago

    I feel like the benefit AI gives us programmers is limited. It can be an extremely advanced, accelerating, and helpful assistant, but we're limited to just that: architecting and developing software.

    Biologists, mathematicians, physicists, philosophers and the like seem to have an open-ended benefit from the research which AI is now starting to enable. I kind of envy them.

    Unless one moves into AI research?

    • bluefirebrand 20 hours ago

      I don't think AI is trustworthy or accurate enough to be valuable for anyone trying to do real science

      That doesn't mean they won't try though. I think the replication crisis has illustrated how many researchers actually care about correctness versus just publishing papers

      • simonw 20 hours ago

        If you're a skilled researcher I expect you should be able to get great results out of unreliable AI assistants already.

        Scientists are meant to be good at verifying and double-checking results - similar to how journalists have to learn to derive the truth from unreliable sources.

        These are skills that turn out to be crucial when working with LLMs.

        • bluefirebrand 19 hours ago

          > Scientists are meant to be good at verifying and double-checking results

          Verifying and double-checking results requires replicating experiments, doesn't it?

          > similar to how journalists have to learn to derive the truth from unreliable sources

          I think maybe you are giving journalists too much credit here, or you have a very low standard for "truth"

          You cannot, no matter how good you are, derive truth from faulty data

          • simonw 18 hours ago

            Don't make the mistake of assuming all journalists are the same. There's a big difference between an investigative reporter at a respected publication and someone who gets paid to write clickbait.

            Figuring out that the data is faulty is part of research.

            • bluefirebrand 18 hours ago

              Figuring out that data is faulty is one thing

              There is still no possible way that a journalist can arrive at correct information, no matter how good, if they only have faulty data to go with

              • simonw 17 hours ago

                That's what (good) journalism is: the craft of hunting down sources of information, figuring out how accurate and reliable they are, and piecing together as close to the truth as you can get.

                A friend of mine is an investigative reporter for a major publication. They once told me that an effective trick for figuring out what's happening in a political story is to play different sources off against each other - tell one source snippets of information you've got from another source to see if they'll rebut or support it, or if they'll leak you a new detail because what you've got already makes them look bad.

                Obviously these sources are all inherently biased and flawed! They'll lie to you because they have an agenda. Your job is to figure out that agenda and figure out which bits are true.

                The best way to confirm a fact is to hear about it from multiple sources who don't know who else you are talking to.

                That's part of how the human intelligence side of journalism works. This is why I think journalists are particularly well suited to dealing with LLMs - human sources lie and mislead and hallucinate to them all the time already. They know how to get (as close as possible) to the truth.

        • barbazoo 19 hours ago

          Same with using AI for coding. I can’t imagine someone expecting to use the LLM output verbatim, but maybe I’m just not good enough at prompting.

          • simonw 18 hours ago

            Using AI for coding effectively involves getting very good at testing (both manual and automated) and code review: https://simonwillison.net/2025/Mar/2/hallucinations-in-code/...

            • bluefirebrand 17 hours ago

              Manual testing, automated testing, and code review

                All three of those are things that software engineers rather reliably are bad at and cut corners on, because they are the least engaging and least interesting parts of the job of building software.

              • simonw 17 hours ago

                Yep. Engineers who aren't willing to invest in those skills will have limited success with AI-assisted development.

                I've seen a few people state that they don't like using LLMs because it takes away the fun part (writing the code) and leaves them with the bits they don't enjoy.

                • bluefirebrand 16 hours ago

                  > Engineers who aren't willing to invest in those skills

                  Are bad engineers

                  > AI-assisted development

                  Are also bad engineers

    • parodysbird 19 hours ago

      Biologists, mathematicians, physicists, and philosophers are already the experts who produce the text in their domain that the LLMs might have been trained on...

    • twic 11 hours ago

      Until AI can work a micropipette, it's going to be of fairly marginal use to biologists.

  • oulipo 19 hours ago

    The main "real-world" use cases for AI so far have been:

    - shooting buildings in Gaza https://apnews.com/article/israel-palestinians-ai-weapons-43...

    - compiling a list of information on Government workers in US https://www.msn.com/en-us/news/politics/elon-musk-s-doge-usi...

    - creating a few lousy music videos

    I'd argue we'd be better off SLOWING DOWN with that shit

    • sandspar 7 hours ago

      You seem ideologically motivated instead of truth-motivated, which makes you untrustworthy.

      • oulipo 3 hours ago

        So give some citations of other notable uses?

    • esafak 16 hours ago

      Programming is not real world?

      • oulipo 3 hours ago

        I said "the main use cases", not "the little toys that distract and amuse engineers while they ruin the environment with CO2 emissions".

      • das_keyboard 13 hours ago

        Yeah right. We also got "vibe coding" out of it.