8 comments

  • jeanettesherman 2 hours ago

    Using LLMs for everything is going to be seen as a big fad in a few years. First we try them for everything, then we find what use cases actually make sense, then we scale back. Woe betide our 401(k)s when it happens, though.

    • computably 7 minutes ago

      > Woe betide our 401(k)s when it happens, though.

      The stock market crashes once in a while. Shit happens. The long-term outlook is unlikely to change nearly as much, unless you think there will be systemic macroeconomic changes.

    • drBonkers an hour ago

      This is a concise statement of what I've tried to articulate by analogy to the railroad infrastructure buildout.

      What applications do you think make the most sense so far?

    • simianwords an hour ago

      The paper didn't compare against LLMs, though.

  • glitchc an hour ago

    If there's one problem LLMs have solved, it's language. An LLM may hallucinate, but it does so in grammatically correct English sentences. Additionally, even a local gemma-3-27B can seamlessly switch between languages mid-conversation while maintaining context. That's perhaps the most exciting part for me: we have a bona fide universal translator (that's Star Trek territory), and people seem more focused on its factual accuracy.
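
    Rough sketch of what that looks like in practice, assuming an Ollama-style local server exposing an OpenAI-compatible endpoint (the model tag and URL here are illustrative, not prescriptive):

        from openai import OpenAI

        # Point the standard OpenAI client at a local server
        # (assumes Ollama's default OpenAI-compatible endpoint).
        client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

        # One conversation history carries across languages; the model
        # keeps context while the user switches languages mid-thread.
        messages = [
            {"role": "user", "content": "What's a good book on chess openings?"},
            {"role": "assistant", "content": "Try 'Fundamental Chess Openings' by van der Sterren."},
            {"role": "user", "content": "¿Está disponible en español?"},  # switch to Spanish
        ]

        reply = client.chat.completions.create(
            model="gemma3:27b",  # illustrative local model tag
            messages=messages,
        )
        print(reply.choices[0].message.content)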

  • E-Reverance an hour ago

    I might be misinterpreting, but the LUAR model (which is a transformer) seems to do decently well:

    https://www.nature.com/articles/s41599-025-06340-3/figures/2
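
    If I'm reading it right, the usual pattern with these representation models is to embed two texts and compare the embeddings. Sketch below with a generic sentence-embedding model as a stand-in (LUAR itself has its own checkpoint and loading code, so the model name here is purely illustrative):

        from sentence_transformers import SentenceTransformer, util

        # Generic embedding model standing in for an authorship model.
        model = SentenceTransformer("all-MiniLM-L6-v2")

        text_a = "Sample passage by a known author..."
        text_b = "Unattributed passage to compare..."

        # Embed both texts and score them with cosine similarity;
        # under this approach, a higher score suggests the same author.
        emb = model.encode([text_a, text_b], convert_to_tensor=True)
        score = util.cos_sim(emb[0], emb[1]).item()
        print(f"similarity: {score:.3f}")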

  • z3c0 2 hours ago

    Ha! To think that we're finally back to asking ourselves why we're using generative models for categorization and extraction. I wonder how much money companies have collectively wasted whittling away at square pegs.
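
    For contrast, the square-hole version: a plain discriminative pipeline that outputs a label directly instead of generating text (toy data, purely illustrative):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Toy labeled data; a real task would have many more examples.
        texts = ["refund my order", "app crashes on login",
                 "great product, thanks", "cannot reset password"]
        labels = ["billing", "bug", "praise", "account"]

        # TF-IDF features + logistic regression: a purpose-built
        # classifier rather than a generative model coaxed into
        # emitting a category name.
        clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
        clf.fit(texts, labels)

        print(clf.predict(["the app keeps crashing"]))  # -> likely "bug"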

  • simianwords an hour ago

    It should be obvious that LLMs would beat this with ease. Not sure why the paper deliberately skipped comparing against current LLMs.

    An example of LLMs doing well on similar tasks: https://arxiv.org/abs/2602.16800