Throw more AI at your problems

(frontierai.substack.com)

41 points | by vsreekanti 3 hours ago

18 comments

  • crooked-v an hour ago

    With the current state of "AI", this strikes me as a "I had a problem and used AI, now I have two problems" kind of situation in most cases.

    • zonethundery 6 minutes ago

      It is too bad Erik Naggum did not live to see the AI era.

    • ToucanLoucan an hour ago

      All the snark contained within aside, I'm reminded of that ranting blog post from the person sick of AI that made the rounds a little ways back, which had one huge, cogent point within: that the same companies that can barely manage to ship and maintain their current software are not magically going to overcome that organizational problem set by virtue of using LLMs. Once they add that in, then they're just going to have late-released, poorly made software that happens to have an LLM in it somewhere.

  • Stoids an hour ago

    We aren’t good at creating software systems from reliable and knowable components. A bit skeptical that the future of software is making a Rube Goldberg machine of black box inter-LLM communication.

    • TZubiri an hour ago

      I'm pretty sure this is a satire post

  • l5870uoo9y an hour ago

    RAG doesn’t necessarily give the best results. Essentially it is a technically elegant way to add semantic context to the prompt (for many use cases it is over-engineered). I used to offer RAG SQL query generations on SQLAI.ai, and while I might introduce it again, for most use cases it was overkill and even made working with the SQL generator unpredictable.

    Instead I implemented low-tech “RAG”, or “data source rules”. It’s a list of general rules you can attach to a particular data source (i.e. a database). The rules are included in the generations and work great. Examples are “Wrap tables and columns in quotes” or “Limit results to 100”. It’s simple and effective: I can execute the generated SQL against my DB for insights.
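    A minimal sketch of the “data source rules” idea described above. The class names, the prompt wording, and the example rules are illustrative assumptions, not SQLAI.ai’s actual implementation:

```python
# Sketch: attach plain-text rules to a data source and fold them
# into every SQL-generation prompt. All names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class DataSource:
    name: str
    rules: list[str] = field(default_factory=list)  # general rules for this source


def build_prompt(source: DataSource, question: str) -> str:
    """Prepend the data source's rules to the generation prompt."""
    rule_block = "\n".join(f"- {rule}" for rule in source.rules)
    return (
        f"Generate a SQL query for the {source.name} database.\n"
        f"Follow these rules:\n{rule_block}\n"
        f"Question: {question}"
    )


db = DataSource("analytics", rules=[
    "Wrap tables and columns in quotes",
    "Limit results to 100",
])
prompt = build_prompt(db, "Top customers by revenue")
```

    The appeal over full RAG is that nothing is retrieved or embedded: the rules ride along with every generation, so the output stays predictable.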

    • simonw 31 minutes ago

      What do you mean by "RAG SQL query generations"? Were you searching for example queries similar to the questions the user asked and injecting those examples into the prompt?

    • trhway an hour ago

      Reminds me how, a few years ago, Tesla (Karpathy, if I remember correctly) described that in Autopilot they started to extract a third model and use it to explicitly apply some static rules.

  • headcanon 2 hours ago

    I'll stay out of the inevitable "You're just adding a band aid! What are you really trying to do?" discussion since I kind of see the author's point and I'm generally excited about applying LLMs and ML to more tasks. One thing I've been thinking about is whether an agent (or collection of agents) can solve a problem initially in a non-scalable way through raw inference, but then develop code to make parts of the solution cheaper to run.

    For example, I want to scrape a collection of sites. The agent would at first put the whole HTML into the context to extract the data (expensive, but it works), but then there is another agent that sees this pipeline and says "hey, we can write a parser for this site so each scrape is cheaper", and iteratively replaces that segment in a way that does not disrupt the overall task.
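    The pipeline described above could be sketched as a cheap-parser-first pattern with an LLM fallback. This is a sketch under stated assumptions: `llm_extract` stands in for a real (expensive) model call, and the per-site parser registry is hypothetical:

```python
# Sketch: try a cheap site-specific parser first; fall back to
# expensive whole-page LLM extraction when no parser exists yet.
from typing import Callable

# Hypothetical registry of parsers a second agent has written so far.
PARSERS: dict[str, Callable[[str], dict]] = {}


def llm_extract(html: str) -> dict:
    # Stand-in for a real LLM call over the full page (expensive path).
    return {"title": html.split("<title>")[1].split("</title>")[0]}


def scrape(site: str, html: str) -> dict:
    parser = PARSERS.get(site)
    if parser is not None:
        return parser(html)   # cheap, deterministic path
    return llm_extract(html)  # expensive fallback


# Later, the optimizing agent "promotes" a learned parser for one site,
# replacing the segment without disrupting the overall task:
PARSERS["example.com"] = lambda html: {
    "title": html.split("<title>")[1].split("</title>")[0]
}
```

    The key property is that the swap is invisible to the rest of the pipeline: callers of `scrape` get the same output shape either way.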

    • cyrillite an hour ago

      Well, the standard advice for getting off the ground with most endeavours is “Do things that don’t scale”. Obviously scaling is nice, but sometimes it’s cheaper and faster to brute force it and worry about the rest later.

      The unscalable thing is often like “buy it cheap, buy it twice”, but it’s also often like “buy it cheap, only fix it if you use it enough that it becomes unsuitable”. Makers endorse both attitudes. Knowing which applies when is the challenging bit.

    • malfist an hour ago

      What do you mean the patient is bleeding out? We just need to use more bandaids!

  • hggigg an hour ago

    I wish this was funny but it’s not. We are doing this now. It has become like “because it’s got electrolytes” in our org.

    • glial 29 minutes ago

      Well, at least it's not blockchain or Kubernetes.

      • pqdbr 26 minutes ago

        The blockchain hype train was ridiculous. Textbook "solution looking for a problem" that every consultant was trying to push to every org, which had to jump onboard simply because of FOMO.

        • fluoridation 6 minutes ago

          I don't think that's quite right. It was businesses who were jumping at consultants to see how they could stuff a blockchain into their pipeline to do the same thing they were already doing, all so they could put "now with blockchain!" on the website.

  • keeganpoppen an hour ago

    YES (although i'm hesitant to even say anything because on some level this is tightly-guarded personal proprietary knowledge from the trenches that i hold quite dear). why aren't you spinning off like 100 prompts from one input? it works great in a LOT of situations. better than you think it does/would, no matter your estimation of its efficacy.

    • pizza 6 minutes ago

      100 prompts doing what? Something like more selective, focused extraction of structured fields?