20 comments

  • threecheese 2 hours ago

    I have many more fears than just annoyances :)

    My biggest annoyance is hiding Thinking tokens; I have little trust in these aliens, and seeing how the sausage was made helped me be more comfortable with eating it. Anthropic was the biggest provider that didn’t hide them until recently; they give a good rationale for the switch, but that doesn’t make it less annoying. I also dislike the UX they put around it: “Hmm.”, “I should think about this”, etc.

    • ofabioroma 2 hours ago

      This is WILD. And the fact that everyone just accepts it makes it even worse. We’re basing our daily decisions on a closed chain of thought. Do you see this changing anytime soon?

      I think in the end it all boils down to a trust issue with the big labs

    • ofabioroma 2 hours ago

      By UX you mean the chat interface? Or the lack of transparency of it? Or both?

  • threecheese 2 hours ago

    Like you, I dislike that the providers make it intentionally difficult to retrieve conversation history from their web UIs; you can copy/paste easily, or use the OS Share feature to get a public version, but they make it very difficult to build tooling that extracts the history - it requires website automation or a browser plugin.

    • ofabioroma 2 hours ago

      Exactly. And I feel it gets even worse with each new agent that gets released. Wanna test openclaw? Good luck exporting all of your contexts. What does your AI stack look like? Are you still heavily using the web UIs in your routine?

  • blinkbat 8 hours ago

    Just ask the previous model to provide handoff context

    • ofabioroma 7 hours ago

      Interesting. How do you personally ask that? Do you treat it as a systematic approach? Like agents passing their "DNA" to the next?

      • threecheese 2 hours ago

        I do this every day, because Codex writes my requirements and Claude implements them. Just ask it for whatever you think the next model will need, tell it to be verbose if you like, and even have a second ChatGPT check it if you are worried. You can even give it a format, going as far as providing a specification or template if you do it frequently. Stick that template in both your ChatGPT and Claude projects so one can write it and the other can read it.

        Edit: I shouldn’t admit this, but I even have an ontology defined - RDF and all - for some of my LLM tasks. Its classes contain examples, so it acts like a few-shot instruction, and it’s working scarily well for structuring tasks.
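
        (The handoff-template idea above can be made concrete. A minimal sketch - the field names and prompt wording here are my own invention, not any standard format:)

```python
# Hypothetical handoff template: the outgoing model fills it in at the end of a
# session, and the next model reads it as opening context. Fields are illustrative.
HANDOFF_TEMPLATE = """\
# Handoff
## Goal
{goal}
## Decisions made
{decisions}
## Open questions
{open_questions}
"""

def handoff_prompt(goal: str, decisions: list[str], open_questions: list[str]) -> str:
    """Render the template; the result gets pasted (or piped) into the next model."""
    return HANDOFF_TEMPLATE.format(
        goal=goal,
        decisions="\n".join(f"- {d}" for d in decisions),
        open_questions="\n".join(f"- {q}" for q in open_questions),
    )
```

        Keeping the same template on both sides - one project writes it, the other reads it - is what makes the writer/reader pairing work.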

        • ofabioroma 2 hours ago

          Holy shit. That’s scarily clever. Do you trigger it at a certain max token spend ratio? And do you think it generalizes to passing all kinds of context, or is it tailored to structured tasks?

  • Ekaros 8 hours ago

    I just looked at some of the security recommendations. It seems that building a secure system would be incredibly fiddly and involve a lot of frankly weird and questionable stuff. Like, in essence, probabilistic detection systems in every multi-agent interface.

    Not to even mention the solutions for the most basic things, like prompt injection. Frankly laughable efforts. Nowhere near what I would consider sufficient...

    And somehow they are trying to push this crap everywhere... Before you even have these things in place...

    • ofabioroma 7 hours ago

      You're right. It's tricky. Essentially we want probabilistic systems to behave like deterministic ones. Do you see a light at the end of the tunnel?

      • Ekaros 7 hours ago

        Nope. But I am a sceptic. The best I can see is that there are some somewhat useful cases in addition to the more traditional stuff... And even then there will likely be a lot of extra work for manual verification. And cost effectiveness is probably also a good question.

        • ofabioroma 2 hours ago

          I think when OSS catches up with Opus-like capabilities, we’re talking

  • Finnucane 8 hours ago

    The enormous waste of resources? The environmental destruction? The damage to education? To the labor market? I mean, it's a long list.

    • threecheese 2 hours ago

      I dislike the “brainwashed” comment from sibling, I believe it makes some assumptions. There aren’t any doubts that:

      - AI is extremely resource intensive, consuming electricity, water, silicon, etc. at levels possibly never seen before in humanity’s history; whether that’s a waste or not is subjective
      - Massive datacenters are popping up like anthills, and coupled with R-flavored regulation rollback there is a definite risk of environmental impact - just like during our last industrialization push, when we poisoned much of the country, leading to a massive rollout of environmental protections in the 1970s and 1980s
      - Students are taking advantage of LLMs to shirk school responsibilities. Whether this is damaging or not is subjective until proven, and AI may not be causal here (students may not have been getting the expected value from their education even without LLMs; again, this remains to be proven)
      - Many companies have used AI as a justification for layoffs; who knows what’s actually true, though. There is a very real fear across society that it will continue to impact jobs, and senior AI company leaders are fueling this with public predictions of massive labor shifts. Again, maybe they are lying, but can you blame anyone for worrying?

      There are counterarguments to all of these, but dismissing the fear as uneducated or brainwashed reveals your own priors and ignores all of these facts. It’s healthy to ingest OP’s criticisms - especially on a forum populated mostly by Smart People (tm).

      • ofabioroma 2 hours ago

        I think you’re right. In a very narrow, short term scope. That’s the issue.

        The problem with this argument is that it assumes the world is static. When trains were invented, they polluted a LOT. Technology evolved. Looking backwards, the value they unlocked outweighed by orders of magnitude the short-term pollution they generated. Inefficient in the short term; generation-changing over the longer horizon. Extend the timeframe of your argument. Do you think it holds 20 years from now, when we have more efficient algorithms and energy generation technologies? I don’t think so.

      • ofabioroma 2 hours ago

        Be skeptical of those telling you that technological advances are bad. They usually want something from you. And it’s usually your vote.

        • krapp 2 hours ago

          What political office do you believe Finnucane is running for?

          • ofabioroma an hour ago

            Idk, but I think he left us with a pretty straightforward worldview

    • ofabioroma 7 hours ago

      I think you got brainwashed dude