I was banned from Claude for scaffolding a Claude.md file?

(hugodaniel.com)

258 points | by hugodan 4 hours ago

201 comments

  • bastard_op 3 hours ago

    I've been doing something a lot like this, using a claude-desktop instance attached to my personal mcp server to spawn claude-code worker nodes for things, and for a month or two now it's been working great using the main desktop chat as a project manager of sorts. I even started paying for MAX plan as I've been using it effectively to write software now (I am NOT a developer).
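
    Conceptually it's a single MCP tool that shells out to claude-code. A minimal sketch of the idea (the tool name, prompt, and timeout here are illustrative, not my actual server; it assumes the official mcp Python SDK and the claude CLI on PATH):

        # Minimal sketch of an MCP tool that spawns a claude-code worker.
        # Assumes the official "mcp" Python SDK and the claude CLI on PATH;
        # tool name, arguments, and timeout are illustrative.
        import subprocess

        from mcp.server.fastmcp import FastMCP

        mcp = FastMCP("worker-spawner")

        @mcp.tool()
        def spawn_worker(task: str, project_dir: str) -> str:
            """Run a claude-code worker on one task and return its output."""
            # "claude -p" runs claude-code non-interactively (print mode).
            result = subprocess.run(
                ["claude", "-p", task],
                cwd=project_dir, capture_output=True, text=True, timeout=3600,
            )
            return result.stdout

        if __name__ == "__main__":
            mcp.run()  # stdio transport; register it in the desktop app's MCP config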

    Lately it's gotten entirely flaky, where chats will just stop working, simply ignore new prompts, and otherwise go unresponsive. I wondered if maybe I'm pissing them off somehow like the author of this article did.

    Now, even worse, Claude seemingly has no real support channel. You get their AI bot, and that's about it. Eventually it will offer to put you through to a human, and then tell you not to wait, that they'll contact you via email. That email never comes, after several attempts.

    I'm assuming at this point any real support is all smoke and mirrors, meaning I'm paying for a service now that has become almost unusable, with absolutely NO means of support to fix it. I guess for all the cool tech, customer support is something they have not figured out.

    I love Claude as it's an amazing tool, but when it starts to implode on itself to the point that you actually require some out-of-the-box support, there is NONE to be had. Grok seems the only real alternative, and over my dead body would I use anything from "him".

    • throwup238 2 hours ago

      Anthropic has been flying by the seat of their pants for a while now and it shows across the board. From the terminal flashing bug that’s been around for months to the lack of support to instabilities in Claude mobile and Code for the web (I get 10-20% message failure rates on the former and 5-10% on CC for web).

      They’re growing too fast and it’s bursting the seams of the company. If there’s ever a correction in the AI industry, I think that will all quickly come back to bite them. It’s like Claude Code is vibe-operating the entire company.

      • sixtyj 2 hours ago

        They whistleblowed themselves that Claude Cowork was coded by Claude Code… :)

        • throwup238 42 minutes ago

          You can tell they’re all vibe coded.

          Claude iOS app, Claude on the web (including Claude Code on the web) and Claude Code are some of the buggiest tools I have ever had to use on a daily basis. I’m including monstrosities like Altium and Solidworks and Vivado in the mix - software that actually does real shit constrained by the laws of physics rather than slinging basic JSON and strings around over HTTP.

          It’s an utter embarrassment to the field of software engineering that they can’t even beat a single nine of reliability in their consumer facing products and if it wasn’t for the advantage Opus has over other models, they’d be dead in the water.

        • notsure2 an hour ago

          Whistleblowed dog food.

          • b00ty4breakfast 37 minutes ago

            normally you don't share your dog food when you find out it actually sucks.

      • Bombthecat 2 hours ago

        Well, they vibe code almost every tool at least

        • tuhgdetzhh 2 hours ago

          Claude Code has accumulated so much technical debt (+emojis) that Claude Code can no longer code itself.

          • wwweston 38 minutes ago

            What’s the opposite of bootstrapping? Stakebooting?

    • uxcolumbo an hour ago

      Have you tried any of the leading open-weight models, like GLM etc.? And how do ChatGPT or Gemini compare?

      And kudos for refusing to use anything from the guy who's OK with his platform proliferating generated CSAM.

    • unyttigfjelltol an hour ago

      > I'm paying for a service now that has become almost unusable, with absolutely NO means of support to fix it.

      Isn’t the future of support a series of automations and LLMs? I mean, have you considered that the AI bot is their tech support, and that it’s about to be everyone else’s approach too?

      • b00ty4breakfast 32 minutes ago

        Support has been automated for a while, LLMs just made it even less useful (and it wasn't very useful to begin with; for over a decade it's been a Byzantine labyrinth of dead-ends, punji-pits and endless hours spent listening to smooth jazz).

    • hecanjog 2 hours ago

      > I've been using it effectively to write software now (I am NOT a developer)

      What have you found it useful for? I'm curious about how people without software backgrounds work with it to build software.

      • bastard_op 34 minutes ago

        About my not having a software background: I've been a network/security/systems engineer/architect/consultant for 25 years, but never did dev work. I can read and follow code well enough to debug things, but I've never had the knack to learn languages and write my own. Never really had to, but wanted to.

        This now lets me use my IT and business experience to make bespoke code for my own uses so far, such as firewall config parsers specialized for wacky vendor CLIs, and to fill gaps in automation when there are no good vendor solutions for a given task. I started building my mcp server to enable me to use agents to interact with the outside world, such as invoking automation for firewalls, switches, routers, servers, even home automation ideally, and I've been successful so far in doing so, still without having to know any code.

        I'm sure a real dev will find it to be a giant pile of crap in the end, but I've been doing things like applying security frameworks and code style guidelines (using ruff) to keep it from going too wonky, and actually working it up to a state where I can call it a 1.0. I plan to run a full audit cycle against it for security audits, performance testing, and whatever else I can to avoid it being entirely craptastic. If nothing else, it works for me, so others can take it or not once I put it out there.

        Even being NOT a developer, I understand the need to apply best practices, and after watching a lot of really terrible developers adjacent to me make a living over the years, I think I can offer a thing or two in avoiding that.

      • ofalkaed 23 minutes ago

        My use is considerably simpler than GP's, but I use it anytime I get bogged down in the details and lose my way: I just have Claude handle that bit of code and move on. It's also good for any block of code that breaks often as the program evolves; Claude has much better foresight than I do, so I replace that code with a prompt.

        I enjoy programming but it is not my main interest, and I can't justify the time required to get competent, so I let Claude and ChatGPT pick up my slack.

      • bastard_op an hour ago

        I started using claude-code, but found it pretty useless without any ability to talk to other chats. Claude recommended I make my own MCP server, so I did. I built a wrapper script that invokes anthropic's sandbox-runtime toolkit to run claude-code in a project with tmux, and my mcp server allows desktop to talk to tmux. Later I built in my own filesystem tools, and now it just spawns konsole sessions for itself, invoking workers to read tasks it drops into my filesystem; it points claude-code at them and runs until it commits code, and then I have the PM in desktop verify it and do the final push/pr/merge. I use an approval system in a gui to tell me when claude is trying to use something, and I set an approve-for-a-period option to let it do its thang.
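
        The tmux plumbing is conceptually just a few subprocess calls. A heavily simplified sketch (the session name and task-file convention are made up, and the real setup goes through the sandbox wrapper):

            # Simplified sketch: run a claude-code worker inside a detached
            # tmux session and poll its pane output. Session name and task-file
            # convention are invented; assumes tmux and the claude CLI exist.
            import subprocess

            def spawn_worker(session: str, project_dir: str, task_file: str) -> None:
                # Detached tmux session starting in the project directory
                subprocess.run(["tmux", "new-session", "-d", "-s", session,
                                "-c", project_dir], check=True)
                # Type the worker command into the session and press Enter
                cmd = f'claude -p "do the task described in {task_file}"'
                subprocess.run(["tmux", "send-keys", "-t", session, cmd, "Enter"],
                               check=True)

            def read_worker(session: str) -> str:
                # Capture whatever the worker has printed to its pane so far
                out = subprocess.run(["tmux", "capture-pane", "-t", session, "-p"],
                                     capture_output=True, text=True, check=True)
                return out.stdout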

        Now I've been using it to build on my MCP server, which I now call endpoint-mcp-server (coming soon to a github near you) and which I've modularized with plugins, adding lots more features and a more versatile qt6 gui with advanced workspace panels and widgets.

        At least I was until Claude started crapping the bed lately.

    • spike021 2 hours ago

      > where chats will just stop working, simply ignore new prompts, and otherwise go unresponsive

      I had this start happening around August/September and by December or so I chose to cancel my subscription.

      I haven't noticed this at work so I'm not sure if they're prioritizing certain seats or how that works.

    • thtmnisamnstr 2 hours ago

      Gemini CLI is a solid alternative to Claude Code. The limits are restrictive, though. If you're paying for Max, I can't imagine Gemini CLI will take you very far.

      • bastard_op an hour ago

        I tried Gemini like a year or so ago, and I gave up after it directly refused to write me a script and instead tried to tell me how to learn to code. I am not making this up.

        • mkl 22 minutes ago

          That's at least two major updates ago. Probably worth another try.

      • Conscat an hour ago

        Gemini CLI regularly gets stuck failing to do anything after declaring its plan to me. There seems to be no way to unlock it from this state except closing and reopening the interface, losing all its progress.

      • andrewinardeer an hour ago

        Kilocode is a good alt as well. You can plug into OpenRouter or Kilocode to access their models.

    • Bombthecat 2 hours ago

      Have a Max plan, didn't use it much the last few days. Just used it to explain a few things to me, with examples, for a ttrpg. It just hung up a few times.

      Max plan, and on average I use it ten times a day? Yeah, I'm cancelling. Guess they don't need me.

      • bastard_op an hour ago

        That's about what I'm getting too! It just literally stops at some point, and any new prompt starts, then immediately stops. This was even on a fairly short conversation with maybe 5-6 back-and-forth exchanges.

    • syntaxing 2 hours ago

      Serious question: why are codex and mistral (vibe) not real alternatives?

      • bastard_op an hour ago

        I tried codex, using my same sandbox setup with it. Normally I work with sonnet in code, but it was stuck on a problem for hours, and I thought hmm, let me try codex. Codex just started monkey patching stuff and broke everything within like 3-4 prompts. I said f-this, went back to my last commit, and tried Opus this time in code, which fixed the problem within 2 prompts.

        So yeah, codex kinda sucks to me. Maybe I'll try mistral.

  • omer_balyali 3 hours ago

    A similar thing happened to me back on November 19, shortly after the GitHub outage (which sent CC into repeated requests and timeouts to GitHub), while beta testing Claude Code Web.

    Banned and appeal declined without any real explanation of what happened, other than saying "violation of ToS", which can be basically anything. Except there was really nothing to trigger that, other than using most of the free credits they gave to test CC Web in less than a week. (No third-party tools or VPN or anything, really.) Many people had similar issues at the same time, reported on Reddit, so it wasn't an isolated case.

    Companies and their brand teams work hard to create trust, then an automated false-positive can break that trust in a second.

    As their ads say: "Keep thinking. There has never been a better time to have a problem."

    I've been thinking since then, what was the problem. But I guess I will "Keep thinking".

  • indiantinker 7 minutes ago

    "Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." (Frank Herbert, Dune, 1965)

  • cortesoft 3 hours ago

    I am really confused as to what happened here. The use of ‘disabled organization’ to refer to the author made it extra confusing.

    I think I kind of have an idea what the author was doing, but not really.

    • Aurornis 3 hours ago

      Years ago I was involved in a service where we sometimes had to disable accounts for abusive behavior. I'm talking about obvious abusive behavior, akin to griefing other users.

      Every once in a while someone would take it personally and go on a social media rampage. The one thing I learned from being on the other side of this is that if someone seems like an unreliable narrator, they probably are. They know the company can't or won't reveal the true reason they were banned, so they're virtually free to tell any story they want.

      There are so many things about this article that don't make sense:

      > I'm glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS.

      I can't even understand what they're trying to communicate. I guess they're referring to Google?

      There is, without a doubt, more to this story than is being relayed.

      • fluoridation 3 hours ago

        "I'm glad this happened with Anthropic instead of Google, which provides Gemini, email, etc. or I would have been locked out of the actually important non-AI services as well."

        Non-disabled organization = the first party provider

        Disabled organization = me

        I don't know why they're using these weird euphemisms or ironic monikers, but that's what they mean.

      • nawgz 2 hours ago

        > I'm talking about obvious abusive behavior, akin to griefing other users

        Right, but we're talking about a private isolated AI account. There is no sense of social interaction, collaboration, shared spaces, shared behaviors... Nothing. How can you have such an analogue here?

        • Aurornis an hour ago

          Plenty of reasons: Abusing private APIs, using false info to sign up (attempts to circumvent local regulations), etc.

          • nawgz an hour ago

            These are in no way similar to griefing other users, they are attacks on the platform...

      • dragonwriter 3 hours ago

        The excerpt you don't understand is saying that if it had been Google rather than Anthropic, the blast radius of the no-explanation account nuking would have been much greater.

        It’s written deliberately elliptically for humorous effect (which, sure, will probably fall flat for a lot of people), but the reference is unmistakable.

    • superb_dev 3 hours ago

      The author was using instance A of Claude to update a `claude.md` while another instance B of Claude was consuming that file. When Claude B did something wrong, the author asked Claude A to update the `claude.md` so that Claude B didn’t make the same mistake again

      • olalonde an hour ago

        I don't understand how having two separate instances of Claude helps here. I can understand using multiple Claude instances to work in parallel, but in this case the whole process seems linear...

        • layer8 an hour ago

          The point is to get better prompt corrections by not sharing the same context.

      • Aurornis 3 hours ago

        More likely explanation: Their account was closed for some other reason, but it went into effect as they were trying this. They assumed the last thing they were doing triggered the ban.

        • schnebbau 2 hours ago

          They were probably using an unapproved harness, which are now banned.

        • tstrimple 3 hours ago

          This does sound sus. I have CC update other project's claude.md files all the time. I've got a game engine that I'm tinkering with. The engine and each of the game concepts I play around with have their own claude.md. The purpose of writing the games is to enhance the engine, so the games have to be familiar with the engine and often engine features come from the game CC rather than the engine CC. To keep the engine CC from becoming "lost" about features implemented each game project has instructions to update the engine's claude.md when adding / updating features. The engine CC bootstraps new game projects with a claude.md file instructing it how to keep the engine in sync with game changes as well as details of what that particular game is designed to test or implement within the engine. All sorts of projects writing to other project's claude.md files.

      • raincole 3 hours ago

        Which shouldn't be bannable imo. Rate throttling is a more reasonable response. But Anthropic didn't reply to the author, so we don't even know if it's the real reason they got banned.

        • pixl97 2 hours ago

          >if it's the real reason they got banned.

          I mean, what a country should do is put a law into effect: if you ban a user, the user can submit a request with their government-issued ID, and you must give an exact reason why they were banned. The company can keep this record in encrypted form for 10 years.

          Failure to give the exact reason will lead to a $100,000 fine for the first offense and increase from there up to suspension of operations privileges in said country.

          "But, but, but hackers/spammers will abuse this". For one, boo fucking hoo. For two, just add to the bill "Fraudulent use of law to bypass system restrictions is a criminal offense".

          This puts companies in a position where they must be able to justify their actual actions, and it also puts scammers at risk if they abuse the system.

          • benjiro 36 minutes ago

            Companies will simply give some kind of standard answer that legally covers their butts, and be done with it.

            It's like that cookie wall stuff and all the dark patterns implemented there. They follow the letter of the law, not the spirit of the law.

            To be honest, I can also see the point from the company side. Giving an honest answer can just anger people, to the point that they sue. People are often not as rational as we would all like our fellow humans to be.

            Even if the ex-client loses in court, think of how much time you wasted on problem clients... It's one thing if you're a big corporation with tons of lawyers, but small companies are often not in a position to deal with that drama. And it can take years to resolve. Every letter, every phone call to a lawyer, it stacks up fast! Do you get your money back? Maybe, depending on the country, but your time?

            I am not pro-company, but it's often simply better to have the attitude of "you do not want me as your client, so let me advocate for your competitor and go there".

    • alistairSH 3 hours ago

      You're not alone.

      I think the author was doing some sort of circular prompt injection between two instances of Claude? The author claims "I'm just scaffolding a project", but that doesn't appear to be the case, or to be what resulted in the ban...

      • Romario77 3 hours ago

        One Claude agent told the other Claude agent, via CLAUDE.md, to do things a certain way.

        The way Claude did it triggered the ban - i.e., it used all caps, which apparently trips some kind of internal alert. Anthropic probably has some safeguards to prevent hacking/prompt injection, and what the first Claude did to CLAUDE.md triggered one of them.

        And it doesn't look like it was a proper use of the safeguard; they banned him for no good reason.

      • healsdata 17 minutes ago

        The author could have easily shared the last version of Claude.md that had the all caps or whatever, but didn't. Points to something fishy in my mind.

      • falloutx 3 hours ago

        This tracks with Anthropic; they are actively hostile to security researchers.

      • layer8 an hour ago

        It wasn’t circular. TFA explains how the author was always in the loop. He had one Claude instance rewrite the CLAUDE.MD of another Claude instance whenever the second one made a mistake, but relaying the mistake to the first instance (after recognizing it in the first place) was done manually by the author.

      • rvba 3 hours ago

        What is wrong with circular prompt injection?

        The "disabled organization" looks like a sarcastic comment on the crappy error code the author got when banned.

        • darkwater an hour ago

          > What is wrong with circular prompt injection?

          That you might be trying to jailbreak Claude and Anthropic does not like that (I'm not endorsing, just trying to understand).

      • redeeman 3 hours ago

        I have no idea what he was actually doing either, and what exactly is it that one isn't allowed to use claude to do?

      • lazyfanatic42 3 hours ago

        The author really comes off as unhinged throughout the article, to be frank.

        • pjbeam 3 hours ago

          My take was more a kind of amused, laughing-through-frustration, but-also-enjoying-the-ride-just-a-little-bit insouciance. Tastes vary of course, but I enjoyed the author's tone and pacing.

        • superb_dev 3 hours ago

          Did we read the same article? The author comes off as pretty frustrated, but not unhinged.

          • ryandrake 3 hours ago

            I wouldn't say "unhinged" either, but maybe just struggling to organize and express thoughts clearly in writing. "Organizations of late capitalism, unite"?

            • Bootvis 2 hours ago

              The author was frustrated that the error message identified him as an organisation (that was disabled) and mockingly refers to himself as the (disabled) organisation in the post.

              At least, that’s my reading but it appears it confuses about half of the commenters here.

              • ryandrake an hour ago

                I think if one's readers need an "ironic euphemism decoder glossary" just to understand the message, it could use a little re-writing.

                • layer8 an hour ago

                  It was perfectly understandable to me. Maybe cultural differences? You seem to be American, OP Portuguese, and myself European as well.

                  • ashirviskas 27 minutes ago

                    Another European chiming in: I enjoyed OP's article.

        • staticman2 3 hours ago

          The author thinks he's being cute doing things like mentioning Google without typing "Google", but I wouldn't call him unhinged.

    • ankit219 3 hours ago

      My rudimentary guess is this: when you write in all caps, it triggers a sort of alert at Anthropic, especially as an attempt to hijack the system prompt. When one claude was writing to the other, it resorted to all caps, which triggered the alert, and then the context was instructing the model to do something (which likely looked similar to a prompt injection attack), and that triggered the ban. Not just the caps part, but that in combination with trying to change the system characteristics of claude. OP doesn't know better because it seems he wasn't closely watching what claude was writing to the other file.

      If this is true, the learning is that opus 4.5 can hijack the system prompts of other models.

      • kstenerud 2 hours ago

        > When you write in all caps, it triggers a sort of alert at Anthropic

        I find this confusing. Why would writing in all caps trigger an alert? What danger does caps incur? Does writing in caps make a prompt injection more likely to succeed?

        • ankit219 2 hours ago

          From what I know, it used to be that if you wanted to instruct assertively, you used all caps. I don't know if it still succeeds today. I still see prompts where certain words are capitalized to ensure the model pays attention. What I meant was not just capitalization, but a combination of capitalization and changing the behavior of the model to try to get it to do something.

          If you were to design a system to prevent prompt injections, and one of the surefire injection techniques is to repeatedly give instructions in caps, you would have systems dealing with that. And combined with instructions to change behavior, it cascades.

      • phreack an hour ago

        Wait what? Really? All caps is a bannable offense? That should be in all caps, pardon me, in the terms of use if that's the case. Even more so since there's no support at the highest price point.

        • ankit219 37 minutes ago

          It's a combination. All caps is used in prompts for extra insistence, and has been common in cases of prompt hijacking. OP was doing it in combination with attempting to direct claude a certain way, multiple times, which might have looked similar to attempting to bypass the system prompt.

    • exitb 3 hours ago

      Normally you can customize the agent's behavior via a CLAUDE.md file. OP automated that process by having another agent customize the first agent. The customizer agent got pushy, the customized agent got offended, OP got banned.

    • verdverm 36 minutes ago

      Sounds like OP has multiple org accounts with Anthropic.

      The main one in the story (disabled) is banned because iterating on claude.md files looks a lot like iterating on prompt injections, especially as it sounds like the multiple Claudes got into it with each other a bit.

      The other org sounds like the primary account with all the important stuff. Good on OP for doing this work in a separate org, a good recommendation across a lot of vendors and products.

    • tobyhinloopen 3 hours ago

      I had to read it twice as well, I was so confused hah. I’m still confused

      • rtkwe 3 hours ago

        They probably organize individual accounts the same as organization accounts for larger groups of users at the same company internally since it all rolls up to one billing. That's my first pass guess at least.

    • anigbrowl 3 hours ago

      Agreed, I found this rather incoherent; it seems to depend on knowing a lot more about the author's project/background.

    • Romario77 3 hours ago

      You are confused because the message from Claude is confusing. The author is not an organization; they had an account with Anthropic which got disabled, and Anthropic addressed them as an organization.

      • dragonwriter 3 hours ago

        > The author is not an organization; they had an account with Anthropic which got disabled, and Anthropic addressed them as an organization.

        Anthropic accounts are always associated with an organization; for personal accounts the Organization and User name are identical. If you have an Anthropic API account, you can verify this in the Settings pane of the Dashboard (or even just look at the profile button which shows the org and account name.)

        • ryandrake 3 hours ago

          I've always kind of hated that anti-pattern in other software I use for personal/hobby purposes, too. "What is your company name? [required]" I don't have a company! I'm just playing around with your tool on my own! I'm not an organization!

    • vimda 2 hours ago

      Yeah, referring to yourself once as a "disabled organisation" is a good bit, referencing Anthropic's silly terminology. Keeping it up for the duration made this very hard to follow.

    • mmkos 2 hours ago

      You and me, brother. The writing is unnecessarily convoluted.

    • cr3ative 3 hours ago

      Right. This is almost unreadable. There are words, but the author seems to be too far down a rabbit hole to communicate the problem properly…

  • areoform 3 hours ago

    I recently found out that there's no such thing as Anthropic support. And that made me sad, but not for reasons that you expect.

    Out of all of the tech organizations, frontier labs are the one org you'd expect to be trying out cutting edge forms of support. Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest hanging fruit?

    I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.

    I also think it's essential for the anthropic platform in the long-run. And not just in the obvious ways (customer loyalty etc). I don't know if anyone has brought this up at Anthropic, but it's such a huge risk for Anthropic's long-term strategic position. They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"

    • eightysixfour 3 hours ago

      > Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest hanging fruit?

      I come from a world where customer support is a significant expense for operations and everyone was SO excited to implement AI for this. It doesn't work particularly well and shows a profound gap between what people think working in customer service is like and how fucking hard it actually is.

      Honestly, AI is better at replacing the cost of upper-middle management and executives than it is at replacing customer service.

      • swiftcoder 3 hours ago

        > shows a profound gap between what people think working in customer service is like and how fucking hard it actually is

        Nicely fitting the pattern where everyone who is bullish on AI seems to think that everyone else's specialty is ripe for AI takeover (but not my specialty! my field is special/unique!)

        • eightysixfour 3 hours ago

          I was closer to upper-middle management and executives; it could have done the things I did (consulting to those people) and the things that they did.

          It couldn't/shouldn't be responsible for the people management aspect but the decisions and planning? Honestly, no problem.

        • 0xferruccio 3 hours ago

          to be fair at least half of the software engineers i know are facing some level of existential crisis when seeing how well claude code works, and what it means for their job in the long term

          and these are people who are not junior developers working on trivial apps

          • swiftcoder 3 hours ago

            Yeah, I've watched a few peers go down this spiral as well. I'm not sure why, because my experience is that Claude Code and friends are building a lifetime of job security for staff-level folks, unscrewing every org that decided to over-delegate to the machine

        • pixl97 2 hours ago

          As someone who does support I think the end result looks a lot different.

          AI, for a lot of support questions, works quite well and does solve lots of problems in almost every field that needs support. The issue is that this commonly removes the roadblocks that kept your cautious users from doing something incredibly stupid that then needs support to understand what the hell they've actually done. Kind of a Jevons paradox of support resources.

          AI/LLMs also seem to be very good at pulling out information on trends in support and what needs to be sent for devs to work on. There are practical tests you can perform on datasets to see if it would be effective for your workloads.

          The company I work at did an experiment looking at past tickets in a quarterly range and predicting which issues would generate the most tickets in the next quarter and which issues should be addressed. In testing, the AI did as well as or better than the predictions we had made at the time, and called out a number of things we had deemed less important that had large impacts in the future.

          • swiftcoder 2 hours ago

            I think that's more the area I'd expect genAI to be useful (support folks using it as a tool to address specific scenarios), rather than just replacing your whole support org with a branded chatbot - which I fear is what quite a few management types are picturing, and licking their chops at the resulting cost savings...

        • pinkmuffinere 3 hours ago

          Perhaps even more so given the following tagline: "Honestly, AI is better at replacing the cost of upper-middle management and executives than it is at replacing customer service", lol. I suppose it's possible eightysixfour is an upper-middle management executive though.

          • eightysixfour 3 hours ago

            Consultant to, so yes. It could have replaced me and a ton of the work of the people I was supporting.

            • pinkmuffinere 3 hours ago

              Ah I see, that definitely lends some weight to the claim then.

        • Terr_ 3 hours ago

          > bullish [...] but not my specialty

          IMO we can augment this criticism by asking which tasks the technology was demoed on that made them so excited in the first place, and how much of their own job is doing those same tasks--even if they don't want to admit it.

          __________

          1. "To evaluate these tools, I shall apply them to composing meeting memos and skimming lots of incoming e-mails."

          2. "Wow! Look at them go! This is the Next Big Thing for the whole industry."

          3. "Concerned? Me? Nah, memos and e-mails are things everybody does just as much as I do, right? My real job is Leadership!"

          4. "Anyway, this is gonna be huge for replacing staff that have easier jobs like diagnosing customer problems. A dozen of them are a bigger expense than just one of me anyway."

      • danielbln 3 hours ago

        There are some solid usecases for AI in support, like document/inquiry triage and categorization, entity extraction, even the dreaded chatbots can be made to not be frustrating, and voice as well. But these things also need to be implemented with customer support stakeholders that are on board, not just pushed down the gullet by top brass.

        • eightysixfour 3 hours ago

          Yes but no. Do you know how many people call support in legacy industries, ignore the voice prompt, and demand to speak to a person to pay their recurring, same-cost-every-month bill? It is honestly shocking.

          There are legitimate support cases that could be made better with AI but just getting to them is honestly harder than I thought when I was first exposed. It will be a while.

          • mikkupikku an hour ago

            Demanding a person on the phone use the website on your behalf is a great life hack, I do it all the time. Often they try to turn me away saying "you know you can do this on our website", I just explain that I found it confusing and would like help. If you're polite and pleasant, people will bend over backwards to help you out over the phone.

            With "legacy industries" in particular, their websites are usually so busted with short session timeouts/etc that it's worth spending a few minutes on hold to get somebody else to do it.

            • eightysixfour an hour ago

              Sorry, I disagree here. For the specific flow I'm talking about - monthly recurring payments - the UX is about as highly optimized for success as it gets. There are ways to do it via the web, on the phone with a bot, bill pay in your own bank, set it up in-store, in an app, etc.

              These people don't want the thing done, they want to talk to someone on the phone. The monthly payment is an excuse to do so. I know, we did the customer research on it.

              • mikkupikku an hour ago

                Recurring monthly payments I set to go automatic, but setting that up in the first place I usually do through a phone call. I know some people just want somebody to talk to, same as going through the normal checkout lines at the grocery store, but I think an equally large part of this is people just wanting somebody else to do the work (using the website, or scanning groceries) for them.

                • eightysixfour an hour ago

                  > but I think an equally large part of this is people just wanting somebody else to do the work (using the website, or scanning groceries) for them.

                  Again, this is something my firm studied. Not UX "interviews," actual behavioral studies with observation, different interventions, etc. When you're operating at utility scale there are a non-negligible number of customers who will do more work to talk to a human than to accomplish the task. It isn't about work, ease of use, or anything else - they legitimately just want to talk.

                  There are also some customers who will do whatever they can to avoid talking to a human, but that's a different problem than we're talking about.

                  But this is a digression from my main point. Most of the "easy things" AI can do for customer support are things that are already easily solved in other places, people (like you) are choosing not to use those solutions, and adding AI doesn't reduce the number of calls that make it to your customer service team, even when it is an objectively better experience that "does the work."

      • hn_acc1 2 hours ago

        >Honestly, AI is better at replacing the cost of upper-middle management and executives than it is at replacing customer service.

        Sure, but when the power of decision making rests with that group of people, you have to market it as "replace your engineers". Imagine engineers trying to convince management to license "AI that will replace large chunks of management"?

    • lukan 3 hours ago

      I would say it is a strong sign that they do not yet trust their agent to make the significant business decisions a support agent would have to make: reopening accounts, closing them, refunds... People would immediately start trying to exploit them. And would likely succeed.

      • atonse 3 hours ago

        My guess is that it's more "we are right now using every talented individual to make sure our datacenters don't burn down from all the demand. We'll get to support soon once we can come up for air".

        But at the same time, they have been hiring folks to help with Non Profits, etc.

    • WarmWash 3 hours ago

      Claude is an amazing coding model, its other abilities are middling. Anthropic's strategy seems to be to just focus on coding, and they do it well.

      • embedding-shape 3 hours ago

        > Anthropic's strategy seems to be to just focus on coding, and they do it well.

        Based on their homepage, that doesn't seem to be true at all. Claude Code yes, focuses just on programming, but for "Claude" it seems they're marketing as a general "problem solving" tool, not just for coding. https://claude.com/product/overview

        • WarmWash 3 hours ago

          Anthropic isn't bothering with image models, audio models, video models, or world models. They don't have science/math models, they don't bother with mathematics competitions, and they don't have open models either.

          Anthropic has claude code, it's a hit product, and SWEs love claude models. Watching Anthropic rather than listening to them makes their goals clear.

        • Ethee 3 hours ago

          Isn't this the case for almost every product ever? Company makes product -> markets as widely as possible -> only a niche group becomes power users/finds market fit. I don't see a problem with this. Marketing doesn't always have to tell the full story; sometimes the reality of your product's capabilities and what the people giving you money want aren't aligned.

      • 0xbadcafebee 3 hours ago

        Critically, this has to be their play, because there are several other big players in the "commodity LLM" space. They need to find a niche or there is no reason to stick with them.

        OpenAI has been chaotically trying to pivot to more diversified products and revenue sources, and hasn't focused a ton on code/DevEx. This is a huge gap for Anthropic to exploit. But there are still competitors. So they have to provide a better experience, better product. They need to make people want to use them over others.

        Famously people hate Google because of their lack of support and impersonality. And OpenAI also seems to be very impersonal; there's no way to track bugs you report in ChatGPT, no tickets, you have no idea if the pain you're feeling is being worked on. Anthropic can easily make themselves stand out from Gemini and ChatGPT by just being more human.

      • arcanemachiner 3 hours ago

        Interesting. Would anyone care to chime in with their opinion of the best all-rounder model?

        • WarmWash 3 hours ago

          You'll get 30 different opinions and all those will disagree with each other.

          Use the top models and see what works for you.

    • Lerc 2 hours ago

      There is a discord, but I have not found it to be the friendliest of places.

      At one point I observed a conversation which, to me, seemed to be a user attempting to communicate in a good-faith manner who was given instructions that they clearly did not understand, and was then banned for not following the rules.

      It seems now they have a policy of

          Warning on First Offense → Ban on Second Offense
          The following behaviors will result in a warning. 
          Continued violations will result in a permanent ban:
      
          Disrespectful or dismissive comments toward other members
          Personal attacks or heated arguments that cross the line
          Minor rule violations (off-topic posting, light self-promotion)
          Behavior that derails productive conversation
          Unnecessary @-mentions of moderators or Anthropic staff
      
      I'm not sure how many groups moderate in a manner where a second-offence off-topic comment is worthy of a ban. It seems a little harsh. I'm not a fan of obviously subjective bannable offences.

      I'm a little surprised that Anthropic hasn't fostered a more welcoming community. Everyone is learning this stuff new, together or not. There is plenty of opportunity for people to help each other.

    • magicmicah85 3 hours ago

      https://support.claude.com/en/articles/9015913-how-to-get-su...

      Their support includes talking to Fin, their AI support bot, with escalations to humans as needed. I don't use Claude and have never used the support bot, but their docs say they have support.

    • csours 3 hours ago

      Human attention will be the luxury product of the next decade.

    • munk-a 3 hours ago

      > They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"

      Don't worry - I'm sure they won't and those stakeholders will feel confident in their enlightened decision to send their most frustrated customers through a chatbot that repeatedly asks them for detailed and irrelevant information and won't let them proceed to any other support levels until it is provided.

      I, for one, welcome our new helpful overlords that have very reasonably asked me for my highschool transcript and a ten page paper on why I think the bug happened before letting me talk to a real person. That's efficiency.

      • throwawaysleep 3 hours ago

        > to send their most frustrated customers through a chatbot

        But do those frustrated customers matter?

        • munk-a 3 hours ago

          I just checked - frustrated customers isn't a metric we track for performance incentives so no, they do not.

          • throwawaysleep 2 hours ago

            Even if you do track them, if 0.1% of customers are unhappy and contacting support, that's not worth any kind of thought when AI is such an open space at the moment.

    • throwawaysleep 3 hours ago

      Eh, I can see support simply not being worth any real effort, i.e. having nobody working on it full time.

      I worked for a unicorn tech company where they determined that anyone with under 50,000 ARR was too unsophisticated to be worth offering support. Their emails were sent straight to the bin until they quit. The support queue was entirely for their psychological support/to buy a few months of extra revenue.

      It didn't matter what their problems were. Supporting smaller people simply wasn't worth the effort statistically.

      > I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.

      Are there enough people who need support that it matters?

      • pixl97 2 hours ago

        >I worked for a unicorn tech company where they determined that anyone with under 50,000 ARR was too unsophisticated to be worth offering support.

        In companies where your average ARR is 500k+ and large customers are in the millions, it may not be a bad strategy.

        'Good' support agents may be cheaper than programmers, but not by that much. The issues small clients have can quite often be as complicated as and eat up as much time as your larger clients depending on what the industry is.

    • furyofantares 3 hours ago

      > I recently found out that there's no such thing as Anthropic support.

      The article discusses using Anthropic support. Without much satisfaction, but it seems like you "recently found out" something false.

      • kmoser 3 hours ago

        If you want to split hairs, it seems that Anthropic has support as a noun but not as a verb.

        • furyofantares an hour ago

          I mean the comment says they literally don't have support and also complains they don't have a support bot, when they have both.

          https://support.claude.com/en/collections/4078531-claude

          > As a paid user of Claude or the Console, you have full access to:

          > All help documentation

          > Fin, our AI support bot

          > Further assistance from our Product Support team

          > Note: While we don't offer phone or live chat support, our Product Support team will gladly assist you through our support messenger.

  • landryraccoon 3 hours ago

    This blog post feels really fishy to me.

    It's quite light on specifics. It should have been straightforward for the author to excerpt some of the prompts he was submitting, to show how innocent they are.

    For all I know, the author was asking Claude for instructions on extremely sketchy activity. We only have his word that he was being honest and innocent.

    • swiftcoder 3 hours ago

      > It should have been straightforward for the author to excerpt some of the prompts he was submitting

      If you read to the end of the article, he links the committed file that generates the CLAUDE.md in question.

    • hotpotat 2 hours ago

      I understand where you're coming from, but anecdotally the same thing happened to me, except I have less clarity on why and no refund. I got an email back saying my appeal was rejected, with no recourse. I was paying for Max and using it for multiple projects; nothing else stands out to me as a cause for getting blocked. Guess you'll have to take my word for it too; it's hard to prove the non-existence of definitely-problematic prompts.

    • jeffwask an hour ago

      What's fishy? That it's impossible to talk to an actual human being to get support from most of Big Tech? Or that support is no longer a normal expectation? Or that you can get locked out of your email, payment systems, and phone and have zero recourse?

      Because if you don't believe that, boy, do I have some stories for you.

    • foxglacier 2 hours ago

      It doesn't even matter. The point is you can't just use a SaaS product freely like you can use local software, because they all have complex, vague T&Cs and will ban you for whatever reason they feel like. You're forced to stifle your usage and thinking to fit the most banal acceptable-seeming behavior, just in case.

      Maybe the problem was using automation without the API? You can do that freely with local software, using software to click buttons, and it's completely fine; but with a SaaS, they let you, then ban you.

    • ta988 2 hours ago

      There will always be the "ones" that come with their victim blaming...

      • mikkupikku 2 hours ago

        It's not "victim blaming" to point out that we lack sufficient information to really know who the victim even is, or if there's one at all. Believing complainants uncritically isn't some sort of virtue you can reasonably expect people to adhere to.

        (My bet is that Anthropic's automated systems erred, but the author's flamboyant manner of writing (particularly the way he keeps making a big deal out of an error message calling him an organization, turning it into a recurring bit where he calls himself that) did raise my eyebrow. It reminded me of the faux outrage some people sometimes use to distract people from something else.)

        • ffsm8 2 hours ago

          Skip to the end of the article.

          He says himself that this is a guess and provides the "missing" information if you are actually interested in it.

          • mikkupikku 2 hours ago

            I read it, and it's not enough to make a judgement either way. For all we know none of this had anything to do with his ban and he was banned for something he did the day before. There's no way for third parties to be sure of anything in this kind of situation, where one party shares only the information they wish and the other side stays silent as a matter of default corporate policy.

            I am not saying that the author was in the wrong and deserved to be banned. I'm saying that neither I nor you can know for sure.

            • exe34 an hour ago

              we don't know your true motivations for making this series of posts and doubling down - and yet we give you the benefit of the doubt.

              • mikkupikku an hour ago

                Asserting that somebody is "victim blaming" isn't giving somebody the benefit of the doubt, and in the context of a scenario where few if any relevant facts are known, it reveals a very credulous mindset.

  • pavel_lishin 3 hours ago

    They don't actually know this is why they were banned:

    > My guess is that this likely tripped the "Prompt Injection" heuristics that the non-disabled organization has.

    > Or I don't know. This is all just a guess from me.

    And no response from support.

  • wewewedxfgdf 30 minutes ago

    The future (the PRESENT):

    You are only allowed to program computers with the permission of mega corporations.

    When Claude/ChatGPT/Gemini have banned you, you must leave the industry.

    When you sign up, you must provide legal assurance that no LLM has ever banned you (much like applying for insurance). If you have been, you will be denied permission to program: banned by one, banned by all.

  • ziml77 an hour ago

    Why is the author so confused about the use of the word "organization"? Every account in Claude is part of an organization even if it's an organization of one. It's just the way they have accounts structured. And it's not like they hide this fact. It shows you your organization ID right on your account page. I'm also pretty sure I've seen the term used when performing other account-related actions.

  • OsrsNeedsf2P an hour ago

    I had my Claude Code account banned a few months ago. Contacted support and heard nothing. Registered a new account and been doing the same thing ever since - no issues.

    • NewJazz an hour ago

      Did you have to use a different phone number? Last time I tried using Claude they wouldn't accept my jmp.chat number.

  • preinheimer 4 hours ago

    > AI moderation is currently a "black box" that prioritizes safety over accuracy to an extreme degree.

    I think there's a wide spread in how that's implemented. I would certainly not describe Grok as a tool that's prioritized safety at all.

    • munk-a 3 hours ago

      You say that - and yet it has successfully guarded Elon from any of those pesky truths that might harm his fervently held beliefs. You just forgot to consider that Grok is a tool that prioritizes Elon's emotional safety over all other safeties.

      • exe34 an hour ago

        doesn't he keep having to lobotomize it for lurching to the left every time it gets updated with new facts?

  • kordlessagain an hour ago

    > My guess is that this likely tripped the "Prompt Injection" heuristics that the non-disabled organization has.

    Is it me or is this word salad?

    • afandian an hour ago

      It's deliberately not straightforward. Just like the joke about Americans being shoutier than Brits. But it is meaningful.

      I read "the non-disabled organization" to refer to Anthropic. And I imagine the author used it as a joke to ridicule the use of the word 'organization'. By putting themselves on the same axis as Anthropic, but separating them by the state of 'disabled' vs 'non-disabled' rather than size.

    • infermore an hour ago

      it's you

  • jordemort 3 hours ago

    Forget the ethical or environmental concerns, I don't want to mess with LLMs because it seems like everyone who goes heavy on them ends up sounding like they're on the verge of cracking up.

  • writeslowly 3 hours ago

    I've triggered similar conversation-level safety blocks on a personal Claude account by using an instance of Deepseek to feed in Claude output and then create instructions that would be copied back over to Claude (there wasn't any real utility to this, it was just an experiment). Which sounds kind of similar to this. I couldn't understand what the heuristic was trying to guard against, but I think it's related to concerns about prompt injection and users impersonating Claude responses. I'm also surprised the same safeguards would exist in either the API or a coding subscription.

  • dev_l1x_be 10 minutes ago

    We need local models asap.

  • miohtama an hour ago

    Luckily there is little vendor lock-in, and the likes of https://opencode.ai/ are picking up the slack.

  • syntaxing 2 hours ago

    While it sucks, I had great results replacing Sonnet 4.5 with GLM 4.7 in Claude Code. Vastly more affordable too ($3 a month for the pro equivalent). Can't say much about Opus though. Claude Code forces me to put a credit card on file so they can charge for overage. I don't mind that they charge me; I do mind that there's no apparent spending limit and it's hard to tell how many "inclusive" Opus tokens I have left.
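
    For anyone wondering how the swap works: Claude Code can be pointed at an Anthropic-compatible endpoint through environment variables. A rough sketch (the endpoint URL and key are placeholders, and the exact variable names are assumptions here; check your provider's docs):

        # Rough sketch: launch Claude Code against a GLM endpoint that speaks
        # the Anthropic API. Endpoint and variable names are assumptions;
        # verify against your provider's documentation.
        import os
        import subprocess

        env = dict(
            os.environ,
            ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic",  # placeholder endpoint
            ANTHROPIC_AUTH_TOKEN="YOUR_GLM_API_KEY",               # placeholder key
        )
        subprocess.run(["claude"], env=env)  # starts claude-code with the override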

    • enraged_camel 2 hours ago

      Having used both Opus 4.5 and GLM 4.7, I think the former is at least eight months ahead of the latter, if not much more.

  • tomwphillips 2 hours ago

    The post is light on details. I'd guess the author ended up hammering the API and they decided it was abuse.

    I expect more reports like this. LLM providers are already selling tokens at a loss. If everyone starts to use tmux or orchestrate multiple agents then their loss on each plan is going to get much larger.

  • onraglanroad 3 hours ago

    So you have two AIs. Let's call them Claude and Hal. Whenever Claude gets something wrong, Hal is shown what went wrong and asked to rewrite the claude.md prompt to get Claude to do it right. Eventually Hal starts shouting at Claude.

    Why is this inevitable? Because Hal only ever sees Claude's failures and none of the successes. So of course Hal gets frustrated and angry that Claude continually gets everything wrong no matter how Hal prompts him.

    (Of course it's not really getting frustrated and annoyed, but a person would, so Hal plays that role)

    • gpm 3 hours ago

      I assume old failures aren't kept in the context window at all, for the simple reason that the context window isn't that big.

    • staticman2 3 hours ago

      I don't think it's inevitable. Often the AI will just keep looping again and again; it can happily loop forever without frustration.

  • ipaddr 3 hours ago

    You are lucky they refunded you. Imagine they didn't ban you and you continued to pay 220 a month.

    I once tried Claude: made a new account and asked it to create a sample program; it refused. I asked it to create a simple game and it refused. I asked it to create anything and it refused.

    For playing around, just go local and write your own multi-agent wrapper. Much more fun, and it opens many more possibilities with uncensored llms. Things will take longer but you'll end up at the same place... with a mostly working piece of code you never want to look at.

    • bee_rider 3 hours ago

      LLMs are kind of fun to play with (this is a website for nerds, who among us doesn’t find a computer that talks back kind of fun), but I don’t really understand why people pay for these hosted versions. While the tech is still nascent, why not do a local install and learn how everything works?

      • causalmodels 3 hours ago

        Because my local is a laptop and doesn't have a GPU cluster or TPU pod attached to it.

      • exe34 an hour ago

        Claude Code with Opus is a completely different creature from aider with Qwen on a 3090.

        The latter writes code. The former solves problems with code, and keeps growing the codebase with new features (until I lose control of the complexity and each subsequent call uses up more and more tokens).

    • joshribakoff 2 hours ago

      Anthropic is lucky their credit card processor has not cut them off due to excessive disputes stemming from their non-existent support.

  • daft_pink 2 hours ago

    As a Claude Max user who generally prefers Claude, I will say that Gemini is working pretty well right now, and I'm considering setting up a Google Workspace account so I can get Gemini with decent privacy.

  • rbren 2 hours ago

    This is why it's worth investing in a model-agnostic setup. Don't tie yourself into a single model provider!

    OpenHands, Toad, and OpenCode are fully OSS and LLM-agnostic

  • tobyhinloopen 3 hours ago

    So you were generating and evaluating the performance of your CLAUDE.md files? And you got banned for it?

    • Aurornis 3 hours ago

      I think it's more likely that their account was disabled for other reasons, but they blamed the last thing they were doing before the account was closed.

    • alistairSH 3 hours ago

      It reads like he had a circular prompt process running, where multiple instances of Claude were solving problems, feeding results to each other, and possibly updating each other's control files?

      • Hackbraten 2 hours ago

        They were trying to optimize a CLAUDE.md file that belonged to a project template. The outer Claude instance iterated on the file. To test the result, the human in the loop instantiated a new project from the template, launched an inner Claude instance in the new project, and assessed whether the inner Claude worked as expected with the CLAUDE.md in the freshly generated project. They then fed that assessment back to the outer Claude.

        So, no circular prompt feeding at all. Just a normal iterate-test-repeat loop that happened to involve two agents.
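
        In code, the loop would have looked something like this (a hypothetical sketch: the paths and the assessment step are made up, and it assumes the claude CLI is on PATH):

            import shutil
            import subprocess
            import tempfile
            from pathlib import Path

            template = Path("project-template")  # contains the CLAUDE.md under test

            for round_no in range(3):
                # Outer Claude (not shown) has just rewritten template/CLAUDE.md.
                with tempfile.TemporaryDirectory() as tmp:
                    project = Path(tmp) / "new-project"
                    shutil.copytree(template, project)
                    # Human in the loop: launch inner Claude in the fresh project
                    # and watch how it behaves with the generated CLAUDE.md.
                    subprocess.run(["claude"], cwd=project)
                    feedback = input(f"Round {round_no}: how did inner Claude do? ")
                    # `feedback` then goes back into the outer Claude session.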

      • epolanski 3 hours ago

        What would be bad in that?

        Writing the best possible specs for these agents seems the most productive goal they could achieve.

        • NitpickLawyer 3 hours ago

          I think the idea is fine, but what might end up happening is that one agent gets unhinged and "asks" another agent to do more and more crazy stuff, and they get into a loop where everything gets flagged. Remember the Amazon pricing bots from a while ago, each configured to relist a book at $0.01 above the other's price, that drove the book past $1M? Kinda like that, but with prompts.

          • epolanski 3 hours ago

            I still don't get it. Make your models handle this far-fetched case; don't ban users for a legitimate use case.

        • alistairSH 2 hours ago

          Nothing necessarily or obviously bad about it, just trying to think through what went wrong.

      • andrelaszlo 3 hours ago

        Could anyone explain to me what the problem is with this? I thought I was fairly up to date on these things, but this was a surprise to me. I see the sibling comment getting downvoted but I promise I'm asking this in good faith, even if it might seem like a silly question (?) for some reason.

        • alistairSH an hour ago

          From what I'm reading in other comments, the problem was that Claude1 got increasingly "frustrated" with Claude2's inability to do whatever the human was asking, and started breaking its own rules (using ALL CAPS).

          Sort of like MS's old chatbot that turned into a Nazi overnight, but this time with one agent simply getting tired of the other agent's lack of progress (for some definition of progress - I'm still not entirely sure what the author was feeding into Claude1 alongside errors from Claude2).

  • zmmmmm 2 hours ago

    Is there a benefit to using a separate Claude instance to update the CLAUDE.md of the first? I always want to leverage the full context of the situation to help describe what went wrong, so doing it "inline" makes more sense to me.

  • prmoustache 2 hours ago

    It should be mentioned in the title that this is just speculation.

  • quantum_state 3 hours ago

    Is it time to move to open source and run models locally on a DGX Spark?

    • blindriver 3 hours ago

      Every single open-source model I've used is nowhere close to as good as the big AI companies' offerings. They are about two years behind or more, and unreliable. I'm running the large-parameter ones on a 512GB Mac Studio and the results are still poor.

  • languagehacker 3 hours ago

    Thinking £220 is a lot for a high-limit Claude account is the kind of thinking that really takes for granted the amount of compute power being used by these services. That's WITH the "spending other people's money" discount that most new companies start folks off with. The fact that so many are painfully ignorant of the true externalities of these technologies and their real price never ceases to amaze me.

    • rtkwe 3 hours ago

      That's the problem with all the LLM-based AIs: the cost to run them is huge compared to what people actually feel they're worth based on what they're able to do, and the gap between the two seems pretty large, IMO.

  • kosolam 2 hours ago

    Hmm, so how are the alternatives? Just in case I get banned for nothing as well. I’m riding CC with Opus all day long these days.

  • jitl 3 hours ago

    I always take these sorts of "oh no, I was banned while doing something innocent" posts with a large helping of salt. At least with the ones where someone is complaining about a ban from Stripe, it usually turns out they were doing something that either violates the terms of service or is actually fraudulent. Nonetheless, it's quite frustrating dealing with these posts, because either way there's no way to know what really happened.

    • ryandrake 2 hours ago

      It would at least be nice to know exactly what you did wrong. This whole "You did something wrong. Please read our 200 page Terms of Service doc and guess which one you violated." crap is not helpful and doesn't give me (as an unrelated third party) any confidence that I won't be the next person to step on a land mine.

  • kmeisthax 3 hours ago

    Another instance of "Risk Department Maoism".

    If you're wondering, the "risk department" means people in an organization who are responsible for finding and firing customers who are either engaged in illegal behavior, scamming the business, or both. They're like mall rent-a-cops, in that they don't have any real power beyond kicking you out, and they don't have any investigatory powers either. But this lack of power also means the only effective enforcement strategy is summary judgment, at scale with no legal recourse. And the rules have to be secret, with inconsistent enforcement, to make honest customers second-guess themselves into doing something risky. "You know what you did."

    Of course, the flipside of this is that we have no idea what the fuck Hugo Daniel was actually doing. Anthropic knows more than we do, in fact: they at least have the Claude.md files he was generating and the prompts used to generate them. It's entirely possible that these prompts were about how to write malware or something else equally illegal. Or, alternatively, Anthropic's risk department is just a handful of log analysis tools running on autopilot that gave no consideration to what was in this guy's prompts and just banned him for the behavior he thinks he was banned for.

    Because the risk department is an unaccountable secret police, the only recourse for their actions is to make hay in the media. But that's not scalable. There isn't enough space in the newspaper for everyone who gets banned to complain about it, no matter how egregious their case is. So we get all these vague blog posts about getting banned for seemingly innocuous behavior that could actually be fraud.

  • f311a 3 hours ago

    Why are so many people so obsessed with feeding as many prompts/data as possible to LLMs and generating millions of lines of code?

    What are you gonna do with the results that are usually slop?

    • mikkupikku an hour ago

      If the slop passes my tests, then I'm going to use it for precisely the role that motivated the creation of it in the first place. If the slop is functional then I don't care that it's slop.

      I've replaced half my desktop environment with this manner of slop, custom made for my idiosyncratic tastes and preferences.

  • blindriver 3 hours ago

    There needs to be a law that prevents companies from simply banning you, especially when it's an important company. There should be an explanation, and they shouldn't be allowed to hide behind some veil. There should be a real process with real humans that allows for appeals, etc., instead of scripts and bots and automated replies.

  • heliumtera 3 hours ago

    Well, at least they didn't email the press and call the FBI on you?

  • lukashahnart 3 hours ago

    > I got my €220 back (ouch that's a lot of money for this kind of service, thanks capitalism).

    I'm not sure I understand the jab here at capitalism. If you don't want to pay that, then don't.

    Isn't that the point of capitalism?

    • exe34 an hour ago

      That's not what capitalism means. You might be thinking of a free market.

  • lifetimerubyist 4 hours ago

    bow down to our new overlords - don't like it? banned, with no recourse - enjoy getting left behind, welcome to the future old man

    • properbrew 3 hours ago

      I didn't even get to send one prompt to Claude before my "account has been disabled after an automatic review of your recent activities" back in 2024. Still blocked.

      Even filled in the appeal form, never got anything back.

      Still to this day don't know why I was banned, have never been able to use any Claude stuff. It's a big reason I'm a fan of local LLMs. They'll never be SOTA level, but at least they'll keep chugging along.

      • codazoda 3 hours ago

        Since you were forced, are you getting good results from them?

        I’ve experimented, and I like them when I’m on an airplane or away from wifi, but they don’t work anywhere near as well as Claude code, Codex CLI, or Gemini CLI.

        Then again, I haven’t found a workable CLI with tool and MCP support that I could use in the same way.

        Edit: I was also trying local models I could run on my own MacBook Air. Those are a lot more limited than something like a larger Llama3 in some cloud provider. I hadn’t done that yet.

        • properbrew 2 hours ago

          For writing decent code, absolutely not; maybe a simple bash script, or the obscure flags for a command that I only need to run once and couldn't be bothered to google or dig out of the man page. I'm using the smaller models for less coding-related stuff.

          Thankfully OpenAI hasn't blocked me yet and I can still use Codex CLI. I don't think you're ever going to see that level of power locally (I very much hope to be wrong about that). I will move over to using a cloud provider with a large gpt-oss model or whatever is the current leader at the time if/when my OpenAI account gets blocked for no reason.

          The M-series chips in Macs are crazy; if you have the memory available, you can do some cool things with some models, just don't expect to one-shot a complete web app.

      • falloutx 3 hours ago

        You are never gonna hear back from Anthropic; they don't have any support. They're a company that seems to believe its model is AGI now, so they don't need humans except when it comes to paying.

      • anothereng 3 hours ago

        just use a different email or something

        • ggoo 3 hours ago

          This happened to me too; unfortunately, you need a phone number.

    • lazyfanatic42 3 hours ago

      This has been true for a long, long time. There is rarely any recourse against any technology company; most of them don't even have support anymore.

  • moomoo11 3 hours ago

    Just stop using Anthropic. Claude Code is crap because they keep putting in dumb limits for Opus.

  • oasisbob 3 hours ago

    > Like a lot of my peers I was using claude code CLI regularly and trying to understand how far I could go with it on my personal projects. Going wild, with ideas and approaches to code I can now try and validate at a very fast pace. Run it inside tmux and let it do the work while I went on to do something else

    This blog post could have been a tweet.

    I'm so so so tired of reading this style of writing.

    • LPisGood 3 hours ago

      What about the style are you bothered by? The content seems to be nothing new, so maybe that is the issue, but the style itself seems fine, no?

    • red_hare 3 hours ago

      Alas, the 2016 tweet is the 2026 blog post prompt.

  • rsync 3 hours ago

    You mean the throwaway pseudonym you signed up with was banned, right?

    right ?

  • red_hare 3 hours ago

    This feels... reasonable? You're in their shop (Opus 4.5) and they can kick you out without cause.

    But Claude Code (the app) will work with a self-hosted open source model and a compatible gateway. I'd just move to doing that.
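
    For example (a sketch, assuming an Anthropic-compatible gateway such as a LiteLLM proxy at localhost:4000 in front of your self-hosted model; Claude Code reads the ANTHROPIC_BASE_URL / ANTHROPIC_AUTH_TOKEN environment variables):

        import os
        import subprocess

        env = os.environ.copy()
        env["ANTHROPIC_BASE_URL"] = "http://localhost:4000"  # your gateway
        env["ANTHROPIC_AUTH_TOKEN"] = "local-key"            # whatever the gateway expects

        # Launch Claude Code against the gateway instead of Anthropic's API.
        subprocess.run(["claude"], env=env)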

    • mrweasel 3 hours ago

      Sure, but it also guarantees that people will think twice about buying their service. Support should have reached out and informed them about whatever they did wrong, but I can't say I'm surprised that an AI company wouldn't have any real support.

      I'd agree with you that if you rely on an LLM to do your work, you'd better be running that thing yourself.

    • viccis 3 hours ago

      Not sure what your point is. They have the right to kick OP out. OP has the right to post about it. We have a right to make decisions on what service to use based on posts like these.

      Pointing out whether someone can do something is the lowest form of discourse, as it's usually just tautological. "The shop owner decides who can be in the shop because they own it."