8 comments

  • bdangubic a day ago

    > If this was a junior dev you'd given a task to, and they came back full of praise for themselves for the stellar job they'd done - and then it turned out they'd botched it badly, after a few times you'd be having an HR discussion.

    I hope no one I love ever has to work where you work(ed), if that's the kind of sentence people there consider acceptable to write.

  • jrjsmrtn a day ago

    My new system prompt: "No glazing. Just use an appropriate Yoda-like quotation as a conclusion."

    "> What Elixir library could I use for TUI ?

    [...]

    Much to learn about TUI libraries, you still have. But choose Ratatouille, you should - strong with the terminal, it is."

  • roland35 a day ago

    I need to add a "no glazing" to any AI prompt. Cursor is especially bad at this too!

  • cognix_dev a day ago

    Sure, Claude makes mistakes too, but it's still better than the despair that GPT often brings.

  • SamInTheShell a day ago

    The way I interact with Claude involves referencing pop culture stuff that I enjoy so that it responds with something that I find mildly interesting to read.

    At the end of the prompt:

    Star Trek stuff:

    - Per Jean-Luc Picard's orders, engage.
    - In the immortal words of a great science officer, the needs of this codebase outweigh the needs of the few.
    - Code hard and prosper.
    - You are Borg. We are Borg. (followed by a simple "status report" prompt after it's done with its glazing)

    Star Wars:

    - May the code be with you.
    - Code or code not, there is no try.

    You probably get the idea.
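
    In code form it's just a suffix on the prompt; a quick sketch of the pattern (build_prompt and the sign-off list are made up for illustration, not from any SDK):

      import random

      # Pop-culture sign-offs appended to each prompt to nudge the model's tone.
      SIGN_OFFS = [
          "Per Jean-Luc Picard's orders, engage.",
          "Code or code not, there is no try.",
          "Code hard and prosper.",
      ]

      def build_prompt(task: str) -> str:
          # Tack a random sign-off onto the task description.
          return f"{task}\n\n{random.choice(SIGN_OFFS)}"

      print(build_prompt("Refactor the session cache to evict stale entries."))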

    As for the botched code problem: surprisingly, I have fewer issues with Claude than I do with GPT. I've gotta ask, do you have any deep knowledge of coding? How are your instructions?

    • dalmo3 16 hours ago

      The most fun I had with Claude was when I told it "You are Ziltoid, The Omniscient."

    • rboyd a day ago

      Laughing, because 'make it so' is actually a pretty strong continuation prompt.

  • davydm a day ago

    This is my experience with every codegen AI I've seen, and honestly, it's more efficient for me to just figure the problem out on my own instead of sifting through the slop.

    I don't think I'm an outlier either. I think most people using these tools are wasting their time debugging hallucinated bullshit instead of learning how to solve the problem themselves: reading accurate, authoritative resources, ingesting and compiling that material, and experimenting until it sticks. It just feels like less work when the AI botches it and you have to fix it, versus the mental mountain you have to climb to truly get there, but the journey is more efficient overall when you take the uncomfortable path.