3 comments

  • A_D_E_P_T 2 days ago

    Certainly not.

    Truth is, there are no longer any dead giveaways, let alone ones that let you catch an AI red-handed.

    ChatGPT had, and still has, its quirks: "delve," "underscoring __" (and variants thereof, like "highlighting __"), "it's not just __, it's __," em-dashes, and various characteristic structural and word choices. (It could hardly ever resist ending its responses with a summary paragraph.)

    But some of these have been patched out. (I don't think I've seen "delve" in more than a year!) And, especially in GPT-5, the others have become less common and less obvious than they used to be.

    Besides, DeepSeek and Kimi K2 write in a completely different and more natural style. Gemini 2.5 is also a very natural writer, with a generic style that has fewer identifying characteristics.

    So it has become very difficult to identify AI writing with any certainty... (a toy marker check along these lines is sketched below).
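
    A minimal sketch of that kind of marker counting, using nothing beyond the Python standard library; the phrase list and the example sentence are invented for illustration, not taken from any real detector:

        import re

        # Invented, illustrative list of the "tells" named above: "delve",
        # "underscoring ...", "it's not just X, it's Y", and em-dashes.
        MARKERS = [
            r"\bdelve\b",
            r"\bunderscor\w*\b",
            r"\bit'?s not just\b[^.]*,\s*it'?s\b",
            r"—",  # em-dash character
        ]

        def marker_hits(text: str) -> int:
            """Count how many of the stock AI 'tells' appear in a piece of text."""
            lowered = text.lower()
            return sum(1 for pattern in MARKERS if re.search(pattern, lowered))

        # A human writer can trip every one of these, which is the point:
        # the count is a weak hint, never proof of machine authorship.
        print(marker_hits("Let's delve into this — it's not just style, it's substance."))  # prints 3

    The weakness is baked into the design: any fixed marker list is also a list of things humans write, which is exactly the objection raised in the next comment.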

  • codingdave 2 days ago

    No. LLMs were trained on human communication. LLMs may overdo some things, but none of those things are unique to LLMs. Every single thing you point at to ID an LLM is also done by humans.
