I like the term “organic literature.” A significant number of readers have no interest whatsoever in generated prose, so there is definitely a viable market in human provenance.
An independent certification body is quite an old-world solution for a problem like this, but I’m not sure this is something that can be done mathematically. A web of trust may be all we have.
Unfortunately, like most other kinds of commercial art, the mere presence of generated literature waters down the market enough to make actual literature essentially a leisure activity. Sure, there were always crap, derivative filler books; it’s just that the ratio will now be 1000x worse, and the better books just won’t justify the funding for the intensive work and novel research that they used to, so even the good ones will probably be worse. Yet another example of the efficiency-obsessed “more, cheaper” over “fewer, better” mentality making our world worse.
This idealistic objective is highly commendable, but the fight could be futile, as you would need AI to do the work of detection. Then there will be another movement to do "organic detection" of "organic content". And the story goes on.
Think of interview candidates rejected by AI and employees fired by AI, or that case where a snack pack was identified by AI as a weapon in a student's pocket. This will lead to "organic decision making".
Nothing futile about defense of humanity! Art forgers and technofrauds will never be true participants in culture.
> you would need AI to do the work of detection
Why?
This is inverted. AI books should come with warning labels similar to those found in cigarettes.
> AI books should come with warning labels
I disagree. AI use is diffuse. An author is specific. Having people label their work as AI free is accountable in a way trying to require AI-generated work be labeled is not.
> similar to those found in cigarettes
Hyperbole undermines your argument. We have decades of rigorous and international evidence for the harms from cigarettes. We don’t for AI.
Saying "I think X should have a warning, like cigarettes do" is not making the claim "X is harmful in a way that is comparable to cigarettes." The similarity is that cigarettes have warning labels, not that AI is harmful on the order of cigarettes.
We need this for technical books. I was a chapter into something the other day before deciding I’d been hoodwinked into reading someone’s ChatGPT output.
I've noticed entire publishers on Amazon which are just fly-by-night AI slop, probably printed on-demand too.
For example, I stumbled on https://www.amazon.com/dp/B0DT4TKY58 and had never heard of the author. Their page (https://www.amazon.com/stores/author/B004LUETE8) suggested they were incredibly prolific in a huge number of areas which already felt off. No information about "Robert Johnson" was available either. The publisher, HiTeX Press (https://www.amazon.com/s?k=HiTeX+Press) has a few other authors with similarly generic names and no information available about them, each the author of numerous books spanning a huge array of topics.
It feels even more bewildering and disheartening to see AI slop come into the physical world like this.
I always wondered if there was some way to make a "proof" that some piece of work was human created.
A recording of the entire process of its creation is one possible answer (though how would deepfakes be countered?)
But maybe there is some cryptographic solution involving one-way, provable timestamps...
Does anyone know of anyone working on such a thing?
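The one-way timestamp idea above can be sketched as a hash chain over draft snapshots. This is only a toy under strong assumptions: each link commits to the previous one, so the draft history cannot be rewritten after the fact, but it proves nothing about *who* (or what) wrote the text, and the timestamps are only trustworthy if each hash is also anchored externally (e.g. via an RFC 3161 timestamping authority, or by publishing it somewhere immutable). The function names here are illustrative, not any real service's API.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first link

def chain_entry(prev_hash: str, draft_text: str, timestamp: float) -> dict:
    """Create one link in a hash chain committing to a draft snapshot.

    Each entry commits to the previous entry's hash, so the chain can only
    be extended forward: inserting or altering an earlier draft later would
    invalidate every subsequent link.
    """
    content_hash = hashlib.sha256(draft_text.encode()).hexdigest()
    payload = json.dumps(
        {"prev": prev_hash, "content": content_hash, "ts": timestamp},
        sort_keys=True,
    )
    return {
        "prev": prev_hash,
        "content": content_hash,
        "ts": timestamp,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    }

def verify_chain(entries: list[dict]) -> bool:
    """Check that every link commits to its predecessor and its own fields."""
    prev = GENESIS
    for e in entries:
        if e["prev"] != prev:
            return False
        payload = json.dumps(
            {"prev": e["prev"], "content": e["content"], "ts": e["ts"]},
            sort_keys=True,
        )
        if hashlib.sha256(payload.encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

The chain shows that a sequence of drafts existed in a given order, which is the kind of process evidence later comments discuss; it cannot distinguish a human typing from an AI being pasted in between snapshots.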
> some piece of work was human created
Are thoughts and ideas creations? Or do you just mean the literal typewriting?
How do you prove an idea is original and you have been in a vacuum not influenced _by anything at all_?
If anything The Hunger Games is the perfect example that you can get away with anything you want, and that was almost 20 years ago.
Everything is a remix https://www.youtube.com/watch?v=nJPERZDfyWc or if you hate your life https://tvtropes.org/
This depends on the subject of the book, but there are enough books written pre-1970 (or some other year one is comfortable with, before the era of “book spinners”, AI etc) to last multiple lifetimes. I used to spend hours and hours in bookstores, but so many books these days (AI or otherwise) don’t seem that interesting. Many, many books could just be 3 page articles, but stretched to 150 page books.
So yeah, simply filtering by year published could be a start
Buying a book scanner and frequenting used book stores seems like a pastime to start that'll pay off in the long term.
They keep trying this with digital cameras signing the data and it's always a complete failure.
It's a social problem at heart and piling on yet more technology won't fix it.
Right, the same assholes gaming the system with slop would just game whatever system you tried to put around them. It's not like you can stand over someone the whole time they work to ensure it's real.
An authoring device akin to this, perhaps? https://roc.camera/
> wondered if there was some way to make a "proof" that some piece of work was human created
Self certification backed by a war chest to sue those who lie.
No need to invent more tech to mitigate techslop.
People will know by reputation alone, which cannot be fabricated.
Maybe they could prove it using blockchain!!!
This has the same problems any DRM has. People who want to bypass the process will find a way, but legitimate people get caught up in weird messes.
I'm so happy I'm not doing any school/academic work anymore, because AI writing detection tools (I learned English through reading technical docs; of course my writing style is a bit clinical) and checking the edit history in a Google Docs document would've both fucked me over.
Even if every single page was hand written on camera, that could not prove that no AI was used.
Did the author come up with the main ideas, character arcs or plot devices himself? Did he ever seek assistance from AI to come up with new plot points, rewrite paragraphs, create dialog?
The only thing which really matters is trust.
By this point we can also discuss what is truly original, and whether all creative work is just "stealing" ideas that other people "created" before.
(I don't have an answer, just wondering.)
I think for me I’m just going to accept that I won’t be reading any modern fiction, likely ever. It isn’t like there isn’t more than I could read in multiple lifetimes already out there that is pre, say, 2010. But the other side is that fiction has never been worse, because the commercial impetus to become a published fiction writer has never been lower (literally since before the 1600s, given functional literacy levels and the amount of fiction reading the average person does). The Steinbecks of the world aren’t writing novels in 2025.
I wonder how this works since authors are more and more likely to use AI to spell check, fix wording, find alternate words, and all manner of other things. It might be useful to understand the “rules” for what “human” means.
I too am open for business. For a modest fee I will arrange to meet a book publisher in NYC for a firm handshake to cement a declaration from them that they are publishing books not made with AI. I will then send a formal email saying they may publish a little gold star on their book, and my preeminence as a member of the literary elite should carry it through. I'm doing this for the people because I _care_.
Just another rent-seeker. I mostly choose books based on word of mouth recommendations or liking other things by the same author. This is very resistant to slop from AI and to the large amounts of rubbish that has always been published.
An organization with zero technical capability charging publishers recurring fees to certify something they can't actually verify?
So this is the thing that Zitron and Doctorow are always talking about? Naked grifting in the AI industry?
> can't actually verify
Why? Can't it be done the same way it's done with copyrighted material: by checking the author's process?
(Because at least in EU law, you are permitted to write basically the same thing if both authors reached it organically, i.e. you have a trail of drafts and other writing-process documents, as long as you can prove you came upon it without influence from the other author.)
Proving that you did it without AI could be similar. For example, just videotape the whole writing process.
Now, whether anyone cares about such proofs is another topic.
I sneak out to the toilet and ask chatgpt what should happen in the next chapter. Or do you stick a camera there too?
> Proving that you did it without AI could be similar. For example, just videotape the whole writing process.
Which proves very little. It is also something authors would absolutely loathe doing.
No technical ability required to verify humans as humans. You just have to close your laptop and meet at a coffee shop. Surprisingly many deals are done this way, because humans like other humans.
This is a big problem, though I would be slow to trust anyone purporting to address this problem. (Though, to their credit, this Books by People team is more credible than the bog-standard pair of 20yo Bay Area techbro grifters I expected.)
Reportedly, Kindle has already been flooded with "AI" generated books. And I've heard complaints from authors, of AI superficial rewritings of their own books being published by scammers. (So, not only "AI, write a YA novel, to the market, about a coming of age vampire young woman small town friends-to-lovers romance", but "AI, write a new novel in the style of Jane Smith, basically laundering previous things she's written" and "AI, copy the top-ranked fiction books in each category on Amazon, and substitute names of things, and how things are worded.")
For now, Kindle is already requiring publishers/authors to certify which aspects of the books AI tools were used for (e.g., text, illustrations, covers), something about how the tools were used (e.g., outright generation, assistive with heavy human work, etc.), and which tools were used. So that self-reporting is already being done somewhere, just not exposed to buyers yet.
That won't stop the dishonest, but at least it will help keep the honest writers honest. For example, if you, an honest writer, consider for a moment using generative AI to first-draft a scene, an awareness that you're required to disclose that generative AI use will give you pause, and maybe you decide that's not a direction you want to go with your work, nor how you want to be known.
Incidentally, I've noticed a lot of angry anti-generative-AI sentiment among creatives like writers and artists. Much more than among us techbros. Maybe the difference is that techbros are generally positioning ourselves to profit from AI, from copyright violations, selling AI products to others, and investment scams.
What is the point of this? Any publishing house can just "self-certify" that no AI was used. Why would it be necessary to have an outside organization, which cannot validate AI use anyway and just has to rely on the publisher's word?
Writing a book is, in most cases, something that happens between the author and their writing medium. How could any publisher verify anything about AI use, except in the most obvious cases?
The one thing which matters here is honesty and trust and I do not see how an outside organization could help in creating that honesty and maintaining that trust.
I don't care if AI wrote the book, if the book is good. The problem is that AI writes badly and pointlessly. It's not even a good editor, it 1) has no idea what you are talking about, and 2) if it catches some theme, it thinks the best thing to do is to repeat it over and over again and make it very, very clear. The reason you want to avoid LLM books is the same reason why you should avoid Gladwell books.
If a person who I know has taste signs off on a 100% AI book, I'll happily give it a spin. That person, to me, becomes the author as soon as they say that it's work that they would put their name on. The book has become an upside-down urinal. I'm not sure AI books are any different than cut-ups, other than somebody signed a cut-up. I've really enjoyed some cut-ups and stupid experiments, and really projected a lot onto them.
My experience in running things I've written through GPT-5 is that my angry reaction to its rave reviews, or its clumsy attempts to expand or rewrite, are stimulating in and of themselves. They often convince me to rewrite in order to throw the LLM even farther off the trail.
Maybe a lot of modern writers are looking for a certification because a lot of what they turn out is indistinguishable cliché, drawn from their experiences watching television in middle-class suburbs and reading the work of newspaper movie critics.
Lastly, everything about this site looks like it was created by AI.
> I don't care if AI wrote the book, if the book is good.
Not so sure. Books are not all just entertainment; they also develop one's outlook on life, relationships, morality, etc. I mean, of course books can also be written by "bad" people to propagate their view of things, but at least you're still peeking into the views and distilled experience of a fellow human who lived a personal life.
Who knows what values a book implicitly espouses that has no author and was optimized for being liked by readers. Do that on a large enough scale and it's really hard to tell what kind of effect it has.
> Who knows what values a book implicitly espouses that has no author and was optimized for being liked by readers.
There is some of this even without AI. Plenty of modern pulpy thriller and romance books for example are highly market-optimised by now.
There are thousands of data points out there for what works and doesn't and it would be a very principled author who ignores all the evidence of what demonstrably sells in favour of pure self-expression.
Then again, AI makes it possible to turbocharge the analysis and pluck out the variables that statistically trigger higher sales. I'd be surprised if someone isn't right now explicitly training a Content-o-matic model on the text of books along with detailed sales data and reviews. Perhaps a large pro-AI company with access to all the e-book versions, 20 years of detailed sales data, and all the telemetry such as highlighted passages and page turns on their reader devices? Even if you didn't or couldn't use it to literally write the whole thing, you could have it optimise the output against expected sales.
Ugh, capitalism monetizes yet another problem it created.
Why do you say capitalism created AI? China is not a purely capitalist society and they have AI too… I don’t see anything specific about capitalism that brings about AI. In fact, much of the advances in it came about through academia, which is more of a socialist structure than capitalist.
State capitalism is still capitalism.
You’re really arguing a non-capitalist society would be incapable of developing LLMs?
No. You are making a big leap there. I just object to calling "communism with Chinese characteristics" anything but state capitalism.
Capitalism does, however, incentivize unhealthy and self-destructive exploitation, including of generative AI.
I’m curious why you see this as a problem created by capitalism, rather than another cause?
Would you have another cause you could posit? Surely the reason the marketplace is being flooded with AI generated books is because it's profitable.
Nothing besides greed could explain the extent to which people handwave and ignore the many obvious harms and failures and risks.
> Nothing besides greed
You’re really arguing greed (EDIT: and bad risk evaluation) didn’t exist before capitalism?
Is my cat capitalist?
Affirming the consequent.
Fair enough.