You break highlighting and copy-and-paste. If I want to share or comment on a piece of your website... I can't. I guess this can be a "feature" in some rare cases, but a major usability pain otherwise.
I'm not a fan of all the documentation and marketing content for this project evidently being AI-generated because I don't know which parts of it are the things you believe and designed for, and which are just LLM verbal diarrhea. For example, your GitHub threat model says this stops "AI training crawlers (GPTBot, ClaudeBot, CCBot, etc.)" - is this something you've actually confirmed, or just something that AI thinks is true? I don't know how their scrapers work; I'd assume they use headless browsers.
Copy-paste breaking is intentional for protected content but it's opt-in per component, not whole-site.
On the AI docs concern, fair point. To answer directly: I've confirmed the obfuscation defeats any scraper reading raw HTML via HTTP requests. Whether GPTBot or ClaudeBot use headless browsers internally, I honestly don't know. The README threat model lists headless browsers under "what it does NOT stop" for that reason.
Full user-agent string: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko); compatible; GPTBot/1.3;
Official OpenAI documentation: https://platform.openai.com/docs/gptbot
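For anyone who wants to act on that token server-side, here's a minimal sketch of user-agent filtering as Express middleware. The token list is illustrative, and a determined scraper just changes its UA, so treat this as a complement to obfuscation, not a replacement:

    import express from "express";

    const app = express();

    // Substring tokens for known AI training crawlers. Illustrative
    // list; check each vendor's docs for the current product token.
    const AI_CRAWLER_TOKENS = ["GPTBot", "ClaudeBot", "CCBot"];

    app.use((req, res, next) => {
      const ua = req.get("user-agent") ?? "";
      if (AI_CRAWLER_TOKENS.some((token) => ua.includes(token))) {
        res.status(403).send("Crawling not permitted");
        return;
      }
      next();
    });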
Yep, that works. The data-o attributes are readable in the DOM so you can reverse it with custom code. That's in the threat model. The goal is raising the cost from "curl + cheerio" to "write a custom decoder per site." Most scrapers move on to easier targets.
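For the curious, the decoder really is roughly this much work, assuming data-o simply stores each fragment's original index (I'm inferring the attribute semantics from this thread, so treat it as a sketch):

    // Reverse CSS-order obfuscation by sorting fragments on their
    // data-o index. Assumes markup like:
    //   <span data-o="2">c</span><span data-o="0">a</span>...
    function decodeObfuscated(container: Element): string {
      return Array.from(container.querySelectorAll<HTMLElement>("[data-o]"))
        .sort((a, b) => Number(a.dataset.o) - Number(b.dataset.o))
        .map((el) => el.textContent ?? "")
        .join("");
    }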
Reminds me of when AOL broke all the script kiddy tools in 1996 by adding an extra space to the title of the window. I didn't have AOL, but my friend made one of those tools, and I helped him figure it out.
All I want is an API for my AI; you can ask me for my public key if you want my human identity verified. The collateral damage of this bot hunting is the emergence of personal AIs. Do we really want that? It feels regressive. (I see the hypocrisy here: we are fighting the scrapers that feed the LLMs that run our personal agents.)
You are not wrong. But the use case I keep seeing is companies with proprietary content they spent real money creating, who don't want it showing up in someone else's training data for free. It's less about bot hunting and more about content owners having a choice.
Nice. I have been working on something that utilizes obfuscation, honeypots, etc., and I have come to a few realizations:
- Today you don't have to be a dedicated, motivated reverse engineer; you just need Sonnet 4.6 and can let it do the work.
- You need to throw constant, new gotchas at LLMs to keep them on their toes while they try to reverse engineer your website.
The bar for reverse engineering dropped to "paste the HTML into Claude and ask it to decode." That's partly why the v2 roadmap moves toward techniques where the readable text never exists in the DOM at all. Static obfuscation patterns need to keep evolving or they become a one-prompt solve.
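For concreteness, one way to keep readable text out of the DOM entirely is canvas rendering. I'm not claiming that's what v2 will ship, but the shape of the idea is roughly:

    // Render text into a canvas so no text node ever exists in the
    // DOM for a scraper to read. Sizes and font are placeholders.
    function renderToCanvas(text: string, target: HTMLElement): void {
      const canvas = document.createElement("canvas");
      canvas.width = 600;
      canvas.height = 40;
      const ctx = canvas.getContext("2d");
      if (!ctx) return;
      ctx.font = "16px sans-serif";
      ctx.fillText(text, 8, 24);
      target.appendChild(canvas);
    }

The obvious cost: without an accessible fallback layer, canvas text is invisible to screen readers too, which is exactly the tradeoff discussed elsewhere in this thread.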
The irony of building an anti-AI project but writing your marketing and HN post with AI.
This is an interesting idea... it'd be a fun side project to implement enough of a CSS engine to undo this
You are more than welcome to do so. Please keep in mind the realistic goal is raising the cost of scraping. Most bots use simple HTTP requests, and we make that useless.
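If anyone does take a run at it, there's a shortcut that skips implementing a CSS engine: let a real browser do the layout, then sort fragments by rendered position. A sketch with Playwright (the [data-o] selector is an assumption about the markup):

    import { chromium } from "playwright";

    // Recover visual reading order without reimplementing CSS: a real
    // engine lays the page out, then we sort fragments by position.
    async function scrapeVisualOrder(url: string): Promise<string> {
      const browser = await chromium.launch();
      const page = await browser.newPage();
      await page.goto(url);
      const text = await page.evaluate(() =>
        Array.from(document.querySelectorAll<HTMLElement>("[data-o]"))
          .map((el) => ({ el, rect: el.getBoundingClientRect() }))
          .sort((a, b) => a.rect.top - b.rect.top || a.rect.left - b.rect.left)
          .map((x) => x.el.textContent ?? "")
          .join("")
      );
      await browser.close();
      return text;
    }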
I, too, hate people that:
* Copy text
* use a screen reader for accessibility purposes (not just on the web, but on mobile too; your 'light' obfuscation is entirely broken with TalkBack on Android: individual words/characters are read, and the text is not a single block)
* use an RSS feed
* use reader mode in their browser
If you don't want your stuff to be read, and that includes bots, don't put it online.
> Built this because I got tired of AI crawlers reading my HTML in plain text while robots.txt did nothing.
You could have spent that time working on your project, instead of actively making the web worse than it already is.
The TalkBack issue is useful feedback, thank you. I tested with NVDA and VoiceOver but not TalkBack on Android. If light mode is reading individual words instead of a continuous block, that's a real bug I want to fix.
On the broader point, I hear you, but I think there's a middle ground. Not all content is public knowledge. Some of it is premium, proprietary, or behind a paywall. The people publishing it should get to decide whether it becomes free training data.
> Your content, obscured.
Is that supposed to be a good thing?
For content you want public, no.
Couldn't read the hero text on my phone: it's white text, and the shader background is also mostly white.
Thanks, what phone/browser? I'll fix that.
Another thing you can do is to install a font with jumbled characters: "a" looks like "x", "b" looks like "n", and so on. Then instead of writing "abc" you write "jmw" and it looks like "abc" on the screen. This has been used as a form of DRM for eBooks.
It breaks copy/paste and screen readers, but so does your idea.
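For concreteness, the encoding side is just a substitution table; the "jmw" example above would look something like this (the table is illustrative, and the matching glyphs live in the custom font):

    // To display "abc" on screen, serve "jmw": the custom font draws
    // the glyph for "a" at codepoint "j", and so on. The table here
    // is illustrative; a real deployment would randomize it per site.
    const SERVE_AS: Record<string, string> = { a: "j", b: "m", c: "w" };

    function encodeForJumbledFont(plain: string): string {
      return [...plain].map((ch) => SERVE_AS[ch] ?? ch).join("");
    }

    console.log(encodeForJumbledFont("abc")); // -> "jmw", renders as "abc"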
Font remapping is actually on the v2 roadmap. The reason v1 uses CSS ordering instead is it preserves screen reader access. Tradeoff is it's reversible (as another commenter just showed). Font remapping is stronger but breaks assistive tech. Solving both is the hard problem.
I hate everything about this, please use your time on this planet to make life better for people instead of worse.
It is better for a million AI crawlers to get through than for even one search index crawler, that might expose the knowledge on your site to someone who needs it, to be denied.
For public knowledge sites this would be the wrong tool entirely. The use case is more like paywalled articles, proprietary product data, or premium content that companies paid to create and don't want scraped into a competitor's training set. obscrd is opt-in per component, not a whole-site lockdown.
This is also what Facebook does.
Same result: screen readers and assistive software are rendered useless. Basically it's a sign of "I hate disabled people, and AI too".
Fair concern. obscrd actually preserves screen reader access. CSS flexbox order is a visual reordering property, so assistive tech follows the visual order and reads the text correctly. Contact components use sr-only spans with clean text and aria-hidden on the obfuscated layer. We target WCAG 2.2 AA compliance.
Happy to have a11y experts poke at it and point out gaps.
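For anyone checking, the dual-layer structure described above would look roughly like this; the class name and exact markup are sketched from the description, not copied from the repo:

    // Two layers: a visually hidden span carrying clean text for
    // assistive tech, and the obfuscated fragments hidden from the
    // accessibility tree via aria-hidden.
    function protectedContact(clean: string, obfuscatedHtml: string): string {
      return `<span class="sr-only">${clean}</span>` +
             `<span aria-hidden="true">${obfuscatedHtml}</span>`;
    }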
Accessibility APIs have long been the royal road to automation. If scrapers were well-written they'd be using this already, but of course if scrapers were well-written they would scrape your site and you'd never notice.
I'm surprised that you don't appear to be using it on obscrd.dev lol
Well, the information there isn't meant to be hidden, quite the opposite, haha. There is a Demo page.
The breadcrumb approach right now is simple invisible markers, not paraphrase-resistant watermarking. You're right that semantic watermarking that survives LLM rephrasing is the harder and more interesting problem. It's on the radar but not in scope for v1.
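If "simple invisible markers" means what it usually does, the mechanism is something like zero-width characters spliced into the text (this is my guess at the technique, not the project's actual code):

    // Embed a bit pattern as zero-width characters: scraped copies
    // carry the marker, human readers never see it. Trivially
    // stripped once the trick is known, hence not paraphrase-resistant
    // watermarking.
    const ZW_ZERO = "\u200B"; // zero-width space      -> bit 0
    const ZW_ONE  = "\u200C"; // zero-width non-joiner -> bit 1

    function embedMarker(text: string, bits: string): string {
      const marker = [...bits]
        .map((b) => (b === "1" ? ZW_ONE : ZW_ZERO))
        .join("");
      const mid = Math.floor(text.length / 2);
      return text.slice(0, mid) + marker + text.slice(mid);
    }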