https://www.msn.com/en-us/money/other/google-can-train-searc...
https://archive.is/1l8SS
And, because things are moving so fast, agentic frameworks now crawl in real time while helping the user. It's not just about training models, which is what everyone gets stuck talking about. I think the agentic-framework crawls will probably get much worse.
> Google Can Train Search AI with Web Content Even with Opt-Out
Opt-out, for Google, Facebook, and Microsoft, is effectively opt-in.
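For context, the opt-out the article is referring to is expressed through robots.txt. A minimal sketch of what a site blocking AI-training crawlers looks like (Google-Extended and GPTBot are real crawler tokens; the rest is standard robots.txt syntax):

```
# Block Google's AI-training crawler.
# Note: this does NOT remove the site from Search indexing.
User-agent: Google-Extended
Disallow: /

# Block OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Allow all other crawlers
User-agent: *
Allow: /
```

The article's point is that Google-Extended does not cover generative features inside Search itself, so content can still be used there even with this opt-out in place, which is what the comments above mean by opt-out being effectively opt-in.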
I wonder if society (and by extension, our laws) will ever again make a meaningful effort to penalize liars, manipulators, and thieves. I worry the answer is no.
Assholes will rationalize any way they can, and a lot of the population is "set up" to hear those excuses and take them at face value. So a small percentage of assholes will have excuses so good that nobody holds them accountable.
Funny how calling out well-dressed manipulation bothers some people more than the manipulation itself. Almost like some folks need the illusion to stay intact.
You hit the nail on the head with your last sentence. It is a psychological defense mechanism.
People don't want to be associated with fraud, and they will do whatever mental tricks it takes to explain things away, even while knowing the illusion is there.
Yes, that's an important thing to worry about. I'm just not sure that "learning from a website's content how to create other intellectual works without explicit permission from the owner to do so" counts as lying, manipulating, or stealing.
Please don't straw-man. The first two paragraphs of the article explain what is happening. There is explicit refusal.
Disagreeing with me doesn't mean my criticism is attacking a strawman. That's not what the term means. The websites are, in fact, permitting you to view them, while insisting you not learn anything from the content.
That's not fundamentally different from employers "explicitly refusing" to let you take what you learned on the job to your next one. Sure, they certainly want that, but the law doesn't recognize it as a valid constraint (except for e.g. trade secrets and proprietary knowledge).
My argument was that explicitly agreeing not to collect someone's data for AI training, then collecting that data for AI training, is lying. You argued that collecting data without explicit agreement is, actually, not lying. Arguing against an easy claim no one made is the definition of a straw-man response.
Look, just have courtesy for others and don't argue in bad faith, the snark included. This community came up with the HN guidelines, let's try to follow them more. That's all I wanted to say. All the best.
Why do they need to lie, then? There must be some benefit from it, and then the theft itself does matter. It's the lie.
I kind of agree, but I do think it’s different. It’s more akin to pirating and then selling without attribution or royalties.