Are they going to try to make a "we're just a platform, don't shoot the messenger" Section 230 argument (not sure what the equivalent in Canada is) for the AI overviews they generate? Seems like a bridge too far. Really hopeful the courts will side with Ashley MacIsaac here and set some sane precedent.
There isn't one.
"AI can make mistakes, so double-check responses."
This is especially troubling from a sociological perspective, as it points to how AIs turn malice into false history.
Ashley MacIsaac made waves in the nineties for being openly gay, and he paid his dues for years. I vividly recall being around a barroom table in the late nineties, listening to this specific slander. We knew it was slander though, because there was no evidence. We had no machine yet to confabulate it.
This is what we anglos do to our men who prefer men. We did it with Wilde, and with Turing, and we did it with MacIsaac, and we are doing it even harder in 2026 than in 1996, because what we called freedom is now called "woke", and what was called dictatorship is now called "freedom".
And you're next, dear reader.
> Google should not have lesser liability because the defamatory statements were published by software that Google created and controls.
Therein lies the rub. Google does not control what its parrot spouts. No-one does.
If Anthropic can implement a regular expression to monitor for user frustration, Google has certainly got the chops to build some sort of heuristic that checks for strongly negative statements.
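To be clear about how low the bar is here: a crude version of such a heuristic fits in a few lines. This is purely illustrative; neither company's actual implementation is public, and the keyword list below is invented for the sketch.

```python
import re

# Invented keyword list; a real system would be far more sophisticated,
# but even this catches the obvious "strongly negative claim about a person" cases.
NEGATIVE_PATTERN = re.compile(
    r"\b(fraud|criminal|abuser?|assault(?:ed)?|convicted|scandal)\b",
    re.IGNORECASE,
)

def flags_strongly_negative(text: str) -> bool:
    """Return True if the text contains a strongly negative keyword."""
    return bool(NEGATIVE_PATTERN.search(text))

print(flags_strongly_negative("He was convicted of fraud."))   # True
print(flags_strongly_negative("He is a celebrated fiddler."))  # False
```

The point isn't that a keyword regex solves defamation detection; it's that even a trivial filter could route negative claims about named people to extra verification before they appear in an AI overview.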
That’s one perspective.
It’s wrong.
But it’s definitely a perspective.
Parents have to pay penalties when their underage children burn down a building.
Companies that get treated with the rights of people should also have the responsibilities of people. Google designed, built, hosted, and prominently promoted their LLM. Logically, it follows that they should be legally and financially responsible for any harms their LLM causes.
Sure they should have the responsibility. Even more so given they don't have control.
ah well, no worries then