This is just dynamic pricing with extra steps. Airlines have done this for decades but at least they're transparent about it. The difference here is that readers don't know the person next to them is paying a different price for the same article. Once you start using behavioral data to set prices, the incentive flips from "make content worth paying for" to "figure out who's desperate enough to pay more." Not a great look for a newspaper that positions itself as a public service.
Do these models try to factor in the target’s knowledge of what things cost, or even their awareness of dynamic pricing and discounting practices? That doesn’t necessarily correlate inversely with wealth.
To use an extreme example, you’d have wanted your model to offer Warren Buffett the base price, or even a deal.
Well, this should be banned. Or at the very least WaPo should be required to disclose it whenever you subscribe.
Another situation where bad actors benefit. From the article:
> What really interests Cian, who has published research[1] exploring how audiences tend to have less trust in media outlets that are transparent about their AI use, is the fact that the Post disclosed its use of algorithmic pricing at all. “If you ask people [whether they] want transparency on what’s behind your pricing strategy, people say ‘yes,'” he says. “But what we found in my research is a paradox, in the sense that people think that they want to know, but once they know, the reaction is worse than not knowing.”
> [1] https://ideas.darden.virginia.edu/AI-disclosure-dilemma
> “This price was set by an algorithm using your personal data.”
How's that "I have nothing to hide" working out?
Light on details. It could be as simple as a user who reads a couple of articles a month getting a lower rate than someone who reads daily.
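In which case the "algorithm" might amount to little more than a usage threshold. A purely hypothetical sketch (the tier cutoff, discount, and base rate are all invented for illustration; the article gives no specifics):

```python
def monthly_rate(articles_read_per_month: int, base_rate: float = 12.00) -> float:
    """Hypothetical usage-tiered pricing: discount light readers to
    convert them, charge habitual readers the full base rate."""
    if articles_read_per_month <= 2:  # casual reader: offer a hook price
        return round(base_rate * 0.5, 2)
    return base_rate  # daily reader: no discount needed
```

If that's all it is, "AI pricing" is marketing for a two-row lookup table, which may be why the disclosure landed so badly.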