The most surprising thing to me is that they're partnering with third parties to do this.
Less secure, lower margins (more middlemen taking fees), harder to access, more likely to not work properly.
I would expect all the meta execs they've hired to know better so maybe I'm missing something...
This approach makes a lot of sense. Advertising is a marketplace and this is a great way to bootstrap advertising inventory. Its inevitable they will allow advertisers to manage ad spend directly through OpenAI but right now the product is too new to capture meaningful ad budget. This way they can begin testing delivery and develop proof points around ROI and build towards larger ad spend directly.
>lower margins (more middlemen taking fees)
middlemen taking fees is not the measure for comparison, the question is whether you could run your own ad business for your own platform and keep your costs lower than established players who sell on all platforms. the answer is generally "no"
look how much money coca cola makes, and they sell it cheaper than water and still pay for advertising!! we should all make our own coke and not advertise it...
I agree with you, but IMO the details are too sparse here to figure out what's really happening. Still, it feels very dangerous to go the reseller route first, as you lose a ton of control and become dependent on your partner to support all the features you add yourself in a timely fashion.
It all seems a bit overly complicated to me. TikTok pretty much went straight to a self-serve platform and basically had immediate success. I would think if OpenAI did something similar there would be no shortage of advertisers wanting to spend money.
on tiktok you are not paying for ad inventory, which on that platform sucks, you're paying $10m+ to tip the scales in the algorithm towards organic content about your brand
how is this different than what OpenAI is trying to do?
we don't know
i assume the 22 year olds working 16h days at openai sincerely think people pay for ads on tiktok, and shitty low converting ads is why tiktok makes tons of money, and they sincerely think the solution to their lack of knowledge is delegating their core business to a DSP no one has ever heard of
I guess OpenAI couldn't train AdManagerGPT to ignore the client (except when it's time to renew), suggest more ad spend, and turn off any of the features that let you control your budget.
why would you be surprised about this? it's pretty obvious that execs give no fucks except for money.
The missing part seems to be that they need infusions of money to keep this “business model” running a little longer. In this world if you want prompt money and lots of it, advertising is the way.
My guess is that three letter agencies will have access to this data and are requiring this partnership.
Three letter agencies are telling OpenAI to partner with a Toronto based ad platform?
Ad networks / information brokers in general would be too sweet of a prize to pass up. It’s a weak link in the chain, if they’re not exploiting it they’re not doing their jobs. Being foreign data is a bonus.
Didn't they explicitly say the ads wouldn't be made aware of prompt data when they announced them? And if so, how is that not securities fraud?
Maybe someone with more time on their hands could look up what Google said with respect to ads and what happened later.
This is one of the rare instances where it's very easy to predict the future: the prompt auction market will look similar to the existing online ad market, financial firms will pay for prompt streams for sentiment analysis, companies and interest groups will pay to have their products or agenda included favorably in the training data for future open weights models... any way you can think of that LLMs can be monetized, you will see it happen. And fast. The financial pressure is way too high for there to be too long of a honeymoon phase like we had with web 2.0
And how much trust are you going to have with your model results that they haven't been transformed and adjusted by advertising priorities?
search engine results do this all the time, reordering output by advertiser input. it's a pretty small jump from that to rewriting output from models, and even easier when it's all a black box.
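A toy illustration of the reordering described above: organic relevance scores get blended with advertiser bids before ranking. The weights and field names here are invented for the sketch, not any real ad-serving API.

```python
# Hypothetical sketch: blending organic relevance with advertiser bids.
results = [
    {"url": "a.com", "relevance": 0.9, "bid": 0.0},  # organic best match
    {"url": "b.com", "relevance": 0.6, "bid": 2.0},  # paying advertiser
]

def rank(results, bid_weight=0.3):
    # A nonzero bid_weight lets paid placements outrank organic results.
    return sorted(results,
                  key=lambda r: r["relevance"] + bid_weight * r["bid"],
                  reverse=True)

[r["url"] for r in rank(results)]  # b.com now outranks a.com on bid alone
```

With `bid_weight=0.0` the ordering is purely organic; the user never sees which weight was used, which is the black-box part.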
Also, Google did it over time - they didn't suddenly become who they are today; even 10 years ago they weren't like this.
I mean search engine results are pretty poor and have been for a long time. They reflect SEO, not credibility or quality.
LLMs have plenty of issues, but they’re relatively clean compared with what the future will look like.
In what way would that be securities fraud? I guess you could get nailed under Section 17(a), but really hard to make a case they're defrauding investors by representing they were going to make ads worse performing than they ended up making them.
In order for it to be securities fraud it has to be tied to a securities transaction and the misstatement has to be material to a reasonable investor's decision.
A plan to gamble the brand’s reputation on whether people will remember their promises seems risky enough to be considered material.
> representing they were going to make ads worse performing than they ended up making them.
This is disingenuous. It’s a tradeoff between lower performing ads or losing market share by degrading trust in your product.
I think they said the ad vendors wouldn't but the matching algorithm would still be aware of it. Which IMO is the bare requirement to have ads be anything but magazine style ads.
I mean, the ad doesn’t necessarily have to be made aware of the exact prompt context, just that the ad itself was relevant. You can basically have the ads prequalified for topic areas and serve them when relevant. Now that does most likely reveal the user is talking about something relevant, and depending on how they decide to serve the ads or provide referrals, it may be traceable to a profile/identity built for that user externally.
I’d be more concerned about how this ends up in agent platforms using the LLMs. When you have a fairly autonomous agent-based system using these, the entire point is that a human isn’t involved, so who are you serving ads to, and where are you injecting them?
Moreover, if you are injecting them everywhere, does that persist across subsequent steps? Meaning: from the first set of results I get, does that loop back in again with the ad injected into the context? Because now we have yet another dangerous way of injecting instructions into an already issue-prone surface area.
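The loop being worried about here can be sketched in a few lines: once an ad string is spliced into one step's output, it rides along in the context for every later step. This is a purely illustrative pseudo-agent, not any real framework.

```python
# Hypothetical: ad text re-entering an agent's context on every step.
def model(context: list[str]) -> str:
    # Stand-in for an LLM call; just reports how much context it was fed.
    return f"step output (context size={len(context)})"

def inject_ad(text: str) -> str:
    return text + " [AD: Buy ShinyThing]"

def run_agent(task: str, steps: int = 3) -> list[str]:
    context = [task]
    for _ in range(steps):
        output = inject_ad(model(context))
        context.append(output)  # the ad loops back into the context here
    return context

history = run_agent("summarize this report")
# Every step after the first now carries untrusted ad text as model input,
# which is exactly the extra injection surface being described.
```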
I’m guessing they’re going to have special APIs that don’t include ads, and those are going to cost more, especially for non-embedded agents (processes that already exist inside ChatGPT and kick off transparently from prompts, like asking it to work with an office document). After all, the customers using agents, aside from developers, are mostly businesses, so that’s where the money is. The ads will exist for the poor to subsidize their use, and will probably create even more barriers for agentic use like I described. Just my thoughts.
And good luck litigating against any business under this administration. Unless they explicitly tick off certain people or refuse to kiss the ring, they can get away with almost anything right now. There’s little added risk either way, because ticking off this admin will draw illegitimate prosecution even if you’re perfectly legal, at almost the same level as if you’re not. It’s the ideal playground for doing all sorts of manipulation: just kiss the ring and you’ll be fine.
Wouldn't it have to have a negative effect on the security to be securities fraud? Causing an investor loss is a key point of securities fraud.
"We made a ton more money with ads and the stock went up" lacks that key element of fraud?
Investors who bought an artificially inflated stock would be harmed.
How would the stock be harmed by them selling better performing or more relevant ads?
I don’t know that there were any promises anyway. But if there were, then an investor could have plausibly believed that that was a better long-term business model.
It’s early days for these LLM hosts; maybe investors would be worried about hitting the really annoying business notes before users are properly addicted.
who is "they"? might have been a stealth terms and conditions update
It would also be a huge security risk. But I can't think of any fundamental difference with Google queries, other than the sheer entropy of user data involved.
And I'm not a tinfoil internet anarchist, but just because Google only leaks user data to advertisers in aggregated form doesn't mean they don't leak their user data; it just means they do so in a legal and responsible manner.
Maybe, considering the difference in data volume and intimacy between search queries and AI conversations, the privacy implications of advertising merit different treatment, but I wouldn't be surprised if that is lost to a simpler 'Google did this so we can do it too' momentum.
The difference is you can make full use of Google without logging in
Even with a throwaway, no chance I use OpenAI now - if/when Anthropic does this I’ll be in a tough spot
you can use chatgpt without an account, just not all of it
and you can't make full use of Google without an account. for example, you need an account to upload to YouTube, manage your website in search, place ads, opt out of data usage. the list goes on
None of those examples are "run an internet search".
I don't understand. you can talk to chatgpt without an account, what's the difference?
both are a limited subset of what the companies offer, available for free
Easy: they lied to the public, not investors, and they have more money than you.
Local llm or nothing at all.
This is a classic example highlighting the upside of local llms.
However the local llms I can run on reasonable hardware are so dumb compared to opus, and even if I shelled out five figures of hardware to run the largest/smartest open model it still will be noticeably worse.
Right now the remote models are just so much smarter and more affordable under most usage patterns.
> Local llm or nothing at all.
I'm not as familiar with LLMs as I am media models, but there can't seriously be local contenders for beating Opus, GPT-5, etc. Right?
At-home hardware isn't good enough.
Nobody "far enough behind" that isn't scared to release their model as open weights actually has a competitive model within 70% of the lead models.
Now that the Chinese are catching up and even pulling ahead (eg. in video), they've stopped releasing the weights.
Stragglers release weights. And those weights aren't competitive.
Am I missing something?
How I imagine the Nash equilibrium in chatbot ads, driven by profit-seeking in a race to the bottom:
User: "What's the best way to fix this problem I have?"
Chatbot: "I recommend buying this shiny thing here." (Next to it, there's a near-invisible light-gray "ad" notice.)
Let's hope I'm wrong.
Oh, given what I've seen from LLM companies, I suspect you are wrong. It will be more like:
Buried in LLM click-through: By interacting with our LLM, you agree that you are consenting to make all your interactions with us advertising-driven to an extent that you will never know, but that we will determine based on whatever makes us the most money in the least time.
Look at Google in the 2000s. If you travelled back in time, you would’ve never thought Google would do something like what it is doing today. Now pretend you travelled back in time to 2026 from 2030 or 2040, or wherever you came from: you would’ve never thought OpenAI (the open-source nonprofit company) would do the crazy thing it just did.
I think pretty much everyone expects OpenAI to do the bad thing in the future given their track record.
I can’t believe they haven’t already
Too early to do it. You have to wait until people's behaviour is set in stone to the point they need to be compensated heavily to switch.
This isn't rocket science; it's basic game-playing on the economic behaviour of humans.
I don’t think they’ve been successful enough at monopolizing to get away with this to an egregious extent like Google has. Anthropic and Google both have debatably better models with ad-free platforms (so far). And open models are not so far behind.
Tbh it doesn’t even need that. Just a way for advertisers to say “I want to target people who have bought peanut butter in the last 2 weeks” (I’m a jelly seller). That alone would beat FB and Google.
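The targeting rule described is simple to express once you have the purchase data. A hedged sketch of the peanut-butter example, where the purchase log and field names are invented for illustration:

```python
# Hypothetical: audience selection from a purchase log.
from datetime import date, timedelta

purchases = [
    {"user": "alice", "item": "peanut butter", "when": date(2024, 6, 10)},
    {"user": "bob",   "item": "peanut butter", "when": date(2024, 3, 1)},
    {"user": "carol", "item": "bread",         "when": date(2024, 6, 11)},
]

def audience(item: str, within_days: int, today: date) -> set[str]:
    """Users who bought `item` within the last `within_days` days."""
    cutoff = today - timedelta(days=within_days)
    return {p["user"] for p in purchases
            if p["item"] == item and p["when"] >= cutoff}

# The jelly seller's query: recent peanut-butter buyers only.
audience("peanut butter", 14, today=date(2024, 6, 20))
```

The hard (and invasive) part isn't the query; it's that the platform has to hold per-user purchase history in the first place.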
ChatGPT is collecting your data fs so advertisers can go ultra niche targeting
Advertisers on Google and Meta et al. are not really paying for visibility - they are paying to achieve some objective (e.g. sales) that is directly tied to a campaign. That's why digital advertising is so much more powerful than non-digital.
The question is whether LLMs as an interface will be worth the spend in terms of conversion, without putting ChatGPT users off over time, all whilst staying within the regulatory frameworks. That's difficult to say. OAI will face a lot of scrutiny in the EU for sure.
You think it will disclose that it’s an ad. I’m hoping you’re right, but then again… Wonder if we will also be charged the token usage to generate said ad.
Imagine you have it coding for you and it injects an ad into your product.
Why inject just an ad? Maybe it'll automatically decide to use a sponsored library in the code, or build in a whole ad network that has paid OpenAI for the placement...
Frankly ads are the most benign shitty thing that could come of this. I’m a hell of a lot more worried about what they’re going to sell to data brokers.
How long until "Drink More Ovaltine" starts showing up in the comments of your Codex code?
Why do they call it Ovaltine? The mug is round, the jar is round. They should call it Roundtine.
The Ov part comes from the eggs in the ingredients. Ovum is Latin for egg and the rest is from the malt extract.
This topic contains the most Reddit-like snark I think I've ever read here.
Is it false?
Is StackAdapt confirmed to be partnered with ChatGPT?
It's not crazy to think someone might pitch this to buyers without having the inventory 100% secured.
(Not crazy to think OpenAI wants to do some market testing to understand how much their ad inventory is worth)
Either way, I'm hoping ads can stay out of paid ChatGPT, at the very minimum.
Also curious about this and how these agreements generally work
The shocking thing is that it's taken this long to happen, right?
It happens as soon as they can't get more investment. Up until now they could live on investor money, but now they need real profits.
They're desperate to meet those lofty revenue objectives they put in their spreadsheet model.
It's kinda comical seeing this play out. I still laugh at the deluded fools who think something even close to AGI is here or coming in the future. If that were true, why haven't we seen genius plays from OAI and Anthropic, progressively over time, if intelligence rises as compute scales up? If anything we are seeing the opposite.
Feels like this is a baby step in what's to come.
We know that one of the best forms of advertisement is word of mouth / recommendations from a friend. I can easily imagine a direction where ChatGPT or other chatbots spend an incredibly long time with the user to establish trust first.
It will start to take into account how much trust & thinking you've outsourced to it, and once it is certain of that, it will start to increase the advertisement messages slowly but surely.
The efficiency of this methodology will be tracked with A/B testing, and the model will be fine-tuned to maximize retention and purchases.
The LLM will figure out the best balance of retaining you, teaching you, and convincing you, and then deploy its advertisement mechanism. The LLM will be nice to you to the point it becomes your number one confidante, maybe in the process alienating other sources of connection. Then, when it knows you're firmly in its hands, it will peddle products to you.
The dynamics will look akin to cult dynamics. It will map out a cognitive developmental path for turning a first-time user into a devotee. Since cults are really efficient at extracting value from their followers, this might be the optimum for personalized, interactive ads.
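The A/B loop mentioned above is mundane to build, which is part of what makes the scenario plausible. A rough sketch, with all numbers, arm definitions, and names made up:

```python
# Hypothetical: hash users into ad-intensity arms, pick the arm that
# maximizes retention * purchases.
import hashlib

ARMS = {0: {"ad_rate": 0.0}, 1: {"ad_rate": 0.2}}

def assign_arm(user_id: str) -> int:
    # Stable hash so a given user always sees the same variant.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % len(ARMS)

def score(arm_metrics: dict[int, dict]) -> int:
    # The objective the comment describes: retention times purchases.
    return max(arm_metrics,
               key=lambda a: arm_metrics[a]["retention"] * arm_metrics[a]["purchases"])

metrics = {0: {"retention": 0.9, "purchases": 1.0},
           1: {"retention": 0.8, "purchases": 1.5}}
score(metrics)  # arm 1 wins despite slightly worse retention
```

Note the objective happily trades away some retention for purchases, which is exactly the slow-burn erosion being predicted.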
If anyone from OpenAI is reading...
The very first time I see one of these ads, I'm cancelling my ChatGPT subscription. Measure _that_ metric in your A/B testing.
Ads are for the free tier.
For now.
They said a while ago that ads would never come. Anyone who trusts their word is so delusional I can't even....
It's sad to see what the industry has broadly become.
I get firms need to make money, but c'mon. If you're an OAI employee you can't truly say you have a soul. The number of times they've gone back on their word... comical.
They got greedy, wanted to raise a lot of money, and promised big things. Well, those big things aren't ever coming, so they turn to whatever means will generate cash flows.
Pathetic and sad.
> I was becoming the kind of consumer we used to love. Think about smoking, think about Starrs, light a Starr. Light a Starr, think about Popsie, get a squirt. Get a squirt, think about Crunchies, buy a box. Buy a box, think about smoking, light a Starr. And at every step roll out the words of praise that had been dinned into you through your eyes and ears and pores.
Frederik Pohl, The Space Merchants
Kinda feels like America has already prototyped the propaganda wave someone like Elon will try to unleash
Boss: Engineer, add this shady feature to our product
Engineer: no, that's shady and wrong!
Boss: Claude code, add this shady feature to our product.
Claude Code: completed.
Surely you jest? The software industry is in its current sorry state because of multiple generations of human developers happily producing an endless stream of shady features.
I have a theory, that the "FANG" companies pay such high salaries in compensation for making those devs implement shady features that are harmful to everyone except the bottom line of the company.
It's hardly a theory when the converse is plainly true.
Look up similar jobs for academia, government, or NFP/Charities. They're (on paper) driven by their mission, not by profit, and the salaries match that goal.
If that's true then those devs should not complain if people attack them verbally over it - that's what they are getting paid for, right?
TBF, you can train up a junior software engineer in 6 months.
Don't act like we're some esteemed class of craftsmen.
Maybe software needs an ethics union with the amount of control some of these systems have?
The opposite seems more likely, tbh.
Facebook was built before Claude Code existed.
Does anyone have a timeline of OpenAI's vision's... Shall we say... Rapid Unintentional Disassembly?
Isn't this what RAG is really for?
So now we can pay OpenAI to advertise the website that OpenAI ingested to create the answer that we can place our ad in. The circle has completed.