Having spent some time in post-production, this reads more like a “please don’t get us sued” memo than anything else.
This actually looks pretty good. The key takeaway I got was that they know their business depends on intellectual property rights, and that putting generative AI into final outputs or production work undermines the foundation of their future success by discounting or dismissing IP law and rights.
That’s likely to be the middle ground going forward for the smarter creative companies, and I’m personally all for it. Sure, use it for a pitch, or a demo, or a test - but once there’s money on the line (copyright in particular), get that shit outta there because we can’t own something we stole from someone else.
> get that shit outta there because we can’t own something we stole from someone else
How does anyone prove it, though? You can say "does that matter?" but once everybody starts doing it, it becomes a different story.
Anyone with a brain knows it is not stolen, but the fact that people will claim so is nevertheless a risk.
Shouldn’t be particularly surprising Netflix is leaning in here - they’ve been pretty open about viewing themselves as “second screen”/background content for people doing other things. Their primary need these days is for a large volume of somewhat passable content, especially content they can get for cheap. Spotify’s in a similar boat and has been filling the recommended playlists up with low-royalty elevator music.
"Generated material is temporary and not part of the final deliverables" sounds like they are not looking to generative AI for content that they will air to the public.
Later on they do have a note suggesting that the following might be OK if you use judgement and get their approval: "Using GenAI to generate background elements (e.g., signage, posters) that appear on camera"
"If you can confidently say "yes" to all the above, socializing the intended use with your Netflix contact may be sufficient. If you answer “no” or “unsure” to any of these principles, escalate to your Netflix contact for more guidance before proceeding, as written approval may be required."
They do want to save money by cheaply generating content, but it's only cheap if no expensive lawsuits result. Hence the need for clear boundaries and legal review of uses that may be risky from a copyright perspective.
Yeah, that's a fair assessment. The specific mention of "union-covered work" plays to that interpretation as well:
> GenAI is not used to replace or generate new talent performances or union-covered work without consent.
Yup. Everything will be muzak in the end.
But what buzzword should we coin for “Netflix muzak”?
And when we're saturated with it all, we'll start buying DVDs (or other future media) again.
> Using unowned training data (e.g., celebrity faces, copyrighted art)
How would one ever know that the GenAI output is not influenced by or based on copyrighted content?
Getty and Adobe offer models that were trained only on images that they have the rights to. Those models might meet Netflix’s standards?
I kind of wonder if that even works.
If you take a model trained on Getty and ask it for Indiana Jones or Harry Potter, what does it give you? These things are popular enough that it's likely to be present in any large set of training data, either erroneously or because some specific works incorporated them in a way that was licensed or fair use for those particular works even if it isn't in general.
And then when it conjures something like that by description rather than by name, how are you any better off than with something trained on random social media? It's not like you get to make unlicensed AI Indiana Jones derivatives just because Getty has a photo of Harrison Ford.
I work in this space. In traditional diffusion-based regimes (paired image and text), one can absolutely filter the text to remove all occurrences of "Indiana Jones." Likewise, Adobe Stock has content moderation that ensures (up to the limits of human moderation) no infringing content gets in. To the model, it is a world without Indiana Jones.
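The caption-level filtering described above can be sketched roughly like this. The blocklist terms, record format, and function names are illustrative assumptions, not any vendor's actual pipeline:

```python
# Minimal sketch of caption filtering for a paired image-text training set.
# The blocklist and (image_path, caption) format are hypothetical.
import re

BLOCKLIST = ["indiana jones", "harry potter", "harrison ford"]

# One pattern matching any blocked phrase, case-insensitive,
# tolerating arbitrary whitespace between the words of a phrase.
_pattern = re.compile(
    "|".join(
        r"\b" + r"\s+".join(map(re.escape, term.split())) + r"\b"
        for term in BLOCKLIST
    ),
    re.IGNORECASE,
)

def is_clean(caption: str) -> bool:
    """Return True if the caption mentions no blocklisted term."""
    return _pattern.search(caption) is None

def filter_pairs(pairs):
    """Keep only (image_path, caption) pairs whose caption is clean."""
    return [(img, cap) for img, cap in pairs if is_clean(cap)]

pairs = [
    ("img1.jpg", "A man in a fedora with a whip, Indiana  Jones style"),
    ("img2.jpg", "A quiet mountain lake at dawn"),
]
print(filter_pairs(pairs))  # only the lake image survives
```

This catches only captions that name the character; it does nothing about images that depict him under a vaguer description, which is exactly the gap the parent comment is pointing at.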
It comes down to who is liable for the edge cases, I suspect. Adobe will compensate the end user if they get sued for using a Firefly-generated image (probably up to some limit).
Getting sued occasionally is a cost of doing business in some industries. It’s about risk mitigation rather than risk elimination.
All the indemnities I’ve read have clauses though that say if you intentionally use it to make something copyrighted they won’t protect you.
So if you put obviously copyrighted things in the prompt you’ll still be on your own.
Adobe Firefly absolutely has a Spider-Man problem.
Netflix could also use or provide their own TV/movie productions as training data.
Lionsgate tried that and found that even their entire archive wasn't nearly enough to produce a useful model: https://www.thewrap.com/lionsgate-runway-ai-deal-ip-model-co... and https://futurism.com/artificial-intelligence/lionsgate-movie...
This amuses me.
Consumers have long wanted a single place to access all content. Netflix was probably the closest anyone ever got, and even then it had regional difficulties. As competitors rose, they stopped licensing their content to Netflix, and Netflix is now arguably just another face in the crowd.
Now Netflix wants to leverage AI to produce more content and, bam, stung by the same bee. No one is going to license their content for training if the results of that training can be used in perpetuity; they will want a permanent cut. Which means studios either need to support fair use or, more likely, they will all put up a big wall and suck eggs.
Maybe now all that product placement is finally coming back to haunt them.
Netflix joins everyone else jumping on the "rules for thee, but not for me" train.
>GenAI is not used to replace or generate new talent performances
This is 100% a lie.
Studios will use this to replace humans. In fact, the idea is for the technology, AI in general, to become so good that you don't need humans anywhere in the pipeline: the best thing a human could produce would only be as good as the average output of the model, except the model would be far cheaper and faster.
And... that's okay, honestly. It's a capitalism problem. I believe with all my strength that this automation is fundamentally different from past waves. There won't be new jobs this time.
But the solution was never to ban technology
The part you quote is part of the list of conditions for an if-statement, so how could it be a lie?
The issue isn't whether they said that thing or not; companies say a lot of things that are fundamentally a lie, things to keep up appearances, which are often not enforced. It's like companies claiming they believe in fair pay while using Chinese sweatshops or whatever.
In this case, for instance, Netflix still has relationships with its partners that it doesn't want to damage at this moment, and we are not yet at the point where AI can generate a whole feature-length film indistinguishable from a traditional one. They may also be apprehensive about legal risk and copyrightability right now; big companies' lawyers are usually pretty conservative about taking any "risks," so they probably want to wait for the dust to settle on legal precedents and the like.
Anyway, the issue here is:
"Does that statement actually reflect what Netflix truly thinks, i.e., do they genuinely believe GenAI shouldn't be used to replace or generate new talent performances?"
Because they believe in the sanctity of human authorship or whatever? And the answer is: no, no, hell no, absolutely no. That is a lie.
I’m inclined to agree. The goalposts will move once the time is right. I’ve already personally witnessed it happening; a company sells their AI-whatever strictly along the lines of staff augmentation and a force multiplier for employees. Not a year later and the marketing has shifted to cost optimization, efficiency, and better “uptime” over real employees.
I am thinking of building an association of AI consumers so we can organize to praise or boycott whatever we collectively find acceptable or unacceptable. I'll spend some time reading this in detail later on, but whatever it states or implies, positive or negative, it's not for businesses to set the rules as if they owned the place. Consumer associations are powerful and can't be fired when striking, since the customer is always right.
> it's not for businesses to set the rules as if they owned the place.
This is for studios and companies that are producing content for Netflix.
If you want to sell to Netflix, you have to play by Netflix's rules.
Netflix has all kinds of rules and guidelines, including which camera bodies and lenses are allowed [1].
[1] https://partnerhelp.netflixstudios.com/hc/en-us/articles/360...
>I am thinking of building an association of AI consumers
The Gooner Association?
I suspect that if GenAI starts to make content which can grab people's attention, and do it cheaply, then Netflix will become far more accommodating very quickly.
They do not want to be disrupted.
Netflix is basically strangling the creative potential of GenAI before it can even breathe. Their new “guidelines” read like a corporate legal panic document, not a policy for innovation. Every use case needs escalation, approval, or a lawyer’s blessing. That’s not how creativity works.
The irony is rich: they built their empire on disrupting old Hollywood gatekeeping, and now they're recreating it in AI form. Instead of letting creators experiment freely with these tools, Netflix wants control over every brushstroke of AI creativity.