Apparently, Tavis Ormandy made something of a death threat against ffmpeg.
"Let's hope a bug in some 1990s game codec doesn't get some ffmpeg core developer popped." https://x.com/FFmpeg/status/1985335035818369164 I can't find the original, I think he either deleted it or it got reported.
Maybe I'm crazy, but I think some security researchers really do believe their idea of security trumps everything, including the lives of maintainers.
As Mark Atwood, an open source policy expert, pointed out on Twitter, he had to keep telling Amazon not to do things that would mess up FFmpeg, repeatedly explaining to his bosses: “They are not a vendor, there is no NDA, we have no leverage, your VP has refused to help fund them, and they could kill three major product lines tomorrow with an email. So, stop, and listen to me … ”
I agree with the headline here. If Google can pay someone to find bugs, they can pay someone to fix them. How many times have managers said "Don't come to me with problems, come with solutions"?
I've been a proponent of upstreaming fixes for open source software.
Why?
- It makes continued downstream consumption easier, you don't have to rely on fragile secret patches.
- It gives back to projects that helped you to begin with, it's a simple form of paying it forward.
- It all around seems like the "ethical" and "correct" thing to do.
Unfortunately, in my experience, there are often a lot of barriers to upstreaming within companies. The reasons can be anything from compliance to process, you name it... It's unfortunate.
I have a very distinct recollection of talks about hardware aspirations and upstreaming software fixes at a large company. The cultural response was jarring.
As yet, Valve is the only company I know of doing this, and it's paying dividends both for Linux and for Valve. In just 5ish years of Valve investing people and money into Linux (specifically Mesa and WINE), Linux has gone from being kind of shaky at running Windows software to "I can throw a Windows program or game at it and it usually works". Imagine how much further along the ecosystem would be if Open Source hadn't existed, only FOSS, and companies were legally obligated to either publish source code or otherwise invest in the ecosystem.
> Valve is the only company I know of [upstreaming fixes for open source software]
Sorry, that's ridiculous. Basically every major free software dependency of every major platform or application is maintained by people on the payroll of one or another tech giant (edit: or an entity like LF or Linaro funded by the giants, or in a smaller handful of cases a foundation like the FSF with reasonably deep industry funding). Some are better than others, sure. Most should probably be doing more. FFmpeg in particular is a project that hasn't had a lot of love from platform vendors (most of whom really don't care about software codecs or legacy formats anymore), and that's surely a sore point.
But to pretend that SteamOS is the only project working with upstreams is just laughable.
From my time working at a Fortune 100 company, if I ever mentioned pushing even small patches to libraries we effing used, I'd just be met with "try to focus on your tickets". Their OSS release policies were also super byzantine, seemingly needing review of everything you'd release, but the few times I tried to do it the official way, I just never heard anything back from the black-hole mailing list you were supposed to contact.
Yes, I've also worked on OpenStack components at a university, and there I see Red Hat or IBM employees pushing up loads of changes. I don't know if I've ever seen a Walmart, UnitedHealth, Chase Bank, or Exxon Mobil (to pick some of the largest companies) email address push changes.
Sure, but the parent’s comment hits on something perhaps. All the tech giants contribute more haphazardly and for their own internal uses.
Valve does seem to be in the somewhat rare position of making a proper Linux distro work well with games. Google’s Chromebooks don’t contribute to the Linux ecosystem in the same holistic fashion, it seems.
I upstreamed a 1-line fix, plus tests, at my previous company. I had to go through a multi-month process of red tape and legal reviews to make it happen. That was a discouraging experience to say the least.
My favorite is when while you were working through all that, the upstream decided they need a CLA. And then you have to go through another round of checking to see if your company thinks it's ok for you to agree to sign that for a 1 line change.
Certainly easier to give a good bug report and let upstream write the change, if they will.
I've literally had my employer's attorneys tell me I can't upstream patches because it would put my employer's name on the project, and they don't want the liability.
No, it didn't help giving them copies of licenses that have the usual liability clauses.
It seems a lot of corporate lawyers fundamentally misunderstand open source.
Corporate counsel will usually say no to anything unusual because there's no personal upside for them to say yes. If you escalate over their heads with a clear business case then you can often get a senior executive to overrule the attorneys and maybe even change the company policy going forward. But this is a huge amount of extra unpaid work, and potentially politically risky if you don't have a solid management chain.
Sounds like your employer's attorneys need to be brought to heel by management. Like most things, this is a problem of management not understanding that details matter.
Maybe I've just gotten lucky, but at companies I've worked for I've usually gotten the go-ahead to contribute upstream on open source projects as long as it's something important for what we are working on. The only reason I didn't do a whole lot as part of my work at Google is because most of the open source projects I contributed to at Google were Google projects that I could contribute to from the google3 side, and that doesn't count.
In a follow-up tweet, Mark Atwood elaborates: "Amazon was very carefully complying with the licenses on FFmpeg. One of my jobs there was to make sure the company was doing so. Continuing to make sure the company was, was often the reason I was having a meeting like that inside the company."
I interpret this as meaning there was an implied "if you screw this up" at the end of "they could kill three major product lines with an email."
Are you interpreting that as "if we violate the license, they can revoke our right to use the software"?? And they use it in 3 products, so that would be really bad. It would make sense, then, to have a compliance person.
Easy: ffmpeg discontinues or relicenses some ffmpeg functionality that AWS depends on for those product lines and AWS is screwed. I've seen that happen in other open source projects.
And then the argument for refusing to just pay ffmpeg developers gets even more flimsy.
The entire point here is to pay for the fixes/features you keep demanding, else the project is just going to do as it desires and ignore you.
More and more OSS projects are getting to this point as large enterprises (especially in the SaaS/PaaS spheres) continue to take advantage of those projects and treat them like unpaid workers.
Not really. Their whole reason for not funding open source is it essentially funds their competitors who use the same projects. That's why they'd rather build a closed fork in-house than just hand money to ffmpeg.
It's a dumb reason, especially when there are CVE bugs like this one, but that's how executives think.
They COULD, but history has shown they would rather start and maintain their own fork.
It might not make sense morally, but it makes total sense from a business perspective… if they are going to pay for the development, they are going to want to maintain control.
I always like to point out that "Open Source" was a deliberate watering-down of the moralizing messaging of Free Software to try and sell businesses on the benefits of developing software in the open.
> We realized it was time to dump the confrontational attitude that has been associated with "free software" in the past and sell the idea strictly on the same pragmatic, business-case grounds that motivated Netscape.
There should be a "if you use this product in a for-profit environment, and you have a yearly revenue of $500,000,000,000+ ... you can afford to pay X * 100,000/yr" license.
That's the Llama license and yeah, a lot of people prefer this approach, but many don't consider it open source. I don't either.
In fact, we are probably just really lucky that some early programmers were kooky believers in the free software philosophy. Thank God for them. So much of what I do owes to the resulting ecosystem that was built back then.
A fork is more expensive to maintain than funding/contributing to the original project. You have to duplicate all future work yourselves, third party code starts expecting their version instead of your version, etc.
Something more dangerous would be "Amazon is already breaking the license, but the maintainers for now haven't put in the work to stop the infringement".
Yes, definitely. I was just saying that if the license ever did change, they would move to an in-house library. In fact, they would probably release the library for consumer use as an AWS product.
Relicensing isn't necessary. If you violate the GPL with respect to a work you automatically lose your license to that work.
It's enough if one or two main contributors assert their copyrights. Their contributions are so tangled with everything else after years of development that it can't meaningfully be separated away.
I don’t know about ffmpeg, but plenty of OSS projects have outlined rules for who/when a project-wide/administrative decision can be made. It’s usually outlined in a CONTRIB or similar file.
I'd guess Prime Video heavily relies on ffmpeg, then you've got Elastic Transcoder and the Elemental Video Services. Probably CloudFront also has special things for streaming that rely on ffmpeg.
The "kill it with an email" probably means that whoever said this is afraid that some usecase there wouldn't stand up to an audit by the usual patent troll mothercluckers. The patents surrounding video are so complex, old and plentiful that I'd assume full compliance is outright impossible.
AWS MediaConvert as well, which is a huge API (in the surface it covers); it sits under Elemental but is kinda its own thing. Willing to bet (though I don't know) that that is ffmpeg somewhere underneath.
The API manual for it is nearly 4000 pages and it can do insane stuff[1].
I had to use it at last job(TM), it's not terrible API-wise.
> "Don't come to me with problems, come with solutions"
The problem is, the issue in the article is explicitly named as "CVE slop", so if the patch is of the same quality, it might require quite some work anyway.
The linked report seems to me to be the furthest thing from "slop". It is an S-tier bug report that includes a complete narrative, crash artifacts, and detailed repro instructions. I can't believe anyone is complaining about what is tied for the best bug report I have ever seen. https://issuetracker.google.com/issues/440183164?pli=1
But it's also a bug report about the decoder for "SANM ANIM v0" - a format so obscure almost all the search results are the bug report itself. Possibly a format exclusive to mid-1990s LucasArts games [1]
Pretty crazy that ffmpeg supports the codec in the first place, IMHO.
I can understand volunteers not wanting to sink time into maintaining a codec to play a video format that hasn't been used since the Clinton administration. gstreamer divides their plugins into 'good', 'bad' and 'ugly' to give them somewhere to stash unmaintained codecs.
It's a codec that is enabled by default at least on major Linux distributions, and that will be processed by ffmpeg without any extra flags. Anyone playing an untrusted video file without explicitly overriding the codec autodetection is vulnerable.
The format being obscure and having no real usage doesn't help when it's the attackers creating the files. The obscure formats are exposing just as much attack surface as the common ones.
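For downstream consumers who can't change what ffmpeg ships, one mitigation this thread keeps circling is to reject files whose streams fall outside an explicit codec whitelist before decoding anything. A minimal C sketch against libavformat (the function name and the whitelist are made up for illustration; error handling abbreviated):

    #include <libavformat/avformat.h>

    /* Allow only the codecs this pipeline actually expects. */
    static int codec_allowed(enum AVCodecID id)
    {
        switch (id) {
        case AV_CODEC_ID_H264:
        case AV_CODEC_ID_VP9:
        case AV_CODEC_ID_AV1:
            return 1;
        default:
            return 0;   /* everything else, however obscure */
        }
    }

    /* Returns 0 iff every stream in the file uses a whitelisted codec.
     * Note the demuxer is chosen by probing the file's contents ("file
     * magic"), not its extension, which is exactly the point above. */
    int check_file(const char *path)
    {
        AVFormatContext *fmt = NULL;
        int ret = -1;
        if (avformat_open_input(&fmt, path, NULL, NULL) < 0)
            return -1;
        /* Caution: stream-info probing may itself exercise decoders,
         * which is why compiling codecs out entirely is stronger. */
        if (avformat_find_stream_info(fmt, NULL) >= 0) {
            ret = 0;
            for (unsigned i = 0; i < fmt->nb_streams; i++)
                if (!codec_allowed(fmt->streams[i]->codecpar->codec_id))
                    ret = -1;   /* reject the whole file */
        }
        avformat_close_input(&fmt);
        return ret;
    }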
> Pretty crazy that ffmpeg supports the codec in the first place, IMHO.
Sure, it's a valid bug report. But I don't understand why there has been so much drama over this when all the ffmpeg folks have to do is say "sorry, this isn't a priority for us so we'll get to it as soon as we can" and put the issue in the backlog as a low priority. If Google wants the issue fixed faster, they can submit a fix. If they don't care enough to do that, they can wait. No big deal either way. Instead, ffmpeg is getting into a public tiff with them over what seems to be a very easily handled issue.
Yes, you're very right. They could simply have killed a codec that no one uses anymore, or put it behind a compile flag (ffmpeg's build system already has per-decoder switches, something like ./configure --disable-decoder=sanm), so if you really want it, you can still enable it.
But no. Intentionally or not, there was a whole drama created around it [1], with folks being blamed [2] for saying exactly what you said above, only because of their past (!) employers.
Instead of using the situation to highlight the need for more corporate funding for open source projects in general, it became a public s**storm, with developers questioning their future contributions.
I get that the ffmpeg people have limited time and resources, I get that it would be nice if Google (or literally anyone else) patched this themselves and submitted that upstream. But "everyone else downstream of us should compile out our security hole" is a terrible way to go about things. If this is so obscure of a bug that there's no real risk, then there's no need for anyone to worry that the bug has been reported and will be publicized. On the other hand, if it's so dangerous that everyone should be rebuilding ffmpeg from source and compiling it out, then it really needs to be fixed upstream.
Edit: And also, how is anyone supposed to know they should compile the codec out unless someone makes a bug report and makes it public in the first place?
There are dozens if not hundreds of issues just like this one in ffmpeg, except for codecs that are infinitely more common. Google has been running all sorts of fuzzers against ffmpeg for over a decade at this point and it just never ends. It's a 20 year old C project maintained by poorly funded volunteers that mostly gives every media file ever the be-liberal-in-what-you-accept treatment, because people complain if it doesn't decode some bizarrely non-standard MPEG4 variant recorded with some Chinese plastic toy from 2008. Of course it has all of the out-of-bounds bugs. I poked around on the issue tracker for like 5 minutes and found several "high impact" issues similar to the one in TFA just from the last two or three months, including at least one that hasn't passed the 90 day disclosure window yet.
Nobody who takes security even remotely seriously should decode untrusted media files outside of a sandboxed environment. Modern media formats are in themselves so complex one starts wondering if they're actually Turing complete, and in ffmpeg the attack surface is effectively infinitely large.
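A real sandbox (seccomp filters, namespaces, a separate VM) is considerably more involved, but the cheapest version of the idea is just doing the decode in a disposable child process with hard resource limits, so a compromised or runaway decoder can't take the parent down with it. A rough POSIX sketch, with made-up limits and assuming an ffmpeg binary on the PATH; this is one thin layer, not a substitute for proper sandboxing:

    #include <sys/resource.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Decode untrusted input in a throwaway child with CPU/memory caps. */
    int decode_untrusted(const char *in, const char *out)
    {
        pid_t pid = fork();
        if (pid < 0)
            return -1;
        if (pid == 0) {
            struct rlimit cpu = { 30, 30 };                 /* seconds */
            struct rlimit mem = { 1UL << 30, 1UL << 30 };   /* 1 GiB   */
            setrlimit(RLIMIT_CPU, &cpu);
            setrlimit(RLIMIT_AS, &mem);
            execlp("ffmpeg", "ffmpeg", "-i", in, out, (char *)NULL);
            _exit(127);   /* exec failed */
        }
        int status;
        waitpid(pid, &status, 0);
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }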
The issue is CVE slop because it just doesn't matter if you consider the big picture.
I don't get why you think linking to multiple legitimate and high quality bug reports with detailed analysis and precise reproduction instructions demonstrates "slop". It is the opposite.
This is software that is directly or indirectly run by millions of people on untrusted media files without sandboxing. It's not even that they don't care about security, it's that they're unaware that they should care. It should go without saying that they don't deserve to be hacked just because of that. Big companies doing tons of engineering work to add defense in depth for use cases on their own infrastructure (via sandboxing or disabling obsolete codecs) doesn't help those users. Finding and fixing the vulnerabilities does.
All of these reports are effectively autogenerated by Big Sleep from fuzzing.
Again, Google has been doing this sort of thing for over a decade and has found untold thousands of vulnerabilities like this one. It is not at all clear to me that their doing so has been all that valuable.
Yeah but as you can see from the bug report ffmpeg automatically triggers the codec based on file magic, so it is possible that if you run some kind of network service or anything that handles hostile data an attacker could trigger the bug.
I’m an open source maintainer, so I empathize with the sentiment that large companies appear to create work for unpaid maintainers by disclosing security issues. But appearance is operative: a security issue is something that I (as the maintainer) would need to fix regardless of who reports it, or would otherwise need to accept the reputational hit that comes with not triaging security reports. That’s sometimes perfectly fine (it’s okay for projects to decide that security isn’t a priority!), but you can’t have it both ways.
If google bears no role in fixing the issues it finds and nobody else is being paid to do it either, it functionally is just providing free security vulnerability research for malicious actors because almost nobody can take over or switch off of ffmpeg.
I don’t think vulnerability researchers are having trouble finding exploitable bugs in FFmpeg, so I don’t know how much this actually holds. Much of the cost center of vulnerability research is weaponization and making an exploit reliable against a specific set of targets.
(The argument also seems backwards to me: Google appears to use a lot of not-inexpensive human talent to produce high quality reports to projects, instead of dumping an ASan log and calling it a day. If all they cared about was shoveling labor onto OSS maintainers, they could make things a lot easier for themselves than they currently do!)
Yeah it's more effort, but I'd argue that security through obscurity is a super naive approach. I'm not on Google's side here, but so much infrastructure is "secured" by gatekeeping knowledge.
I don't think you should invoke the idea of naivete when you fail to address the unhappy but perfectly simple reality: the ideal option doesn't exist, it's a fantasy that isn't actually available, and among the available options, none of them good, one is worse than another.
"obscurity isn't security" is true enough, as far as it goes, but is just not that far.
And "put the bugs that won't be fixed soon on a billboard" is worse.
The super naive approach is ignoring that and thinking that "fix the bugs" is a thing that exists.
More fantasy. Presumes the bug only exists in some part of ffmpeg that can be disabled at all, and that you don't need, and that you are even in control over your use of ffmpeg in the first place.
Sure, in maybe 1 special lucky case you might be empowered. And in 99 other cases you are subject to a bug without being in the remotest control over it, since it's buried away within something you use, where you don't even have the option not to use the surface service or app, let alone control its subcomponents.
I guess the question for a person at Google who discovers a bug they don’t personally have time to fix is: should they report the bug at all? They don’t necessarily know if someone else will be able to pick it up. So the current “always report” rule makes sense, since you don’t have to figure out if someone can fix it.
The same question applies if they have time to fix it in six months, since that presumably still gives attackers a large window of time.
In this case the bug was so obscure it’s kind of silly.
My takeaway from the article was not that the report was a problem, but a change in approach from Google that they’d disclose publicly after X days, regardless of if the project had a chance to fix it.
To me it’s okay to “demand” that a for-profit company (e.g. Google) fix an issue fast, because they have resources. But to “demand” that an OSS project fix something within a certain (possibly tight) timeframe... well, I’m sure you know better than me that that’s not how volunteering works.
On the other hand as an ffmpeg user do you care? Are you okay not being told a tool you're using has a vulnerability in it because the devs don't have time to fix it? I mean someone could already be using the vulnerability regardless of what Google does.
>Are you okay not being told a tool you're using has a vulnerability in it because the devs don't have time to fix it?
Yes? It's in the license
> NO WARRANTY
> 15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
If I really care, I can submit a patch or pay someone to. The ffmpeg devs don't owe me anything.
Not being told the existence of bugs is different from having a warranty on software. How would you submit a patch on a bug you were not aware of?
Google should provide a fix but it's been standard to disclose a bug after a fixed time because the lack of disclosure doesn't remove the existence of the bug. This might have to be rethought in the context of OSS bugs but an MIT license shouldn't mean other people can't disclose bugs in my project.
Google publicly disclosing the bug doesn't only let affected users know. It also lets attackers know how they can exploit the software.
Holding public disclosure over the heads of maintainers if they don't act fast enough is damaging not only to the project, but to end users themselves also. There was no pressing need to publicly disclose this 25 year old bug.
How is having a disclosure policy so that you balance the tradeoffs between informing people and leaving a bug unreported "holding" anything over the heads of the maintainers? They could just file public bug reports from the beginning. There's no requirement that they file non-public reports first, and certainly not everyone who does file a bug report is going to do so privately. If this is such a minuscule bug, then whether it's public or not doesn't matter. And if it's not a minuscule bug, then certainly giving some private period, but then also making a public disclosure is the only responsible thing to do.
Have you ever used a piece of software that DID make guarantees about being safe?
Every software I've ever used had a "NO WARRANTY" clause of some kind in the license. Whether an open-source license or a EULA. Every single one. Except, perhaps, for public-domain software that explicitly had no license, but even "licenses" like CC0 explicitly include "Affirmer offers the Work as-is and makes no representations or warranties of any kind concerning the Work ..."
I don't know what our contract terms were for security issues, but I've certainly worked on a product where we had 5 figure penalties for any processing errors or any failures of our system to perform its actions by certain times of day. You can absolutely have these things in a contract if you pay for it, and mass market software that you pay for likely also has some implied merchantability depending on jurisdiction.
But yes, things you get for free have no guarantees, and there should be no expectations put on the gift giver beyond not being actively, intentionally malicious.
Point. As part of a negotiated contract, some companies might indeed put in guarantees of software quality; I've never worked in the nuclear industry or any other industries where that would be required, so my perspective was a little skewed. But all mass-distributed software I've ever seen or heard of, free or not, has that "no warranty" clause, and only individual contracts are exceptions.
Also, "depending on jurisdiction" is a good point as well. I'd forgotten how often I've seen things like "Offer not valid in the state of Delaware/California/wherever" or "If you live in Tennessee, this part of the contract is preempted by state law". (All states here are pulled out of a hat and used for examples only, I'm not thinking of any real laws).
This program discloses security issues to the projects and only discloses them after they have had a "reasonable" chance to fix it though, and projects can request extensions before disclosure if projects plan to fix it but need more time.
Google runs this security program even on libraries they do not use at all, where it's not a demand, it's just whitehat security auditing. I don't see the meaningful difference between Google doing it and some guy with a blog doing it here.
Great, so Google is actively spending money on making open source projects better and more secure. And for some reason everyone is now mad at them for it because they didn't also spend additional money making patches themselves. We can absolutely wish and ask that they spend some money and resources on making those patches, but this whole thing feels like the message most corporations are going to take is "don't do anything to contribute to open source projects at all, because if you don't do it just right, they're going to drag you through the mud for it" rather than "submit more patches"
The user is vulnerable while the problem is unfixed. Google publishing a vulnerability doesn't change the existence of the vulnerability. If Google can find it, so can others.
Making the vulnerability public makes it easy to find to exploit, but it also makes it easy to find to fix.
If it is so easy to fix, then why doesn't Google fix it? So far they've spent more effort in spreading knowledge about the vulnerability than fixing it, so I don't agree with your assessment that Google is not actively making the world worse here.
I didn't say it was easy to fix. I said a publication made it easy to find it, if someone wanted to fix something.
If you want to fix up old codecs in ffmpeg for fun, would you rather have a list of known broken codecs and what they're doing wrong, or would you rather have to find a broken codec first?
What a strange sentence. Google can do a lot of things that nobody can do. The list of things that only Google, a handful of nation states, and a handful of Google-peers can do is probably even longer.
Sure, but running a fuzzer on ancient codecs isn't that special. I can't do it, but if I wanted to learn how, codecs would be a great place to start. (in fact, Google did some of their early fuzzing work in 2012-2014 on ffmpeg [1]) Media decoders have been the vector for how many zero interaction, high profile attacks lately? Media decoders were how many of the Macromedia Flash vulnerabilities? Codecs that haven't gotten any new media in decades but are enabled in default builds are a very good place to go looking for issues.
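For a sense of how low the barrier is: a libFuzzer-style harness for a single libavcodec decoder fits in a few dozen lines. A simplified sketch (ffmpeg's actual OSS-Fuzz harness is tools/target_dec_fuzzer.c in the source tree, which derives codec parameters properly; the hardcoded dimensions here are just a stand-in):

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/mem.h>

    /* Feed fuzzer-generated bytes straight into one decoder and see
     * what breaks. Build with clang -fsanitize=fuzzer,address. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        if (size == 0)
            return 0;
        const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_SANM);
        AVCodecContext *ctx = codec ? avcodec_alloc_context3(codec) : NULL;
        if (!ctx)
            return 0;
        ctx->width  = 320;   /* arbitrary; real harnesses derive these */
        ctx->height = 200;
        if (avcodec_open2(ctx, codec, NULL) >= 0) {
            AVPacket *pkt = av_packet_alloc();
            AVFrame *frame = av_frame_alloc();
            /* Decoders may overread, so input is copied into a buffer
             * padded with AV_INPUT_BUFFER_PADDING_SIZE zero bytes. */
            uint8_t *buf = av_malloc(size + AV_INPUT_BUFFER_PADDING_SIZE);
            if (pkt && frame && buf) {
                memcpy(buf, data, size);
                memset(buf + size, 0, AV_INPUT_BUFFER_PADDING_SIZE);
                pkt->data = buf;
                pkt->size = (int)size;
                if (avcodec_send_packet(ctx, pkt) >= 0)
                    while (avcodec_receive_frame(ctx, frame) >= 0)
                        ;   /* drain; we only care about crashes */
            }
            av_free(buf);
            av_frame_free(&frame);
            av_packet_free(&pkt);
        }
        avcodec_free_context(&ctx);
        return 0;
    }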
Google does have immense scale that makes some things easier. They can test and develop congestion control algorithms with world wide (ex-China) coverage. Only a handful of companies can do that; nation states probably can't. Google isn't all powerful either, they can't make Android updates really work even though it might be useful for them.
As clearly stated, most users of ffmpeg are unaware that they are using it. Even knowing about a vulnerability in ffmpeg, they wouldn't know they are affected.
Really, the burden is on those shipping products that depend on ffmpeg: they are the ones who have to fix the security issues for their customers. If Google is one of those companies, they should provide the fix in the given time.
But how are those companies supposed to know they need to do anything unless someone finds and publicly reports the issue in the first place? Surely we're not advocating for a world where every vendor downstream of the ffmpeg project independently discovers and patches security vulnerabilities without ever reporting the issues upstream right?
I have about 100x as much sympathy for an open source project getting time to fix a security bug than I do a multibillion dollar company with nearly infinite resources essentially blackmailing a small team of developers like this. They could -easily- pay a dev to fix the bug and send the fix to ffmpeg.
Since when are bug reports blackmail? If some retro game enthusiast discovered this bug and made a blog post about it that went to the front page of HN, is that blackmail? If someone running a fuzzer found this bug and dumped a public bug report into github is that blackmail? What if google made this report privately, but didn't say anything about when they would make it public and then just went public at some arbitrary time in the future? How is "heads up, here's a bug we found, here's the reproduction steps for it, we'll file a public bug report on it soon" blackmail?
In my case, yes, but my pipeline is closed. Processes run on isolated instances that are terminated without delay as soon as the workflow ends. Even if uncaught fatal errors occur, janitor scripts run to ensure instances are terminated on a fast schedule. This isn't something running on my personal device with random content provided by some unknown someone on the interwebs.
So while this might be a high security risk because it possibly could allow RCE, the real-world risk is very low.
Let's say that FFMPEG has a CVSS 10 CVE where a very easy stream can cause it to RCE. So what?
We are talking about software commonly deployed by end users to encode their own media. Something that rarely comes in untrusted forms. For an exploit to happen, you need a situation where an attacker gets an exploited media file out to people who then commonly transcode it via FFMPEG. Not an easy task.
This sure does matter to the likes of google assuming they are using ffmpeg for their backend processing. It doesn't matter at all for just about anyone else.
You might as well tell me that `tar` has a CVE. That's great, but I don't generally go around tarring or untarring files I don't trust.
AIUI, (lib)ffmpeg is used by practically everything that does anything with video, including such definitely-security-sensitive things as Chrome, which people use to play untrusted content all the time.
Ffmpeg is a versatile toolkit used in a lot of different places.
I would be shocked if any company working with user-generated video, from the likes of Zoom or TikTok or YouTube down to small apps all over, did not have it in their pipeline somewhere.
There are alternatives such as gstreamer and proprietary options. I can’t give names, but can confirm at least two moderately sized startups that use gstreamer in their media pipeline instead of ffmpeg (and no, they don’t use gst-libav).
One because they are a rust shop and gstreamer is slightly better supported in that realm (due to an official binding), the other because they do complex transformations with the source streams at a basal level vs high-level batch transformations/transcoding.
If you use a trillion dollar AI to probe open source code in ways that no hacker could, you're kind of unearthing the vulnerabilities yourself if you disclose them.
> My takeaway from the article was not that the report was a problem, but a change in approach from Google that they’d disclose publicly after X days, regardless of if the project had a chance to fix it.
That is not an accurate description? Project Zero was using a 90 day disclosure policy from the start, so for over a decade.
What changed[0] in 2025 is that they disclose earlier than 90 days that there is an issue, but not what the issue is. And actually, from [1] it does not look like that trial policy was applied to ffmpeg.
> To me it’s okay to “demand” that a for-profit company (e.g. Google) fix an issue fast, because they have resources. But to “demand” that an OSS project fix something within a certain (possibly tight) timeframe... well, I’m sure you know better than me that that’s not how volunteering works.
You clearly know that no actual demands or even requests for a fix were made, hence the scare quotes. But given you know it, why call it a "demand"?
The fact that details of the issue _will_ be disclosed publicly is an implicit threat. Sure it's not an explicit threat, but it's definitely an implicit threat. So the demand, too, is implicit: fix this before we disclose publicly, or else your vulnerability will be public knowledge.
When you publicize a vulnerability you know someone doesn't have the capacity to fix according to the requested timeline, you are simultaneously increasing the visibility of the vulnerability and name-calling the maintainers. All of this increases the pressure on the maintainers, and it's fair to call that a "demand" (quotes-included). Note that we are talking about humans who will only have their motivation dwindle: it's easy to say that they should be thick-skinned and ignore issues they can't objectively fix in a timely manner, but it's demoralizing to be called out like that when everyone knows you can't do it, and you are generally doing your best.
It's similar to someone cooking a meal for you, and you go on and complain about every little thing that could have been better instead of at least saying "thank you"!
Here, Google is doing the responsible work of reporting vulnerabilities. But any company productizing ffmpeg usage (Google included) should sponsor a security team to resolve issues in high profile projects like these too.
Sure, the problem is that Google is a behemoth and their internal org structure does not cater to this scenario, but this is what the complaint is about: make your internal teams do the right thing by both reporting, but also helping fix the issue with hands-on work. Who'd argue against halving their vulnerability finding budget and using the other half to fund a security team that fixes highest priority vulnerabilities instead?
> When you publicize a vulnerability you know someone doesn't have the capacity to fix according to the requested timeline
My understanding is that the bug in question was fixed about 100 times faster than Project Zero's standard disclosure timeline. I don't know what vulnerability report your scenario is referring to, but it certainly is not this one.
> and name-calling the maintainers
Except Google did not "name-call the maintainers" or anything even remotely resembling that. You just made it up, just like GP made up the "demands". It's pretty telling that all these supposed misdeeds are just total fabrications.
> When you publicize a vulnerability you know someone doesn't have the capacity to fix according to the requested timeline, you are simultaneously increasing the visibility of the vulnerability and name-calling the maintainers.
So how long should all bug reporters wait before filing public bugs against open source projects? What about closed source projects? Anyone who works in software knows to ship software is to always have way more things to do than time to do it in. By this logic, we should never make bug reports public until the software maintainers (whether OSS, Apple or Microsoft) has a fix ready. Instead of "with enough eyeballs, all bugs are shallow" the new policy going forward I guess will be "with enough blindfolds, all bugs are low priority".
No, publishing the vulnerability is the right thing to do for a more secure world, because anyone can find this stuff, including nation states that weaponize it. This is a public service. Giving the dev a 90-day pre-warning is a courtesy.
Expecting a reporter to fix your security vulnerabilities for you is entitlement.
If your reputation is harmed by your vulnerable software, then fix the bugs. They didn’t create the hazard, they discovered it. You created it, and acting like you’re entitled to the free labor of those that gave you the heads up is insane, and trying to extort them for their labor is even worse.
It's standard practice for commercially-sponsored software, and it doesn't necessarily fit volunteer maintained software. You can't have the same expectations.
Vulnerabilities should be publicly disclosed. Both closed and open source software are scrutinized by the good and the bad people; sitting on vulnerabilities isn't good.
Consumers of closed source software have a pretty reasonable expectation that the creator will fix it in a timely manner. They paid money, and (generally) the creator shouldn't put the customer in a nasty place because of errors.
Consumers of open source software should have zero expectation that someone else will fix security issues. Individuals should understand this; it's part of the deal for us using software for free. Organizations that are making money off of the work of others should have the opposite of an expectation that any vulns are fixed. If they have or should have any concern about vulnerabilities in open source software, then they need to contribute to fixing the issue somehow. Could be submitting patches, paying a contractor or vendor to submit patches, paying a maintainer to submit patches, or contributing in some other way that betters the project. The contribution they pick needs to work well with the volunteers, because some of the ones I listed would absolutely be rejected by some projects -- but not by others.
The issue is that an org like Google, with its absolute mass of technical and financial resources, went looking for security vulnerabilities in open source software with the pretense of helping. But if Google (or whoever) doesn't finish the job, then they're being a piece of shit to volunteers. The rest of the job is reviewing the vulns by hand and figuring out patches that can be accepted with absolutely minimal friction.
To your point, the beginning of the expectation should be that vulns are disclosed, since otherwise we have known insecure software. The rest of the expectation is that you don't get to pretend to do a nice thing while _knowing_ that you're dumping more work on volunteers that you profit from.
In general, wasting the time of volunteers that you're benefiting from is rude.
Specifically, organizations profiting off of volunteer work and wasting their time makes them an extractive piece of shit.
The entire conflict here is that norms about what's considered responsible were developed in a different context, where vulnerability reports were generated at a much lower rate and dedicated CVE-searching teams were much less common. FFmpeg says this was "AI generated bug reports on an obscure 1990s hobby codec"; if that's accurate (I have no reason to doubt it, just no time to go check), I tend to agree that it doesn't make sense to apply the standards that were developed for vulnerabilities like "malicious PNG file crashes the computer when loaded".
I think the discussion on what standard practice should be does need to be had. This seems to be throwing blame at people following the current standard.
If the obscure codec is not included by default or cannot be triggered by any means other than being explicitly asked for, then it would be reasonable to tag it Won't Fix. If it can be triggered by other means, such as auto file type detection on a renamed file, then it doesn't matter how obscure the feature is; the exploit would affect all.
What is the alternative to a time-limited embargo? I don't particularly like the idea of groups of people having exploits that they have known about for ages that haven't been publicly disclosed. That is the kind of information that finds itself in the wrong hands.
Of course companies should financially support the developers of the software they depend upon. Many do this for OSS in the form of having a paid employee that works on the project.
Specifically, FFMPEG seems to have a problem in that much of its resource limitation comes from alienating contributors. This isn't isolated to just this bug report.
FFMPEG does autodetection of what is inside a file, the extension doesn't really matter. So it's trivial to construct a video file that's labelled .mp4 but is really using the vulnerable codec and triggers its payload upon playing it. (Given ffmpeg is also used to generate thumbnails in Windows if installed, IIRC, just having a trapped video file in a directory could be dangerous.)
It is accurate. This is a codec that was added for archival and digital preservation purposes. It’s like adding a Unicode block for some obscure 4000 year old dead language that we have a scant half dozen examples of writing.
Why is Google deliberately running an AI process to find these bugs if they're just going to dump them all on the FFmpeg team to fix?
They have the option to pay someone to fix them.
They also have the option to not spend resources finding the bugs in the first place.
If they think these are so damn important to find that it's worth devoting those resources to, then they can damn well pay for fixing them too.
Or they can shut the hell up and let FFmpeg do its thing in the way that has kept it one of the https://xkcd.com/2347/ pieces of everyone's infrastructure for over 2 decades.
Google is a significant contributor to ffmpeg by way of VP9/AV1/AV2. It's not like it's a gaping maw of open-source abuse, the company generally provides real value to the OSS ecosystem at an even lower level than ffmpeg (which is saying a lot, ffmpeg is pretty in-the-weeds already).
As to why they bother finding these bugs... it's because that's how Google does things. You don't wait for something to break or be exploited, you load your compiler up with sanitizers and go hunting for bugs.
Yeah this one is kind of trivial, but if the bug-finding infrastructure is already set up it would be even more stupid if Google just sat on it.
So to be clear, if Google doesn't include patches, you would rather they don't make bugs they find in software public so other people can fix them?
That is, you'd rather a world where Google either does know about a vulnerability and refuses to tell anyone, or just doesn't look for them at all over a world where google looks for them and lets people know they exist, but doesn't submit their own fix for it.
Why do you want that world? Why do you want corporations to reduce the already meager amounts of work and resources they put into open source software even further?
I would love to see Google contribute here, but I think that's a different issue.
Are the bug reports accurate? If so, then they are contributing just as if I found them and sent a bug report, I'd be contributing. Of course a PR that fixes the bug is much better than just a report, but reports have value, too.
The alternative is to leave it unfound, which is not a better alternative in my opinion. It's still there and potentially exploitable even when unreported.
But FFmpeg does not have the resources to fix these at the speed Google is finding them.
It's just not possible.
So Google is dedicating resources to finding these bugs and feeding them to bad actors.
Bad actors who might, hypothetically have had the information before, but definitely do once Google publicizes them.
You are talking about an ideal situation; we are talking about a real situation that is happening in the real world right now, wherein the option of Google reports bug > FFmpeg fixes bug simply does not exist at the scale Google is doing it at.
A solution definitely ought to be found. Google putting up a few millionths of a percent of their revenue or so towards fixing the bugs they find in ffmpeg would be the ideal solution here, certainly. Yet it seems unlikely to actually occur.
I think the far more likely result of all the complaints is that Google simply disengages from ffmpeg completely and stops doing any security work on it. I think that would be quite bad for the security of the project. If Google can trivially find bugs at a speed that overwhelms the ffmpeg developers, bad actors can presumably find those same vulnerabilities too. And if they know those vulnerabilities very much exist, but that Google has stopped searching for them at the ffmpeg project's demand, that gives them extremely high motivation to go looking in a place where they can be almost certain they'll find unreported/unknown vulnerabilities. The result would likely be a lot more 0-day attacks involving ffmpeg, which I do not think anyone regards as a good outcome (I would consider "Google publishes a bunch of vulnerabilities ffmpeg hasn't fixed so that everyone knows about them" to be a much preferable outcome, personally).
Now, you might consider that possibility fine - after all, the ffmpeg developers have no obligation to work on the project, and thus to e.g. fix any vulnerabilities in it. But if that's fine, then simply ignoring the reports Google currently makes is presumably also fine, no ?
Many people are already developing and fixing FFmpeg.
How many people are actively looking for bugs? Google, and then the other guys who don't share their findings, but perhaps sell them to the highest bidder. Seems like Google is doing some good work by just picking big, popular open source projects and seeing if they have bugs, even if they don't intend to fix them. And I doubt Google was actually using the LucasArts video format their latest findings were about.
However, in my mind the discussion whether Google should be developing FFmpeg (beyond the codec support mentioned elsewhere in the thread) or other OSS projects is completely separate from whether they should be finding bugs in them. I believe most everyone would agree they should. They are helping OSS in other ways though, e.g. https://itsfoss.gitlab.io/post/google-sponsors-1-million-to-... .
Nobody is demanding anything. Google is just disclosing issues.
This opens up transparency of ffmpeg’s security posture, giving others the chance to fix it themselves, isolate where it’s run or build on entirely new foundations.
All this assuming the reports are in fact pointing to true security issues. Not talking about AI-slop reports.
> The latest episode was sparked after a Google AI agent found an especially obscure bug in FFmpeg. How obscure? This “medium impact issue in ffmpeg,” which the FFmpeg developers did patch, is “an issue with decoding LucasArts Smush codec, specifically the first 10-20 frames of Rebel Assault 2, a game from 1995.”
This doesn't feel like a medium-severity bug, and I think "Perhaps reconsider the severity" is a polite reading. I get that it's a bug either way, but this leaves me with a vague feeling of the ffmpeg maintainer's time being abused.
On the other hand, if the bug doesn't get filed, it doesn't get fixed. Sure, Google could spend some resources on fixing it themselves, but even if they did, would we not likely see a complaint about Google flooding the maintainers with PRs for obscure 30-year-old codec bugs? And isn't a PR even more of a demand on the maintainer's time, because now there's actual code that needs to be reviewed, tests that need to be run, and another person waiting for a response on the other end?
"Given enough eyeballs, every bug is shallow" right? Well, Google just contributed some eyeballs, and now a bug has been made shallow. So what's the actual problem here? If some retro game enthusiast had filed the same but report would that be "abusing" the maintainer's time? I would think not, but then we're saying that a bug report can be "abusive" simply by the virtue of who submits it. And I'm really not sure "don't assign employees to research bugs in your open source dependencies and if you do certainly don't submit bug reports on what you find because that's abusive" is the message we want to be sending to corporations that are using these projects.
> But appearance is operative: a security issue is something that I (as the maintainer) would need to fix regardless of who reports it
I think this is the heart of the issue and it boils off all of the unimportant details.
If it's a real, serious issue, you want to know about it and you want to fix it. Regardless of who reports it.
If it's a real, but unimportant issue, you probably at least want to track it, but aren't worried about disclosure. Regardless of who reports it.
If it's invalid, or AI slop, you probably just want to close/ignore it. Regardless of who reports it.
It seems entirely irrelevant who is reporting these issues. As a software project, ultimately you make the judgment call about what bugs you fix and what ones you don't.
But if it's a real, serious issue without an easy resolution, who is the burden on? It's not that the maintainers wouldn't fix bugs if they easily could. FFmpeg is provided "as is"[0], so everyone should be responsible for their side of things. It's not like the maintainers dumped their software on every computer and forced people to use it. Google should be responsible for their own security. I'm not adamant that Google should share the patch with others, but it would hardly be an imposition to Google if they did. And yes, I really do intend that you could replace Google with any party, big or small, commercial or noncommercial. It's painful, but no one has any inherent obligations to provide others with software in most circumstances.
[0] More or less. It seems the actual language is shied from. Is there a meaningful difference?
But if no bug report is filed, then only google gets the ability to "be responsible for their own security", everyone else either has to independently discover and then patch the bug themselves, or wait until upstream discovers the bug.
In no reasonable reading of the situation can I see how anything Google has done here has made things worse:
1) Beforehand, the bug existed, but was either known by no one, or known only by people exploiting it. The maintainers weren't actively looking at or for this particular bug, and so it may have continued to go undiscovered for another 20 years.
2) Then Google was the only one that knew about it (modulo exploiters) and were the only people that could take any steps to protect themselves. The maintainers still don't know so everyone else would remain unprotected until they discover it independently.
3) Now everyone knows about the issue, and are now informed to take whatever actions they deem appropriate to protect themselves. The maintainers know and can choose (or not) to patch the issue, remove the codec or any number of other steps including deciding it's too low priority in their list of todos and advising concerned people to disable/compile it out if they are worried.
#3 is objectively the better situation for everyone except people who would exploit the issue. Would it be even better if Google made a patch and submitted that too? Sure it would. But that doesn't make what they have done worthless or harmful. And more than that, there's nothing that says they can't or won't do that. Submitting a bug report and submitting a fix don't need to happen at the same time.
It's hard enough convincing corporations to spend any resources at all on contributing to upstream. Dragging them through the mud for not submitting patches in addition to any bug reports they file is in my estimation less likely to get you more patches, and more likely to just get you less resources spent on looking for bugs in the first place.
I wasn't really thinking about the disclosure part, although I probably should have. I was focusing on the patching side of things. I think you're correct that disclosure is good, but in that case, I think it increases the burden of those with resources to collaborate to produce a patch.
Well, it's open source and built by volunteers, so nobody is obligated to fix it. If FFmpeg volunteers don't want to fix it or don't have the time/bandwidth to fix it, then they won't fix it. Like any other bug or CVE in any other open source project. The burden doesn't necessarily need to be on anyone.
They aren't obligated to fix CVEs until they're exploited; then, suddenly, they very much are obligated to fix them, and their image as FLOSS maintainers and as a project is very much tarnished.
The problem isn't Google reporting vulnerabilities. It's Google using AI to find obscure bugs that affect 2 people on the planet, then making a CVE out of it, without putting any effort into fixing it themselves or funding the project. What are the ffmpeg maintainers supposed to do about this? It's a complete waste of everybody's time.
> The latest episode was sparked after a Google AI agent found an especially obscure bug in FFmpeg. How obscure? This “medium impact issue in ffmpeg,” which the FFmpeg developers did patch, is “an issue with decoding LucasArts Smush codec, specifically the first 10-20 frames of Rebel Assault 2, a game from 1995.”
I don't think that's an accurate description of the full scope of the problem. The codec itself is mostly unused, but the code path can be triggered through the file-type probing that ffmpeg does, so a maliciously crafted payload (e.g. any run of ffmpeg that touches user input without disabling this codec) could possibly be exploited.
I feel this comment is far too shallow a take. I would expect that you know better than most of HN exactly how much of a reputation security has as a cost center. Google uses ffmpeg internally; how many millions would they have to spend if they were required to not only create, but maintain, ffmpeg themselves? How significant would that cost be at Google's scale?
I don't agree that the following framing is accurate, but I can mention it because you've already said the important part (about how this issue exists, and merely knowing about it doesn't create required work). But here, by announcing it and registering a CVE, Google is starting the clock. By some metrics it was already running, but the reputational risk clearly was not. This does change priorities, and requires an urgent context switch. Neither is a free action, especially not within FOSS.
I'm someone who believes everyone, individuals and groups alike, has a responsibility to contribute fairly. So I would frame it as Google's behavior giving the appearance of weaponizing their cost center externally, given this is something Google could easily fix, but instead they shirked that responsibility onto unfunded volunteers.
To be clear, I think Google (Apple, Microsoft, etc.) can and should fund more of the OSS they depend on. But this doesn’t change the fact that vulnerability reports don’t create work per se, they just reveal work that the project can choose to act on or not.
Google is already contributing here:
1) dedicating compute resources to continuously fuzzing the entire project
2) dedicating engineering resources to validating the results and creating accurate and well-informed bug reports (in this case, a seriously underestimated security issue)
3) additionally for codecs that Google likely does not even internally use or compile, purely for the greater good of FFMPEG's user base
Needless to say, while I agree Google has a penny to spare to fund FFMPEG, and should (although they already contribute), I do not agree with funding this maintainer.
> - benefiting from the bugs getting fixed, but not contributing to them.
I would be very surprised if Google builds this codec when they build ffmpeg. If you run a/v codecs (like ffmpeg) in bulk, the first thing to do is sandbox the hell out of it. The second thing you do is strictly limit the containers and codecs you'll decode. Not very many people need to decode movies from old LucasArts games, for video codecs, you probably only want mpeg 1-4, h.26x, vp8, vp9, av1. And you'll want to have fuzzed those decoders as best you can too.
Nobody should be surprised that there's a security problem in this ancient decoder. Many of the eclectic codecs were written to mimic how the decoders that shipped with content were written, and most of those codecs were written assuming they were decoding a known good file, because why wouldn't they be. There's no shame, that's just how it is... there's too much to proactively investigate, so someone doing fuzzing and writing excellent reports that include diagnosis, specific location of the errors, and a way to reproduce are providing a valuable contribution.
Could they contribute more? Sure. But even if they don't, they've contributed something of value. If the maintainers can't or don't want to address it, that'd be reasonable too.
They could, but there is really no requirement on them to do so. The security flaw was discovered by Google, but it was not created by them.
Equally there is no requirement on ffmpeg to fix these CVEs nor any other.
And, of course, there is no requirement on end-users to run software from projects which do not consider untrusted-input-validation bugs to be high priority.
> They could, but there is really no requirement on them to do so.
I see this sort of sentiment daily. The sentiment that only what is strictly legal or required is what matters.
Sometimes, you know, you have to recognise that there are social norms and being a good person matters and has intrinsic value. A society only governed by what the written law of the land explicitly states is a dystopia worse than hell.
What's "strictly legal or required" of Google here is absolutely nothing. They didn't have to do any auditing or bug hunting. They certainly didn't have to validate or create a proper bug report, and there's no requirement whatsoever that they tell anyone about it at all. They could have found the bug, found it was being actively exploited, made their own internal patch and sat quietly by while other people remained vulnerable. All of that is well within what is "strictly legal or required".
Google did more than what is "strictly legal or required", and what they did was submit a good and valid bug report. But for some reason we're mad because they didn't do even more. Why? The world is objectively a better place for having this bug report, at least now people know there's something to address.
> And, of course, there is no requirement on end-users to run software from projects which do not consider untrusted-input-validation bugs to be high priority.
What's this even saying?
Then they're free to fork it and never use the upstream again.
It is my understanding that the commenters in FFmpeg's favor believe that Google is doing a disservice by finding these security vulnerabilities, as they burden volunteers with patching, and that Google should either:
1) allow the vulnerabilities to remain undiscovered & unpatched zero-days (stop submitting "slop" CVEs.)
2) supply the patches (though I'm sure the goalposts will then move to the maintainers being upset that they have to merge them.)
3) fund the project (including the maintainers who clearly misunderstand the severity of the vulnerabilities and describe them as "slop") (no thank you.)
What is the mission of Project Zero? Is it to build a vulnerability database, or is it to fix vulnerabilities?
If it's to fix vulnerabilities, it seems within reason to expect a patch. If the reason Google isn't sending a patch is because they truly think the maintainers can fix it better, then that seems fair. But if Google isn't sending a patch because fixing vulns "doesn't scale" then that's some pretty weak sauce.
Maybe part of the solution is creating a separate low priority queue for bug reports from groups that could fix it but chose not to.
If you are deliberately shipping insecure software, you should stop doing that. In ffmpeg's case, that means either patching the bug, or disabling the codec. They refused to do the latter because they were proud of being able to support an obscure codec. That puts the onus on them to fix the bug in it.
I can tell you with 100% certainty that there are undiscovered vulnerabilities in the Linux kernel right now. Does that mean they should stop shipping?
I do think that contributing fuzzing and quality bug reports can be beneficial to a project, but it's just human nature that when someone says "you go ahead and do the work, I'll stand here and criticize", people get angry.
Rather than going off and digging up ten time bombs which all start counting down together, how about digging up one and defusing it? Or even just contributing a bit of funding towards the team of people working for free to defuse them?
If Google really wants to improve the software quality of the open source ecosystem, the best thing they could do is solve the funding problem. Not a lot of people set out to intentionally write insecure code. The only case that immediately comes to mind is the xz backdoor attempt, which again had a root cause of too few maintainers. I think figuring out a way to get constructive resources to these projects would be a much more impressive way to contribute.
This is a company that takes a lot of pride in being the absolute best of the best. Maybe what they're doing can be justified in some way, but I see why maintainers are feeling bullied. Is Google really being excellent here?
The ffmpeg authors aren't "shipping" anything; they're giving away something they make as a hobby with an explicit disclaimer of any kind of fitness for purpose. If someone needs something else, they can pay an engineer to make it for them.
> Providing a real CVE is a contribution, not a burden. The ffmpeg folks can ignore it, since by all indications it's pretty minor.
Re-read the article. There are CVEs and then there are CVEs. This one is the trivial kind, and they're shoving tons of those down the throats of unpaid volunteers while contributing nothing back.
What Google's effectively doing is like a food safety inspection company going to the local food bank to get the food that they operate their corporate cafeteria on just to save a buck, then calling the health department on a monthly basis to report any and all health violations they think they might have seen, while contributing nothing of help back to the food bank.
I have read the article. The expectation for a tool like ffmpeg is that regardless of what kind of file you put into it, it safely handles it.
This is an actual bug in submitted code. It doesn't matter that it's for some obscure codec, it's technically maintained by the ffmpeg project and is fair game for vulnerability reports.
Given that Google is also a major contributor to open-source video, this is more like a food manufacturer making sure that grocery stores are following health code when they stock their food.
Mind you, the grocery store has no obligation to listen to them in this metaphor and is free to just let the report/CVE sit for a while.
This is why many have warned against things like the MIT licence. Yes, it gives you source code and does easily get incorporated into a lot of projects, but it comes at the cost of potential abuse.
Yes, GPL 3 is a lot ideologically but it was trying to limit excessive leeching.
Now that I have opened the flood gates of a 20 year old debate, time to walk away.
Google Project Zero just looks for security issues in popular open source packages, regardless of whether Google itself even uses those packages.
So I'm not sure what GPLv3 really has to do with it in this case; if it was under a "No billion dollar company allowed" non-free-but-source-available license, this same thing would have happened if the project was popular enough for Project Zero to have looked at it for security issues.
The difference is that Google does use it, though. They use it heavily. All of us in the video industry do - Google, Amazon, Disney, Sony, Viacom, or whoever. Companies you may have never heard of build it into their solutions that are used by big networks and other streaming services, too.
But opening security issues here is not related to that in any way. It's an obscure file format Google definitely doesn't use, the security issue is irrelevant to Google's usages of it.
The critique would make sense if Google was asking for ffmpeg to implement something that Google wanted, instead of sending a patch. But they don't actually care about this one, they aren't actually asking for them to fix it for their benefit, they are sending a notice of a security issue that only affects people who are not Google to ffmpeg.
What in GPLv3 or MIT mandates that Google fix this bug and submit a PR rather than simply sending a bug report and walking away? I don't see how this applies at all.
AGPL, with no CLA that lets the owners relicense. Then we'll see if the using corporation fully believes in open source.
There's a reason Google turned into year 2000 Microsoft "it's viral!" re. the AGPL. They're less able to ignore the intent of the license and lock away their changes.
A bunch of people who make era-defining software for free. A labor of love.
Another bunch of people who make era-defining software where they extract everything they can. From customers, transactionally. From the first bunch, pure extraction (slavery, anyone?).
Exactly. The call-out is not "please stop doing security research". It is, "if you have a lot of money to spend on security research, please spend some of it on discovering the bugs, and some on fixing them (or paying us to fix them), instead of all of it on discovering bugs too fast for us to fix them in time".
You get lower chances of getting hacked by a random file on the internet. At Project Zero's level they're also not CVE-seeking - it doesn't even matter at that scale, it's not an independent researcher trying to become known.
I have yet to see one on any project I’ve been attached to that was actually exploitable under real circumstances. But the CVE hunting teams treat them all as if they were.
Is this sarcasm? While it may be true that my mother does not know what ffmpeg is I'm almost positive she interacts with stuff that uses it literally every single day.
It'd be a silly license or condition, but a license that says employees of companies in the S&P 500 can't file bugs without an $X contribution, and can't expect a response in under Y days without a larger one, would be a funny way to combat it. Companies have no problem making software non-free or AGPL when it becomes inconvenient, so maybe they can put up or shut up.
Where in this bug report was there any expectation of a response? They filed a private bug report and have a policy of making private reports public in 90 days whether or not they get a response. How did the OSS world go from "with enough eyes all bugs are shallow" to "filing a bug report is demanding I respond to you"?
I get the idea of publicly disclosing security issues to large well funded companies that need to be incentivized to fix them. But I think open source has a good argument that in terms of risk reward tradeoff, publicly disclosing these for small resource constrained open source project probably creates a lot more risk than reward.
In addition to your point, it seems obvious that disclosure policy for FOSS should be “when patch available” and not static X days. The security issue should certainly be disclosed - when it's responsible to do so.
Now, if Google or whoever really feels like fixing fast is so important, then they could very well contribute by submitting a patch along with their issue report.
> In addition to your point, it seems obvious that disclosure policy for FOSS should be “when patch available” and not static X days.
So when the xz backdoor was discovered, you think it would have been better to sit on that quietly and try to both wrest control of upstream away from the upstream maintainers and wait until all the downstream projects had reverted the changes in their copies before making that public? Personally I'm glad that went public early. Yes there is a tradeoff between speed of public disclosure and publicity for a vulnerability, but ultimately a vulnerability is a vulnerability and people are better off knowing there's a problem than hoping that only the good guys know about it. If a Debian bug starts tee-ing all my network traffic to the CCP and the NSA, I'd rather know about it before a patch is available, at least that way I can decide to shut down my Debian boxes.
> ...then they could very well contribute by submitting a patch along with their issue report.
I don't want to discourage anyone from submitting patches, but that does not necessarily remove all (or even the bulk of) the work from the maintainers. As someone who has received numerous patches to multimedia libraries from security researchers, they still need review, they often have to be rewritten, and most importantly, the issue must be understood by someone with the appropriate domain knowledge and context to know if the patch merely papers over the symptoms or resolves the underlying issue, whether the solution breaks anything else, and whether or not there might be more, similar issues lurking. It is hard for someone not deeply involved in the project to do all of those things.
> it seems obvious that disclosure policy for FOSS should be “when patch available” and not static X days
This is very far from obvious. If Google doesn't feel like prioritising a critical issue, it remains irresponsible not to warn other users of the same library.
If that’s the case why give the OSS project any time to fix at all before public disclosure? They should just publish immediately, no? Warn other users asap.
Why do you think it has to be all or nothing? They are both reasonable concerns. That's why reasonable disclosure windows are usually short but not zero.
Full (immediate) disclosure, where no time is given to anyone to do anything before the vulnerability is publicly disclosed, was historically the default, yes. Coordinated vulnerability disclosure (or "responsible disclosure" as many call it) only exists because the security researchers that practice it believe it is a more effective way of minimizing how much the vulnerability might be exploited before it is fixed.
Unless the maintainers are incompetent or uncooperative, this does not feel like a good strategy. It is a good strategy on Google's side because it is easier for them to manage.
> publicly disclosing these for small resource constrained open source project probably creates a lot more risk than reward.
You can never be sure that you're the only one in the world that has discovered or will discover a vulnerability, especially if the vulnerability can be found by an LLM. If you keep a vulnerability a secret, then you're leaving open a known opportunity for criminals and spying governments to find a zero day, maybe even a decade from now.
For this one in particular: AFAIK, since the codec is enabled by default, anyone who processes a maliciously crafted .mp4 file with ffmpeg is vulnerable. Being an open-source project, ffmpeg has no obligation to provide me secure software or to patch known vulnerabilities. But publicly disclosing those vulnerabilities means that I can take steps to protect myself (such as disabling this obscure niche codec that I'm literally never going to use), without any pressure on ffmpeg to do any work at all. The fact that ffmpeg commits themselves to fixing known vulnerabilities is commendable, and I appreciate them for that, but they're the ones volunteering to do that -- they don't owe it to anyone. Open-source maintainers always have the right to ignore a bug report; it's not an obligation to do work unless they make it one.
Vulnerability research is itself a form of contribution to open source -- a highly specialized and much more expensive form of contribution than contributing code. FFmpeg has a point that companies should be better about funding and contributing to open-source projects that they rely on, but telling security researchers that their highly valuable contribution is not welcome because it's not enough is absurd, and is itself an example of making ridiculous demands for free work from a volunteer in the open-source community. It sends the message that white-hat security research is not welcome, which is a deterrent to future researchers from ethically finding and disclosing vulnerabilities in the future.
As an FFmpeg user, I am better off in a world where Google disclosed this vulnerability -- regardless of whether they, FFmpeg, or anyone else wrote a patch -- because a vulnerability I know about is less dangerous than one I don't know about.
> publicly disclosing these for small resource constrained open source project probably creates a lot more risk than reward.
Not publicly disclosing it also carries risk. Library users get the wrong impression that the library has no vulnerabilities, while numerous reported bugs never appear due to the FOSS disclosure policy.
You are missing the tiny little fact that apparently a large portion of infosec people are of the opinion that insecure software must not exist. At any cost. No shades of gray.
"They could shut down three product lines with an email"
If you (Amazon, in this case) can put it that way, it seems like throwing them 10 or 20 thousand a year would simply be a good insurance policy! Any benefits you might get in goodwill and influence are a bonus.
How do you think Jeff got a $500 million yacht? Not by writing checks.
But on a more serious note, it is crazy that between Google and Amazon they cannot fund them with 50k each per year, so that they can pay people to work on this.
Especially Google, with YouTube; they can very easily pay them more. 100k~200k easily.
I'm just imagining them having 4 or 5 $250K-a-year security workers pumping out endless bug reports for one guy in Norway who works on ffmpeg at night and on weekends for free because he loves open source, and demanding he meet their program deadlines lol
Doubly funny considering new grads who may polish up some UI features or rewrite components for the 10th time will get paid 200-400K TC at these same companies. Evidently these companies value something other than straight labor.
If I had built up 500 million in a savings account to buy a yacht, giving 50k of that to FFmpeg devs would put off my ability to buy a yacht by nearly a whole day.
Boltzmann brain-wise it clearly doesn't make sense to wait that long.
> How do you think Jeff got a 500 million dollars yacht? Not by writing checks.
A rising tide lifts all yachts. If he had written the check, my instinct tells me, he would have enough for two yachts. Goodwill is an actual line item on 10-Qs and 10-Ks. I don't know why companies think it's worth ignoring.
I'm not being dismissive. I understand the imperative of identifying and fixing vulnerabilities. I also understand the detrimental impact that these problems can potentially have on Google.
What I don't understand is the choice to have a public facing project about this. Can anyone shine a light on this?
And pushing forward the idea that "responsible disclosure" doesn't mean the software creator can just sit on a bug for as long as they want and act superior and indignant when the researcher gives up and publishes anyway because the creator is dragging their ass.
Project Zero's public existence came out of the post-Snowden period where Google was publicly pissed at the NSA/etc for spying on them (e.g. by tapping their fiber links).
A lot of their research involves stuff they personally benefit from if it is secure: ffmpeg, libxml2, various kinds of mobile device firmware, Linux kernels and userspace components, you name it.
Their security team gaining experience on other projects can teach them some more diversity in terms of (malware) approaches and vulnerability classes, which can in turn be used to secure their own software better.
For other projects there's some vanity/reputation to be gained. Having some big names with impressive resumes publicly talk about their work can help attract talent.
Lastly, Google got real upset that the NSA spied on them (without their knowledge, they can't help against warrants of course).
Then again, there's probably also some Silicon Valley bullshit money being thrown around. Makes you wonder why they don't invest a little bit more to pay someone to submit a fix.
I would imagine it's mostly a PR/marketing thing. That way the researchers can point to being part of something other people know about, and Google gets positive PR (though maybe not in this case) for spending resources on making software in general more secure.
Unfortunately, nobody not on Twitter can see anything except the first message, which just appears to be some corporate-deflection-speak.
It’s a trillion-dollar company. I’m sure they could rifle through their couch cushions and find more than enough money and under-utilised devs to contribute funding or patches.
Nice to see this coming from Google. Some other current and former Google employees were unfortunately not as professional in their interactions on Twitter regarding this issue.
They should set up a foundation/LLC if they don't have one already and require a support contract for fixing any bugs in niche codecs. Target the 95%-98% use cases for "free" work. If someone gives them a CVE on something ancient, just note it with an issue tracker and that the problem is currently unsponsored. Have default build flags to omit all the ancient/buggy stuff. If nobody is willing to pay to fix all the ancient crap, then nobody should be using it. But if someone is willing to pay, then it gets fixed.
The vulnerability in question is a Use After Free. Google used AI to find this bug; it would've taken them 3 seconds to fix it.
Burning cash to generate spam bug reports to burden volunteer projects when you have the extra cash to burn to just fix the damn issue leaves a very sour taste in my mouth.
If it takes 3 seconds to fix it, then how is this some massive burden on the maintainers? The bug report pointed to the relevant lines, the maintainers are the most familiar with the code and it probably would have taken them 1.5 seconds to not only fix it, but validate the fix. It probably took more time to write up the complaint about the bugs than to implement the fix.
Notably, the vulnerability is also in a part which isn't included by default and nobody uses. I'm not sure that even warrants a CVE? A simple bug report would have probably been fine. If they think this is really a CVE, a bug fix commit would have been warranted.
One problem here is that CVE scoring is basically entirely bugged, something scored 8.7 could be an RCE exploit or a "may be able to waste CPU" issue.
That's the difference between "it may or may not be that there's someone who cares" versus "no one should be running this software anywhere in the general vicinity of untrusted inputs".
> One problem here is that CVE scoring is basically entirely bugged, something scored 8.7 could be an RCE exploit or a "may be able to waste CPU" issue.
+100000
My favorite 8.x or higher CVEs are the ones where you would have to take untrusted user input, bypass all the standard ways to ingest and handle that type of data, and pass it into some internal function of a library. And then the end result is that a regex call becomes more expensive.
You’re right about scoring, at least largely. Let’s not conflate the CVE system and the CVSS system, though. They are related but distinct. CVE is just an identifier system.
It is included in most builds of ffmpeg, for example in most Linux packages or in the Windows build linked on ffmpeg.org that I use. But yeah, it's a very niche format that nobody uses.
AIUI there's no such thing as "really a CVE". A CVE is merely a standardized identifier for a bug so you can call it "CVE-2025-XXXXX" rather than "that use-after-free Google found in ffmpeg with AI." It doesn't imply anything else about the bug, except that it may impact security. The Linux kernel assigns one to every bugfix that may impact security (which is most kernel bugs) to avoid controversy about whether they should be assigned.
Use After Free takes 3 seconds to fix if you defer free until the end of the program. If you have to do something else, or you don't want to leak memory, then it probably takes longer than 3 seconds.
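For readers who haven't dealt with this bug class, here is a use-after-free in miniature, in illustrative C that has nothing to do with FFmpeg's actual code:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct { uint8_t *buf; size_t len; } Frame;

    void on_decode_error(Frame *f) {
        free(f->buf);               /* buffer released on the error path... */
    }

    void decode(Frame *f) {
        on_decode_error(f);
        memset(f->buf, 0, f->len);  /* ...but still dereferenced: use-after-free */
    }

    /* The "3 second" fix: clear the stale pointer so later uses can check it. */
    void on_decode_error_fixed(Frame *f) {
        free(f->buf);
        f->buf = NULL;
        f->len = 0;
    }

Even in this toy version, the fix is only correct if every other code path checks f->buf before using it. Verifying that is the lifetime analysis that takes longer than 3 seconds.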
Probably the right solution is to disable this codec. You should have to make a choice to compile with it; although if you're running ffmpeg in a context where security matters, you really should be hand picking the enabled codecs anyway.
Yes - more than a sour taste. This is hideous behavior. It is the polar opposite of everything intelligent engineers have understood regarding free-and-open software for decades.
I don't understand the rationale for announcing that a vulnerability in project X was discovered before the patch is released. I read the Project Zero blogspot announcement but it doesn't make much sense to me. Google claims this is to help downstream users, but that feels like a largely non-issue to me.
If you announce a vulnerability (unspecified) is found in a project before the patch is released doesn't that just incentivize bad actors to now direct their efforts at finding a vulnerability in that project?
Maybe for a small project? I think the difference here is rather minimal. Everybody "knows" code often has security bugs so this announcement wouldn't technically be new information. For a large project such as ffmpeg, I doubt there is a lack of effort in finding exploits in ffmpeg given how widely it is used.
I don't see why actors would suddenly reallocate large amounts of effort especially since a patch is now known to be coming for the issue that was found and thus the usefulness of the bug (even if found) is rather limited.
The message sounds like a warning from a human who is fed up with feeling taken advantage of. They see the profit being made, and they're not even getting drops of it.
If this isn't addressed, it makes this repo a target for actors that don't care about the welfare of Amazon, Google, etc.
It seems quite predictable that someone will see this as a human weakness and try to exploit it, my question is whether we'll get them the support they need before the consequence hits, or whether we'll claim this was a surprise after the fact.
Watch places like Amazon and Google suddenly stop updating and try to find alternatives.
Like how Apple stopped using up-to-date GNU tools in 2008 because of GPLv3. That move showed me then that Apple did not want you to use your computer as your computer.
Well, to continue that timeline. "Big Tech" freezes their version to the last gpl'ed version, and each commences their own (non-trivial effort) to make their own version (assuming the last gpl'ed version was not feature-complete for all their future uses).
And of course, they won't share with each other. So another driver would be fear of a slight competitive disadvantage vs other-big-tech-monstrosity having a better version.
Now, in this scenario, some tech CEO, somewhere has this brilliant bright spark.
"Hey, instead of dumping all these manhours & resources into DIYing it, with no guarantee that we still won't be left behind - why don't we just throw 100k at the original oss project. We'll milk the publicity, and ... we won't have to do the work, and ... my competitors won't be able to use it"
Sometimes it's hard: for many kinds of projects, I don't think anyone would use them if they were not open source (or at least source-available). Just like I wouldn't use a proprietary password manager, and I wouldn't use WhatsApp if I had a choice. Rather I use Signal because it's open source.
How to get people to use your app if it's not open source, and therefore not free?
For some projects, it feels better to have some people use it even if you did it for free than to just not do it at all (or do it and keep it in a drawer), right?
I am wondering, I haven't found a solution. Until now I've been open sourcing stuff, and overall I think it has maybe brought more frustration, but on the other hand maybe it has some value as my "portfolio" (though that's not clear).
But it can still be profited from by not-so-big corps, so I'm still working for free.
Also I have never received requests from TooBigTech, but I've received a lot of requests from small companies/startups. Sometimes it went as far as asking for a permissive licence, because they did not want my copyleft licence. Never offered to pay for anything though.
Corporations extract a ton of value from projects like ffmpeg. They can either pay an employee to fix the issues or set up some sort of contract with members of the community to fix bugs or make feature enhancements.
> “The position of the FFmpeg X account is that somehow disclosing vulnerabilities is a bad thing. Google provides more assistance to open source software projects than almost any other organization, and these debates are more likely to drive away potential sponsors than to attract them.”
This position likely to drive away maintainers. Generally the maintainers need these projects less than the big companies that use them. I'm not sure what Google's endgame is
Is it unreasonable to ask that if a massive company funds someone to find a CVE in an open source project, they should also submit a patch? Google is a search company. Seems kind of... evil... to pay your devs to find holes in something with nothing to do with searching, then refuse to pay them to fix the problem they noticed.
It’s a reproducible use-after-free in a codec that ships by default with most desktop and server distributions.
The recent iOS zero-day (CVE-2025-43300) targeted the rarely used DNG image format. How long before this FFmpeg vulnerability is exploited to compromise legacy devices in the wild, I wonder?
I’m not a fan of this grandstanding for arguably questionable funding. (I surely would not fund those who believe these issues are slop.) I’d like to think most contributors already understand the severity and genuinely care about keeping FFmpeg secure.
Bugs in little-used corners of the project are a massive red flag, that's how some of the most serious OpenSSL bugs have emerged. If the code is in there, and someone can trigger it with a crafted input, then it is as bad as any other bug.
Honestly, I kind of think that ffmpeg should just document that it's not secure and that you're expected to run it in a sandbox if you plan to use it on possibly-malicious input. All the big cloud users and browsers are doing this already, so it would hardly even change anything.
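For a one-off job, that sandboxing can be as lightweight as something like the following. firejail is just one option and the directory name is made up; bubblewrap, containers, or seccomp wrappers serve the same purpose:

    # stage the untrusted file in a throwaway directory
    mkdir -p /tmp/ffmpeg-job && cp untrusted.mp4 /tmp/ffmpeg-job/
    cd /tmp/ffmpeg-job
    # no network, and the sandbox's home is the throwaway directory
    firejail --net=none --private=/tmp/ffmpeg-job \
        ffmpeg -i untrusted.mp4 -c:v libx264 out.mp4

A compromised decoder is then stuck in a box with no network access and nothing of value to read.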
ffmpeg is complaining that security bugs are such a drag that it's driving people away from their hobby/passion projects. Well, if fixing security bugs isn't your passion, why not just say that? Say it's not your priority, and if someone else wants it to be a priority, they can write the patches. Problem solved?
There are some rhetorical sleights of hand here. If Google does the work to find and report vulnerabilities, great. It's a nice contribution regardless of who provides it. The OSS developer can ignore it and accept whatever consequences follow. They are not forced to fix it except by themselves.
The many large corporations should be funding the tools they depend on, to increase time allocations and thus the ability to be responsive, but this isn't an either/or. This type of thinking erodes the communities of such projects and the minds of the contributors.
FWIW I've totally been that developer trapped in that perspective, so I empathize; there are simply better mental stances available.
I would suggest that FFmpeg spin up a commercial arm that gets support contracts with Google, Amazon, etc., but with a tight leash so that it does not undermine the open source project. It would need clean guidance as to what the commercial arm does and does not do.
While I don't think FFmpeg's response is a great one ("fund us or stop submitting bugs"); I think Google is being pretty shitty here. For a company that prides itself in its engineering prowess and contributions to the OSS community (as they like to remind me all the time) to behave this way is just all around shitty.
Submit the bug AND the patch and be done with it; don't make it someone else's problem when it's an OSS library/tool. A for-profit vendor? Absolutely. But this? Hell naw.
In general it is not good when companies get too much power. See how Shopify is eating away at the Ruby infrastructure right now, after Ruby Central panicked when Shopify blackmailed the community by taking away funding. Of course it is their money, but the whole ecosystem suddenly submitting to one huge company really, really sucks to no end. Hobbyists are at their weakest here.
If it just were that simple.
The reality is that this is a very slippery slope and you won’t get a support contract just like that with a “tight leash”
I understand ffmpeg being angry at the workload but this is how it is with large open source projects. Ffmpeg has no obligation to fix any of this. Open source is a gift and is provided as is. If Google demanded a fix I could see this being an issue. As it is right now it just seems like a bad look. If they wanted compensation then they should change the model, there's nothing wrong with that. Google found a bug, they reported it. If it's a valid bug then it's a valid bug end of story. Software owes it to its users to be secure, but again it's up to the maintainers if they also believe that. Maybe this pushes Google to make an alternative, which I'd be excited for.
It doesn’t matter if it affects their business or not. They found an issue and they reported it. Ffmpeg could request that they report it privately perhaps. Google has a moral duty to report the bug.
Software should be correct and secure. Of course this can’t always be the case but it’s what we should strive for. I think that’s baseline
> ffmpeg owes me nothing. I haven't paid them a dime.
That is true. At the same time Google also does not owe the ffmpeg devs anything either. It applies both ways. The whole "pay us or we won't fix this" makes no sense.
> Google also does not owe the ffmpeg devs anything either.
Then they can stop reporting bugs with their asinine one-size-fits-all "policy." It's unwelcome and unnecessary.
> It applies both ways.
The difference is I do not presume things upon the ffmpeg developers. I just use their software.
> The whole "pay us or we won't fix this" makes no sense.
Pay us or stop reporting obscure bugs in unused codecs found using "AI" scanning, or at least, if you do, then change your disclosure policy for those "bugs." That's the actual argument and is far more reasonable.
> That does not impact their business or their operations in any way whatsoever.
I don't know what tools and backends they use exactly, but working purely by statistics, I'm sure some place in Google's massive cloud compute empire is relying on ffmpeg to process data from the internet.
I read this as: nobody wants CVEs open on their product, so you might feel forced to fix them. I find it more understandable if we talk about web frameworks: WordPress doesn't want security CVEs open for months or years, or users would be upset that they introduce new features while neglecting safety.
I am a nobody, and whenever I find a bug I work extra to attach a fix to the same issue. Google should do the same.
I don't consider a security issue to be a "standard bug." I need to look at it, and [maybe] fix it, regardless of who reported it.
But in my projects, I have gotten requests (sometimes, demands) that I change things like the published API (a general-purpose API), to optimize some niche functionality for one user.
I'll usually politely decline these, and respond with an explanation as to why, along with suggestions for them to add it, after the fact.
It’s a security issue for a stream type almost nobody uses. It’s a little like saying your graphics program in 2025 is exploitable by a malformed PCX file, or your music player has a security bug only when playing an Impulse Tracker module.
Sure, triage it. It shouldn’t be publicly disclosed within a week of the report though, because the fix is still a relatively low priority.
Security is adversarial. It doesn't matter whether the users intentionally use the vulnerable codec. What matters is whether an adversary can make the users use it. Given the codec is compiled in by default on Ubuntu, and given that IIUC the bug would already be triggered by ffmpeg's file format probing, it seems pretty likely that the answer is yes.
Yes, security is by definition adversarial. Thanks for the most basic lesson.
How are you getting ffmpeg to process a stream or file type different from the one you’re expecting? Most use cases of ffmpeg are against known input and known output types. If you’re just stuffing user-supplied files through your tools, then yes you have a different threat model.
> How are you getting ffmpeg to process a stream or file type different from the one you’re expecting?
... That is how ffmpeg works? With default settings it auto-detects the input codec from the bitstream, and the output codec from the extension. You have to go out of your way to force the input codec and disable the auto-detection, and I don't think most software using ffmpeg as a backend would force the user to manually do it, because users can't be trusted to know those details.
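A concrete illustration with the stock CLI (file names made up): input options placed before -i pin the demuxer and decoder instead of trusting the probe.

    # default: ffmpeg probes the bitstream and picks whatever decoder matches,
    # including any obscure one compiled into the build
    ffmpeg -i attack.avi out.mp4

    # pinned: force the container format and video decoder up front, so a file
    # that isn't plain AVI/H.264 fails instead of reaching a niche codec
    ffmpeg -f avi -c:v h264 -i attack.avi out.mp4

And as noted above, most frontends built on ffmpeg rely on the first form, because users can't be expected to know those details.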
If no one uses the stream type, then not fixing the bug won't hurt.
The people who do use the stream type are at risk, and have been at risk all along. They need to stop using the stream type, or get the bug fixed, or triage the bug as not exploitable.
People volunteer to make billionaires even more profit - crazy world. Who even has the time and resources to volunteer so much??? I don't get all this at all.
FFmpeg should just dual license at this point. If you're wanting shit fixed. You pay for it (based on usage) or GTFO. Should solve all of the current issues around this.
You mean, Google reports a bug, and ffmpeg devs say "GTFO"? Let's assume this is a real bug: is that what you would want the ffmpeg developers to say to Google?
I absolutely understand the issue that a filthy-rich company tries to leech off of real unpaid humans. I don't understand how that issue leads to "GTFO, we won't fix these bugs". That makes no sense to me.
And maybe it's fine to have AI-generated articles that summarize Twitter threads for HN, but this is not a good summarization of the discussion that unfolded in the wake of this complaint. For one, it doesn't mention a reply from Google security, which you would think should be pretty relevant here.
It's of excellent quality. They've made things about as easy for the FFmpeg devs as possible without actually fixing the bug, which might entail legal complications they want to avoid.
Open source projects want people to use their work. Google wants bugs -- especially security ones -- found and fixed fast. Both goals make sense. The tension starts when open source developers expect payment for fixes, and corporations like Google expect fixes for free.
Paying for bug fixes sounds fair, but it risks turning incentives upside down. If security reports start paying the bills, some maintainers might quietly hope for more vulnerabilities to patch. That's a dangerous feedback loop.
On the other hand, Google funding open source directly isn't automatically better. Money always comes with strings. Funding lets Google nudge project priorities, intentionally or not -- and suddenly the "open" ecosystem starts bending toward corporate interests.
There's no neat solution. Software wants to be used. Bugs want to be found and fixed. But good faith and shared responsibility are supposed to be the glue that holds the open source world together.
Maybe the simplest fix right now is cultural, not technical: fewer arguments on Twitter, more collaboration, and more gratitude. If you rely on open source, donate to the maintainers who make your life easier. The ecosystem stays healthy when we feed it, not when we fight over it.
Amusing. I suppose the intended optimal behavior is for Google to fix it internally and then run the public PR. Less optimal for us normal users, since the security issue will be visible publicly in the PR until it is merged, though it won't affect Google (who will carry the fixed code before disclosure).
Forking puts you, as Google, in another hell: now you have to pay someone to maintain your fork! Maybe for a project that's well and fully complete that's OK. But something like FFmpeg is gonna get updated all the time, as the specs for video codecs are tweaked or released.
Their choice becomes to:
- maintain a complex fork, constantly integrating from upstream;
- or pin to some old version and maybe go through a Herculean effort to rebase when something they truly must have merges upstream;
- or genuinely fork it and employ an expert in this highly specific domain to write what will often end up being parallel features and security patches to mainline FFmpeg.
Or, of course, pay someone in doing OSS to fix it in mainline. Which is the beauty of open source; that’s genuinely the least painful option, and also happens to be the one that benefits the community the most.
Any time I have tried to fix a bug in an open source project I was immediately struck down with abusive attitudes about how I didn't do something exactly the way they wanted it that isn't really documented.
If that's what I have to expect, I'd rather not even interact with them at all.
I don't think this is what typically happens. Many of my bug reports were handled.
For instance, I reported to the xorg-bug tracker that one app behaved oddly when I did --version on it. I was batch-reporting all xorg-applications via a ruby script.
Alan Coopersmith, the elderly hero that he is, fixed this not long after my report. (It was a real bug; granted, a super-small one, but still.)
I could give many more examples here. (I don't remember the exact date but I think I reported this within the last 3 years or so. Unfortunately reporting bugs in xorg-apps is ... a bit archaic. I also stopped reporting bugs to KDE because I hate bugzilla. Github issues spoiled me, they are so easy and convenient to use.)
I feel you, and that's a different issue than the one in this thread, which is that, in general, if maintainers ignore bug reports, their projects will be forked and the issue fixed anyway, just not in the main project.
Is there really slop here though? It sounds like the specific case identified was a real use after free in an obscure file format but which is enabled by default.
If it was slop they could complain that it was wasting their time on false or unimportant reports, instead they seem to be complaining that the program reported a legitimate security issue?
This is dumb. Obscurity doesn’t create security. It’s unfortunate if ffmpeg doesn’t have the money to fix reported bugs but that doesn’t mean they should be ignorant of them. I don’t see any entitlement out of Google either - I expected this article would have a GH issue thread with a whiny YouTube engineer yelling at maintainers.
The first thing you can do is actually read the article. The question is not about the security reports but Google's policy on disclosing the vulnerability after x days. It works for crazy lazy corps. But not for OSS projects.
In practice, it doesn't matter all that much whether the software project containing the vulnerability has the resources to fix it: if a vulnerability is left in the software, undisclosed to the public, the impact to the users is all the same.
I, and I think most security researchers do too, believe that it would be incredibly negligent for someone who has discovered a security vulnerability to allow it to go unfixed indefinitely without even disclosing its existence. Certainly, ffmpeg developers do not owe security to their users, but security researchers consider that they have a duty to disclose them, even if they go unfixed (and I think most people would prefer to know an unfixed vulnerability exists than to get hit by a 0-day attack). There's gotta be a point where you disclose a vulnerability, the deadline can never be indefinite, otherwise you're just very likely allowing 0-day attacks to occur (in fact, I would think that if this whole thing never happened and we instead got headlines in a year saying "GOOGLE SAT ON CRITICAL VULNERABILITY INVOLVED IN MASSIVE HACK" people would consider what Google did to be far worse).
To be clear, I do in fact think it would be very much best if Google were to use a few millionths of a percent of their revenue to fund ffmpeg, or at least make patches for vulnerabilities. But regardless of how much you criticize the lack of patches accompanying vulnerability reports, I would find it much worse if Google were to instead not report or disclose the vulnerability at all, even if they did so at the request of developers saying they lacked resources to fix vulnerabilities.
Agreed that obscurity is not security. However we don't want to make it easy for hackers to get a catalog of vulnerabilities to pick and choose from. I think the issue is public disclosure of vulnerabilities after a deadline. The hobbyists can't just keep up.
Google leveraging AI to spam ffmpeg devs with bugs that range from real to obscure to wrongly reported may be annoying. But even then I still don't think Google is to be held accountable for reporting bugs nor is it required to fix bugs. Note: I do think Google should help pay for costs and what not. If they were a good company they would not only report bugs but also have had developers fix the bugs, but they are selfish and greedy, everyone knows that. Even then they are not responsible for bugs in ffmpeg. And IF the bug report is valid, then I also see no problem.
The article also confuses things. For instance:
"Many in the FFmpeg community argue, with reason, that it is unreasonable for a trillion-dollar corporation like Google, which heavily relies on FFmpeg in its products, to shift the workload of fixing vulnerabilities to unpaid volunteers"
How could Google do that? It is not Google's decision. That is up to the volunteers. If they refuse to fix bugs reported by Google then this is fine. But it is THEIR decision, not Google's.
"With this policy change, GPZ announces that it has reported an issue on a specific project within a week of discovery, and the security standard 90-day disclosure clock then starts, regardless of whether a patch is available or not."
Well, many opinions here. I think ALL bugs and exploits should INSTANTLY AND WITHOUT ANY DELAY be made fully transparent and public. I understand the other side of the coin too, bla bla we need time to fix it bla bla. I totally understand it. Even then I believe the only truthful, honest way to deal with this is 100% transparency at all times. This includes when there are negative side effects too, such as open holes. I believe in transparency, not in secrecy. There can not be any compromise here IMO.
"Many volunteer open source program maintainers and developers feel this is massively unfair to put them under such pressure when Google has billions to address the problem."
So what? Google reports issues. You can either fix them or not. Either way is a strategy. It is not Google's fault when software can be exploited, unless they wrote the code. Conversely, the bug or flaw would still exist in the code EVEN IF GOOGLE DID NOT REPORT IT. So I don't understand this part. I totally understand the issue of Google being greedy, but this here is not solely about Google's greed. This is also about how a project deals with (real) issues (if they are not real then you can ask Google why they send out so much spam).
That Google abuses AI to spam down real human beings is evil and shabby. I am all for ending Google on this planet - it does so much evil. But either it is a bug, or not. I don't understand the opinion of ffmpeg devs "because it is Google, we want zero bug reports". That just makes no sense.
"The fundamental problem remains that the FFmpeg team lacks the financial and developer resources to address a flood of AI-created CVEs."
Well, that is more an issue in how to handle Google spamming down people. Sue them in court so that they stop spamming. But if it is a legit bug report, why is that a problem? Are ffmpeg devs concerned about the code quality being bad? If it is about money then even though I think all of Google's assets should be seized and the CEOs that have done so much evil in the last 20 years be put to court, it really is not their responsibility to fix anything written by others. That's just not how software engineering works; it makes no sense. It seems people confuse ethics with responsibilities here. The GPL doesn't mandate code fixing to be done; it mandates that if you publish a derivative etc... of the code, that code has to be published under the same licence and made available to people. That's about it, give or take. It doesn't say corporations or anyone else HAS to fix something.
"On the other hand, security experts are certainly right in thinking that FFmpeg is a critical part of the Internet’s technology framework and that security issues do need to be made public responsibly and addressed."
I am all for that too, but even stricter - all security issues are to be made public instantly, without delay, fully and completely. I went to open source because I got tired of Microsoft. Why would I want to go back to evil? Not being transparent here is no valid excuse IMO.
"The reality is, however, that without more support from the trillion-dollar companies that profit from open source, many woefully underfunded, volunteer-driven critical open-source projects will no longer be maintained at all."
Wait - so it is Google's fault if projects die due to lack of funding? How does that explanation work?
You can choose another licence model. Many choose BSD/MIT. Others choose GPL. And so forth.
"For example, Wellnhofer has said he will no longer maintain libxml2 in December. Libxml2 is a critical library in all web browsers, web servers, LibreOffice and numerous Linux packages. We don’t need any more arguments; we need real support for critical open source programs before we have another major security breach."
Yes, that is a problem - the funding part. I completely agree. I still don't understand the "logic" of trying to force corporations to do so when they are not obliged. If you don't want corporations to use your code, specify that in the licence. The GPL does not do that. I am confused about this "debate" because it makes no real sense to me from an objective point of view. The only part that I can understand pisses off real humans is when Google uses AI as a pester-spam attack orgy. Hopefully a court agrees and splits up any company with more than 100 developers into smaller entities the moment they use AI to spam real human beings.
It's a special kind of irony to post AI slop complaining about someone's AI slop that isn't actually AI slop, but just devs whining about being expected to maintain their code instead of being able to extort the messengers into doing the work for them.
“How dare ffmpeg be so arrogant!
Don’t they know who we are?
Fork ffmpeg and kill the project! I grant a budget of 30 million to crush this dissent! Open source projects must know who’s boss! I’ll stomp em like a union!”
…. overheard at a meeting of CEO and CTO at generic evil mega tech corp recently.
They obviously need to be reminded that the only reason Google has to care about FLOSS projects is when they can effectively use them to create an advertising panopticon under the company's complete control.
They probably want to drown you in CVEs to force deprecation on the world and everybody into their system like they do with everything else they touch.
Just mark CVEs as bugs and get to them when you can. In this case, if Google doesn't like it, then so be it. It'll get fixed eventually. Don't like how long it takes? Pay someone to contribute back. Until then, hurry up and wait.
Please bro, please, fix our bugs bro, just this one bug bro, last one I swear, you and I will make big money, you are the best bro, I love you bro.
-- big tech companies
Google might be aiming to replace ffmpeg as the world's best media processor. Remember how Jia Tan (under different names) flooded xz with work before stepping up as a maintainer.
Google, through YouTube and YouTube TV, already runs one of the most significant video processing lines of business in the world. If they had any interest in supplanting FFmpeg with their own software stack, they wouldn't need to muck around with CVEs to do so.
FFmpeg should stop fixing security bugs reported by Google, MS, Amazon, Meta etc. and instead wait for security patches from them. If FFmpeg maintainers will leave it exposed, those companies will rush to fixing it, because they'd be screwed otherwise. Every single one of them is dependent on FFmpeg exactly as shown in https://xkcd.com/2347/
Because they are making more money in profit than some mid-sized American cities' economies do in a year while contributing nothing back. If they don't want massive security vulnerabilities in their services using FFmpeg, maybe they need to pony up about .1 seconds' worth of their quarterly earnings to the project either in cash or in manpower.
It's not FFmpeg's problem if someone uses a vulnerability to pwn YouTube, it's Google's problem.
Also, in the article, they mention that Google's using AI to look for bugs and report them, and one of them that it found was a problem in the code that handles the rendering of a few frames of a game from 1995. That sort of slop isn't helping anyone. It's throwing the signal-to-noise ratio of the bug filings way the hell off.
> Many in the FFmpeg community argue, with reason, that it is unreasonable for a trillion-dollar corporation like Google, which heavily relies on FFmpeg in its products, to shift the workload of fixing vulnerabilities to unpaid volunteers.
That's capitalism, they need to quit their whining or move to North Korea. /s The whole point is to maximize value to the shareholders, and the more work they can shove onto unpaid volunteers, the move money they can shove into stock buybacks or dividends.
The system is broken. IMHO, there oughta be a law mandating reasonable payments from multi-billion-dollar companies to open source software maintainers.
Not too fond of maintainers getting too uppity about this stuff. I get that it can be frustrating to receive bug report after bug report from people who are unwilling or unable to contribute to the code base, or at the very least to donate to the team.
But the way I see it, a bug report is a bug report, no matter how small or big the bug or the team, it should be addressed.
I don’t know, I’m not exactly a pillar of the FOSS community with weight behind my words.
When you already work 40+ hours a week and big companies suddenly start an AI snowblower that shoots a dozen extra hours of work every week at you without doing anything to balance that (like, for instance, also opening PRs with patches that fix the bugs), the relationship starts feeling like being an unpaid employee of their project.
What's the point of just showering these things with bug reports when the same tool (or a similar one) can also apparently fix the problem too?
The problem with security reports in general is security people are rampant self-promoters. (Linus once called them something worse.)
Imagine you're a humble volunteer OSS developer. If a security researcher finds a bug in your code they're going to make up a cute name for it, start a website with a logo, Google is going to give them a million dollar bounty, they're going to go to Defcon and get a prize and I assume go to some kind of secret security people orgy where everyone is dressed like they're in The Matrix.
Nobody is going to do any of this for you when you fix it.
Except that the only people publicizing this bug were the people running the ffmpeg Twitter account. Without them it would have been one of thousands of vulnerabilities reported with no fanfare, no logos, and no conference talks.
Doesn't really fit with your narrative of security researchers as shameless glory hounds, does it?
How do they know that next week it's not going to be one of those 10 page Project Zero blog posts? (Which like all Google engineer blog posts, usually end up mostly being about how smart the person who wrote the blog post is.)
Note FFmpeg and cURL have already had maintainers quit from burnout from too much attention from security researchers.
> it can be frustrating to receive bug report after bug report from people
As the article states, these are AI-generated bug reports. So it's a trillion-dollar company throwing AI slop over the wall and demanding a 90-day turn around from unpaid volunteers.
That is completely irrelevant; the gross part is that (if true) they are demanding that these be fixed in a given time. Sounds like the epitome of entitlement to me, to say the least.
No one is demanding anything, the report itself is a 90 day grace period before being publicly published. If the issues are slop then what exactly is your complaint?
It’s a reproducible use-after-free in a codec that ships by default with most desktop and server distributions. It can be leveraged in an exploit chain to compromise a system.
I'm not a Google fan, but if the maintainers are unable to understand that, I welcome a fork.
There is a convergence of very annoying trends happening: more and more reports are garbage found and written using AI, with an impact which is questionable at best; the way CVEs are published and classified is idiotic; and platforms funding vulnerability research, like Google, are more and more hostile to projects, leaving very little time to actually work on fixes before publishing.
This is leading to more and more open source developers throwing the towel.
You could argue that, but I think that a bug is the software failing to do what was specified, or what it promised to do. If security wasn't promised, it's not a bug.
Which is exactly the case here. This CVE is for a hobby codec written to support digital preservation of some obscure video files from the '90s that are used nowhere else. No security was promised.
They are not published in project bug trackers and are managed completely differently, so no, personally, I don't view CVEs as bug reports. Also, please don't distort what I say and omit part of my comment, thank you.
Some of them are not even bugs in the traditional sense of the word but expected behaviours which can lead to insecure side effects.
It seems like you might misunderstand what CVEs are? They're just identifiers.
This was a bug, which caused an exploitable security vulnerability. The bug was reported to ffmpeg, over their preferred method for being notified about vulnerabilities in the software they maintain. Once ffmpeg fixed the bug, a CVE number was issued for the purpose of tracking (e.g. which versions are vulnerable, which were never vulnerable, which have a fix).
Having a CVE identifier is important because we can't just talk about "the ffmpeg vulnerability" when there have been a dozen this year, each with different attack surfaces. But it really is just an arbitrary number, while the bug is the actual problem.
I'm not misunderstanding anything. CVE involves a third party and it's not just a number. It's a number and an evaluation of severity.
Things which are usually managed inside a project now have a visibility outside of it. You might justify it as you want like the need to have an identifier. It doesn't fundamentally change how that impacts the dynamic.
Also, the discussion is not about a specific bug. It's a general discussion regarding how Google handles disclosure in the general case.
Credit to Wine and CrossOver (?) for years and years of work as well.
Sure, but the parent’s comment hits on something perhaps. All the tech giants contribute more haphazardly and for their own internal uses.
Valve does seem to be in a somewhat rare position of making a proper Linux distro work well with games. Google’s Chromebooks don’t contribute to the Linux ecosystem in the same holistic fashion, it seems.
I upstreamed a 1-line fix, plus tests, at my previous company. I had to go through a multi-month process of red tape and legal reviews to make it happen. That was a discouraging experience to say the least.
My favorite is when, while you were working through all that, the upstream decides they need a CLA. And then you have to go through another round of checking to see if your company thinks it's ok for you to agree to sign that for a 1-line change.
Certainly easier to give a good bug report and let upstream write the change, if they will.
In this scenario does your employer have strong controls around whether you can write hobby code on your own time?
Generally yes. Or yes, you could just do it yourself in your free time.
I've literally had my employer's attorneys tell me I can't upstream patches because it would put my employer's name on the project, and they don't want the liability.
No, it didn't help giving them copies of licenses that have the usual liability clauses.
It seems a lot of corporate lawyers fundamentally misunderstand open source.
Corporate counsel will usually say no to anything unusual because there's no personal upside for them to say yes. If you escalate over their heads with a clear business case then you can often get a senior executive to overrule the attorneys and maybe even change the company policy going forward. But this is a huge amount of extra unpaid work, and potentially politically risky if you don't have a solid management chain.
Sounds like your employer's attorneys need to be brought to heel by management. Like most things, this is a problem of management not understanding that details matter.
Maybe I've just gotten lucky, but at companies I've worked for I've usually gotten the go-ahead to contribute upstream on open source projects so as long as it's something important for what we are working on. The only reason I didn't do a whole lot as part of my work at Google is because most of the open source projects I contributed to at Google were Google projects that I could contribute to from the google3 side, and that doesn't count.
It would probably be easier for these companies to pay Collabora or Igalia.
How could ffmpeg maintainers kill three major AWS product lines with an email?
In a follow-up tweet, Mark Atwood elaborates: "Amazon was very carefully complying with the licenses on FFmpeg. One of my jobs there was to make sure the company was doing so. Continuing to make sure the company was was often the reason I was having a meeting like that inside the company."
I interpret this as meaning there was an implied "if you screw this up" at the end of "they could kill three major product lines with an email."
Are you interpreting that as "if we violate the license, they can revoke our right to use the software"? And they use it in 3 products, so that would be really bad. That would explain having a compliance person.
Possibly Twitch, Amazon Prime Video, and another one that escapes my mind (AWS-related?).
AWS for sure (Elemental maybe?), but could also be Ring.
Easy: ffmpeg discontinues or relicenses some functionality that AWS depends on for those product lines and AWS is screwed. I've seen that happen in other open source projects.
But if it gets relicensed, they would still be able to use the current version. Amazon definitely would be able to fund an independent fork.
And then the argument for refusing to just pay ffmpeg developers gets even more flimsy.
The entire point here is to pay for the fixes/features you keep demanding, else the project is just going to do as it desires and ignore you.
More and more OSS projects are getting to this point as large enterprises (especially in the SaaS/PaaS spheres) continue to take advantage of those projects and treat them like unpaid workers.
Not really. Their whole reason for not funding open source is it essentially funds their competitors who use the same projects. That's why they'd rather build a closed fork in-house than just hand money to ffmpeg.
It's a dumb reason, especially when there are CVE bugs like this one, but that's how executives think.
Sounds like it would be a lot of churn for nothing; if they can fund a fork, then they could fund the original project, no?
They COULD, but history has shown they would rather start and maintain their own fork.
It might not make sense morally, but it makes total sense from a business perspective… if they are going to pay for the development, they are going to want to maintain control.
If they want that level of control, they should reimburse all the prior development too; i.e., buy the business.
As it stands, they're just abusing someone's gift.
Like jerks.
I always like to point out that "Open Source" was a deliberate watering-down of the moralizing messaging of Free Software to try and sell businesses on the benefits of developing software in the open.
> We realized it was time to dump the confrontational attitude that has been associated with "free software" in the past and sell the idea strictly on the same pragmatic, business-case grounds that motivated Netscape.
https://web.archive.org/web/20021001164015/http://www.openso...
There should be a "if you use this product in a for-profit environment, and you have a yearly revenue of $500,000,000,000+ ... you can afford to pay X * 100,000/yr" license.
There is also the AGPL.
That's the Llama license and yeah, a lot of people prefer this approach, but many don't consider it open source. I don't either.
In fact, we are probably just really lucky that some early programmers were kooky believers in the free software philosophy. Thank God for them. So much of what I do owes to the resulting ecosystem that was built back then.
Do they want control or do they really want something that works that they don't have to worry about?
The only reason for needing control would be if it was part of their secret sauce and at that point they can fork it and fuck off.
These companies should be heavily shamed for leeching off the goodwill of the OSS community.
If they can fund a fork, they can continue business as usual until the need arises
A fork is more expensive to maintain than funding/contributing to the original project. You have to duplicate all future work yourselves, third party code starts expecting their version instead of your version, etc.
With a bit of needless work the fixes could be copied and they would still end up funding them.
Something more dangerous would be "Amazon is already breaking the license, but the maintainers haven't yet put in the work to stop the infringement".
It still takes expensive humans to do this so they are incentivized to use the free labor.
Yes, definitely. I was just saying that if the license ever did change, they would move to an in-house library. In fact, they would probably release the library for consumer use as an AWS product.
ffmpeg cannot relicense anything because it doesn't own anything. The contributors own the copyright to their code.
Relicensing isn't necessary. If you violate the GPL with respect to a work you automatically lose your license to that work.
It's enough if one or two main contributors assert their copyrights. Their contributions are so tangled with everything else after years of development that it can't meaningfully be separated away.
I don’t know about ffmpeg, but plenty of OSS projects have outlined rules for who/when a project-wide/administrative decision can be made. It’s usually outlined in a CONTRIB or similar file.
Doubtful that's enough for a copyright grant. You'd need a signed CLA.
Wouldn’t that only affect new versions, with current versions still licensed under the old license?
Open up an Amazon media app and navigate around enough, and you'll encounter a page with all their "Third Party Software Licenses."
For instance, here's one for the Amazon Music apps, which includes an FFMpeg license: https://www.amazon.com/gp/help/customer/display.html?nodeId=...
I'd guess Prime Video heavily relies on ffmpeg, then you got Elastic Transcode and the Elemental Video Services. Probably Cloudfront also has special things for streaming that rely on ffmpeg.
The "kill it with an email" probably means that whoever said this is afraid that some usecase there wouldn't stand up to an audit by the usual patent troll mothercluckers. The patents surrounding video are so complex, old and plentiful that I'd assume full compliance is outright impossible.
AWS MediaConvert as well, which is a huge API (in the surface it covers); it sits under Elemental but is kinda its own thing. Willing to bet (though I don't know) that that is ffmpeg somewhere underneath.
The API manual for it is nearly 4000 pages and it can do insane stuff[1].
I had to use it at last job(TM), it's not terrible API wise.
[1] https://docs.aws.amazon.com/pdfs/mediaconvert/latest/apirefe... CAUTION: big PDF.
> "Don't come to me with problems, come with solutions"
The problem is, the issue in the article is explicitly named as "CVE slop", so if the patch is of the same quality, it might require quite some work anyway.
The linked report seems to me to be the furthest thing from "slop". It is an S-tier bug report that includes a complete narrative, crash artifacts, and detailed repro instructions. I can't believe anyone is complaining about what is tied for the best bug report I have ever seen. https://issuetracker.google.com/issues/440183164?pli=1
It's a good quality bug report.
But it's also a bug report about the decoder for "SANM ANIM v0" - a format so obscure almost all the search results are the bug report itself. Possibly a format exclusive to mid-1990s LucasArts games [1]
Pretty crazy that ffmpeg supports the codec in the first place, IMHO.
I can understand volunteers not wanting to sink time into maintaining a codec to play a video format that hasn't been used since the Clinton administration. gstreamer divides their plugins into 'good', 'bad' and 'ugly' to give them somewhere to stash unmaintained codecs.
[1] https://web.archive.org/web/20250419105551/https://wiki.mult...
It's a codec that is enabled by default at least on major Linux distributions, and it will be processed by ffmpeg without any extra flags. Anyone playing an untrusted video file without explicitly overriding the codec autodetection is vulnerable.
The format being obscure and having no real usage doesn't help when it's the attackers creating the files. The obscure formats are exposing just as much attack surface as the common ones.
> Pretty crazy that ffmpeg supports the codec in the first place, IMHO.
Yes.
Sure, it's a valid bug report. But I don't understand why there has been so much drama over this when all the ffmpeg folks have to do is say "sorry, this isn't a priority for us so we'll get to it as soon as we can" and put the issue in the backlog as a low priority. If Google wants the issue fixed faster, they can submit a fix. If they don't care enough to do that, they can wait. No big deal either way. Instead, ffmpeg is getting into a public tiff with them over what seems to be a very easily handled issue.
Yes, you're very right. They could simply have killed a codec that no one uses anymore. Or put it behind a compile flag, so if you really want, you can still enable it
But no. Intentionally or not, there was a whole drama created around it [1], with folks being blamed [2] for saying exactly what you said above, only because of their past (!) employers.
Instead of using the situation to highlight the need for more corporate funding for open source projects in general, it became a public s**storm, with developers questioning their future contributions.
[1] https://news.ycombinator.com/item?id=45806269
[2] https://x.com/FFmpeg/status/1985334445357051931
FFMPEG is upset because Google made the exploit public. They preferred that it remained a zero-day until they decided it was a priority.
I don't understand how anyone believes that behavior is acceptable.
Ffmpeg makes it trivial to enable and disable individual codecs at compile time. Perhaps it's the Linux distros that need to make a change here?
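For reference (flag and decoder names are from my memory of ffmpeg's configure and should be checked against your version), the build-time switch is along the lines of ./configure --disable-decoder=sanm. There is also a runtime allow-list; a minimal sketch, assuming the codec_whitelist AVOption behaves as documented:

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    /* Sketch: open an input while only permitting an explicit set of
     * decoders. "codec_whitelist" is an AVFormatContext option taking a
     * comma-separated list of decoder names; codecs outside the list
     * should be refused instead of silently autodetected and decoded. */
    int open_restricted(AVFormatContext **fmt, const char *path) {
        AVDictionary *opts = NULL;
        av_dict_set(&opts, "codec_whitelist", "h264,aac", 0);
        int ret = avformat_open_input(fmt, path, NULL, &opts);
        av_dict_free(&opts);
        return ret;
    }

The CLI equivalent (ffmpeg -codec_whitelist h264,aac -i input ...) should work too, if I'm reading the option docs right.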
I get that the ffmpeg people have limited time and resources, I get that it would be nice if Google (or literally anyone else) patched this themselves and submitted that upstream. But "everyone else down stream of us should compile out our security hole" is a terrible way to go about things. If this is so obscure of a bug that there's no real risk, then there's no need for anyone to worry that the bug has been reported and will be publicized. On the other hand, if it's so dangerous that everyone should be rebuilding ffmpeg from source and compiling it out, then it really needs to be fixed in the up stream.
Edit: And also, how is anyone supposed to know they should compile the codec out unless someone makes a bug report and makes it public in the first place?
Every change breaks somebody's workflow.
There are dozens if not hundreds of issues just like this one in ffmpeg, except for codecs that are infinitely more common. Google has been running all sorts of fuzzers against ffmpeg for over a decade at this point and it just never ends. It's a 20 year old C project maintained by poorly funded volunteers that mostly gives every media file ever the be-liberal-in-what-you-accept treatment, because people complain if it doesn't decode some bizarrely non-standard MPEG4 variant recorded with some Chinese plastic toy from 2008. Of course it has all of the out-of-bounds bugs. I poked around on the issue tracker for like 5 minutes and found several "high impact" issues similar to the one in TFA just from the last two or three months, including at least one that hasn't passed the 90 day disclosure window yet.
Nobody who takes security even remotely seriously should decode untrusted media files outside of a sandboxed environment. Modern media formats are in themselves so complex one starts wondering if they're actually Turing complete, and in ffmpeg the attack surface is effectively infinitely large.
The issue is CVE slop because it just doesn't matter if you consider the big picture.
Some example issues to illustrate my point:
https://issuetracker.google.com/issues/436511754 https://issuetracker.google.com/issues/445394503 https://issuetracker.google.com/issues/436510316 https://issuetracker.google.com/issues/433502298
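To make the sandboxing point concrete: even without a full seccomp/namespace sandbox, the cheapest mitigation is to decode untrusted input in a separate, resource-limited process. A rough POSIX sketch (the ffmpeg invocation and the limits are placeholders, not a hardened design):

    #include <sys/resource.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Decode in a child process so a decoder crash or compromise is at
     * least confined to a short-lived worker. Real sandboxes layer
     * seccomp filters, namespaces and filesystem isolation on top. */
    int decode_isolated(const char *path) {
        pid_t pid = fork();
        if (pid == 0) {
            struct rlimit cpu = { .rlim_cur = 10, .rlim_max = 10 };
            struct rlimit mem = { .rlim_cur = 1UL << 30, .rlim_max = 1UL << 30 };
            setrlimit(RLIMIT_CPU, &cpu);  /* cap CPU time at 10 seconds */
            setrlimit(RLIMIT_AS, &mem);   /* cap address space at 1 GiB */
            execlp("ffmpeg", "ffmpeg", "-i", path, "-f", "null", "-", (char *)NULL);
            _exit(127);                   /* exec failed */
        }
        int status = -1;
        if (pid > 0)
            waitpid(pid, &status, 0);
        return status;
    }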
I don't get why you think linking to multiple legitimate and high quality bug reports with detailed analysis and precise reproduction instructions demonstrates "slop". It is the opposite.
This is software that is directly or indirectly run by millions of people on untrusted media files without sandboxing. It's not even that they don't care about security, it's that they're unaware that they should care. It should go without saying that they don't deserve to be hacked just because of that. Big companies doing tons of engineering work to add defense in depth for use cases on their own infrastructure (via sandboxing or disabling obsolete codecs) doesn't help those users. Finding and fixing the vulnerabilities does.
All of these reports are effectively autogenerated by Big Sleep from fuzzing.
Again, Google has been doing this sort of thing for over a decade and has found untold thousands of vulnerabilities like this one. It is not at all clear to me that their doing so has been all that valuable.
Best response would be to drop this codec entirely, or have it off by default. At least distros should do that.
Yeah but as you can see from the bug report ffmpeg automatically triggers the codec based on file magic, so it is possible that if you run some kind of network service or anything that handles hostile data an attacker could trigger the bug.
Google is not paying anyone to find bugs. They are running AIs indiscriminately.
https://en.wikipedia.org/wiki/Project_Zero
Someone is making the tools to find these bugs. It's not like they're telling ChatGPT "go find bugs lol"
And running those models on large codebases like these isn't anywhere close to free either.
Someone started it running, they are responsible for the results.
Does it matter? Either it's a valid bug or it's not. Either it's of high importance or it's not.
Still, they are paying for the computing resources needed to run the AI/agents etc.
They certainly paid someone to run the so-called AIs.
So Chromium is going to drop video support because ffmpeg is full of bugs, just like libxslt?
/s
I’m an open source maintainer, so I empathize with the sentiment that large companies appear to create work for unpaid maintainers by disclosing security issues. But appearance is operative: a security issue is something that I (as the maintainer) would need to fix regardless of who reports it, or would otherwise need to accept the reputational hit that comes with not triaging security reports. That’s sometimes perfectly fine (it’s okay for projects to decide that security isn’t a priority!), but you can’t have it both ways.
If Google bears no role in fixing the issues it finds and nobody else is being paid to do it either, it functionally is just providing free security vulnerability research for malicious actors, because almost nobody can take over maintenance of, or switch away from, ffmpeg.
I don’t think vulnerability researchers are having trouble finding exploitable bugs in FFmpeg, so I don’t know how much this actually holds. Much of the cost center of vulnerability research is weaponization and making an exploit reliable against a specific set of targets.
(The argument also seems backwards to me: Google appears to use a lot of not-inexpensive human talent to produce high quality reports to projects, instead of dumping an ASan log and calling it a day. If all they cared about was shoveling labor onto OSS maintainers, they could make things a lot easier for themselves than they currently do!)
So your claim is that buggy software is better than documented buggy software?
I think so, yes. Certainly it's more effort to both find and exploit a bug than to simply exploit an existing one someone else found for you.
Yeah it's more effort, but I'd argue that security through obscurity is a super naive approach. I'm not on Google's side here, but so much infrastructure is "secured" by gatekeeping knowledge.
I don't think you should invoke the idea of naivete while failing to address the unhappy but perfectly simple reality: the ideal option doesn't exist, it's a fantasy that isn't actually available, and among the available options, none of them good, one is still worse than another.
"obscurity isn't security" is true enough, as far as it goes, but is just not that far.
And "put the bugs that won't be fixed soon on a billboard" is worse.
The super naive approach is ignoring that and thinking that "fix the bugs" is a thing that exists.
If I know it's a bug and I use ffmpeg, I can avoid it by disabling the affected codec. That's pretty valuable.
More fantasy. Presumes the bug only exists in some part of ffmpeg that can be disabled at all, and that you don't need, and that you are even in control over your use of ffmpeg in the first place.
Sure, in maybe 1 special lucky case you might be empowered. And in 99 other cases you are subject to a bug without the remotest control over it, since it's buried away within something you use; you don't even have the option of not using the surface service or app, let alone controlling its subcomponents.
The bug in question revolves around support for a codec that has never been in wide use, and was only in obscure use over 25 years ago.
The bug exists whether it's reported to the maintainers or not, so yeah, it's pretty naive.
It’s not a claim, it’s common sense; that’s why we have notice periods.
I guess the question for a person at Google who discovers a bug they don’t personally have time to fix is: should they report the bug at all? They don’t necessarily know if someone else will be able to pick it up. So the current “always report” rule makes sense, since you don’t have to figure out if someone can fix it.
The same question applies if they have time to fix it in six months, since that presumably still gives attackers a large window of time.
In this case the bug was so obscure it’s kind of silly.
My takeaway from the article was not that the report was a problem, but a change in approach from Google that they’d disclose publicly after X days, regardless of if the project had a chance to fix it.
To me it’s okay to “demand” that a for-profit company (e.g. Google) fix an issue fast, because they have resources. But to “demand” that an OSS project fix something within a certain (possibly tight) timeframe... well, I’m sure you know better than me that that’s not how volunteering works.
On the other hand as an ffmpeg user do you care? Are you okay not being told a tool you're using has a vulnerability in it because the devs don't have time to fix it? I mean someone could already be using the vulnerability regardless of what Google does.
>Are you okay not being told a tool you're using has a vulnerability in it because the devs don't have time to fix it?
Yes? It's in the license
>NO WARRANTY
>15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
If I really care, I can submit a patch or pay someone to. The ffmpeg devs don't owe me anything.
Not being told the existence of bugs is different from having a warranty on software. How would you submit a patch on a bug you were not aware of?
Google should provide a fix but it's been standard to disclose a bug after a fixed time because the lack of disclosure doesn't remove the existence of the bug. This might have to be rethought in the context of OSS bugs but an MIT license shouldn't mean other people can't disclose bugs in my project.
Google publicly disclosing the bug doesn't only let affected users know. It also lets attackers know how they can exploit the software.
Holding public disclosure over the heads of maintainers if they don't act fast enough is damaging not only to the project, but to end users themselves also. There was no pressing need to publicly disclose this 25 year old bug.
How is having a disclosure policy so that you balance the tradeoffs between informing people and leaving a bug unreported "holding" anything over the heads of the maintainers? They could just file public bug reports from the beginning. There's no requirement that they file non-public reports first, and certainly not everyone who does file a bug report is going to do so privately. If this is such a minuscule bug, then whether it's public or not doesn't matter. And if it's not a minuscule bug, then certainly giving some private period, but then also making a public disclosure is the only responsible thing to do.
All the license means is that I can’t sue them. It doesn’t mean I have to like it.
Just because software makes no guarantees about being safe doesn’t mean I want it to be unsafe.
If the software makes no guarantees about being safe, then you should assume it is unsafe.
Have you ever used a piece of software that DID make guarantees about being safe?
Every software I've ever used had a "NO WARRANTY" clause of some kind in the license. Whether an open-source license or a EULA. Every single one. Except, perhaps, for public-domain software that explicitly had no license, but even "licenses" like CC0 explicitly include "Affirmer offers the Work as-is and makes no representations or warranties of any kind concerning the Work ..."
I don't know what our contract terms were for security issues, but I've certainly worked on a product where we had 5 figure penalties for any processing errors or any failures of our system to perform its actions by certain times of day. You can absolutely have these things in a contract if you pay for it, and mass market software that you pay for likely also has some implied merchantability depending on jurisdiction.
But yes, things you get for free have no guarantees, and there should be no expectations put on the gift giver beyond not being actively, intentionally malicious.
Point. As part of a negotiated contract, some companies might indeed put in guarantees of software quality; I've never worked in the nuclear industry or any other industries where that would be required, so my perspective was a little skewed. But all mass-distributed software I've ever seen or heard of, free or not, has that "no warranty" clause, and only individual contracts are exceptions.
Also, "depending on jurisdiction" is a good point as well. I'd forgotten how often I've seen things like "Offer not valid in the state of Delaware/California/wherever" or "If you live in Tennessee, this part of the contract is preempted by state law". (All states here are pulled out of a hat and used for examples only, I'm not thinking of any real laws).
Anyone who has seen how the software is sausaged knows that. Security flaws will happen, no matter what the lawyers put in the license.
And still, we live in a society. We have to use software, bugs or not.
not possible to guarantee safety
This is a fantastic argument for the universe where Google does not disclose vulnerability until the maintainers had had reasonable time to fix it.
In this world the user is left vulnerable because attackers can use published vulnerabilities that the maintainers are too overwhelmed to fix.
This program discloses security issues to the projects and only discloses them after they have had a "reasonable" chance to fix it though, and projects can request extensions before disclosure if projects plan to fix it but need more time.
Google runs this security program even on libraries they do not use at all, where it's not a demand, it's just whitehat security auditing. I don't see the meaningful difference between Google doing it and some guy with a blog doing it here.
Google is a multi-billion dollar company, which is paying people to find these bugs in the first place.
That's a pretty core difference.
Great, so Google is actively spending money on making open source projects better and more secure. And for some reason everyone is now mad at them for it because they didn't also spend additional money making patches themselves. We can absolutely wish and ask that they spend some money and resources on making those patches, but this whole thing feels like the message most corporations are going to take is "don't do anything to contribute to open source projects at all, because if you don't do it just right, they're going to drag you through the mud for it" rather than "submit more patches"
They're actively making open source projects less secure by publishing bugs that the projects don't have the volunteers to fix
I saw another poster say something about "buggy software". All software is buggy.
Corporate Social Responsibility? The assumption is that the work is good for end users. I don't know if that's the case for the maintainers though.
The user is vulnerable while the problem is unfixed. Google publishing a vulnerability doesn't change the existence of the vulnerability. If Google can find it, so can others.
Making the vulnerability public makes it easy to find to exploit, but it also makes it easy to find to fix.
If it is so easy to fix, then why doesn't Google fix it? So far they've spent more effort in spreading knowledge about the vulnerability than fixing it, so I don't agree with your assessment that Google is not actively making the world worse here.
I didn't say it was easy to fix. I said a publication made it easy to find it, if someone wanted to fix something.
If you want to fix up old codecs in ffmpeg for fun, would you rather have a list of known broken codecs and what they're doing wrong; or would you rather have to find a broken codec first.
> If Google can find it, so can others.
While true, only Google has Google infrastructure; this presupposes that 100% of all published exploits would be findable without it.
>If Google can find it, so can others.
What a strange sentence. Google can do a lot of things that nobody can do. The list of things that only Google, a handful of nation states, and a handful of Google-peers can do is probably even longer.
Sure, but running a fuzzer on ancient codecs isn't that special. I can't do it, but if I wanted to learn how, codecs would be a great place to start. (in fact, Google did some of their early fuzzing work in 2012-2014 on ffmpeg [1]) Media decoders have been the vector for how many zero interaction, high profile attacks lately? Media decoders were how many of the Macromedia Flash vulnerabilities? Codecs that haven't gotten any new media in decades but are enabled in default builds are a very good place to go looking for issues.
Google does have immense scale that makes some things easier. They can test and develop congestion control algorithms with world wide (ex-China) coverage. Only a handful of companies can do that; nation states probably can't. Google isn't all powerful either, they can't make Android updates really work even though it might be useful for them.
[1] https://security.googleblog.com/2014/01/ffmpeg-and-thousand-...
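For anyone curious what that looks like mechanically, here is a toy libFuzzer-style harness in the spirit of ffmpeg's real tools/target_dec_fuzzer.c. The decoder name ("sanm") and the fixed dimensions are my assumptions for illustration; build with clang -fsanitize=fuzzer,address against the ffmpeg libraries:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/mem.h>

    /* Feed fuzzer-generated bytes to one decoder and let the sanitizers
     * flag any memory misuse. The real harness reuses contexts and
     * covers many decoders; this is the minimal shape of the idea. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        if (size == 0 || size > (1 << 20))
            return 0;
        const AVCodec *codec = avcodec_find_decoder_by_name("sanm");
        if (!codec)
            return 0;
        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        ctx->width  = 320;  /* plausible dimensions for a 1990s codec */
        ctx->height = 200;
        if (avcodec_open2(ctx, codec, NULL) < 0) {
            avcodec_free_context(&ctx);
            return 0;
        }
        /* ffmpeg decoders may read past the end of the input, so the
         * packet buffer must carry the documented padding. */
        uint8_t *buf = av_malloc(size + AV_INPUT_BUFFER_PADDING_SIZE);
        AVPacket *pkt = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();
        if (buf && pkt && frame) {
            memcpy(buf, data, size);
            memset(buf + size, 0, AV_INPUT_BUFFER_PADDING_SIZE);
            if (av_packet_from_data(pkt, buf, (int)size) == 0) {
                buf = NULL; /* the packet owns the buffer now */
                if (avcodec_send_packet(ctx, pkt) >= 0)
                    while (avcodec_receive_frame(ctx, frame) >= 0)
                        ; /* drain decoded frames */
            }
        }
        av_free(buf);
        av_frame_free(&frame);
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        return 0;
    }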
As clearly stated, most users of ffmpeg are unaware they are using it. Even if they knew about a vulnerability in ffmpeg, they wouldn't know they are affected.
Really, the burden is on those shipping products that depend on ffmpeg: they are the ones who have to fix the security issues for their customers. If Google is one of those companies, they should provide the fix in the given time.
But how are those companies supposed to know they need to do anything unless someone finds and publicly reports the issue in the first place? Surely we're not advocating for a world where every vendor downstream of the ffmpeg project independently discovers and patches security vulnerabilities without ever reporting the issues upstream right?
They could be, and the chances of that increase immensely once Google publishes it.
I have about 100x as much sympathy for an open source project getting time to fix a security bug than I do a multibillion dollar company with nearly infinite resources essentially blackmailing a small team of developers like this. They could -easily- pay a dev to fix the bug and send the fix to ffmpeg.
Since when are bug reports blackmail? If some retro game enthusiast discovered this bug and made a blog post about it that went to the front page of HN, is that blackmail? If someone running a fuzzer found this bug and dumped a public bug report into github is that blackmail? What if google made this report privately, but didn't say anything about when they would make it public and then just went public at some arbitrary time in the future? How is "heads up, here's a bug we found, here's the reproduction steps for it, we'll file a public bug report on it soon" blackmail?
In my case, yes, but my pipeline is closed. Processes run on isolated instances that are terminated without delay as soon as the workflow ends. Even if uncaught fatal errors occur, janitor scripts run to ensure instances are terminated on a fast schedule. This isn't something running on my personal device with random content provided by some unknown someone on the interwebs.
So while this might be a high security risk because it possibly could allow RCE, the real-world risk is very low.
Sure, but how?
Let's say that FFMPEG has a severity-10 CVE where a crafted stream can cause RCE. So what?
We are talking about software commonly deployed by end users to encode their own media, something that rarely comes in untrusted forms. For an exploit to happen, an attacker needs to get an exploited media file out to people who commonly transcode such files via FFMPEG. Not an easy task.
This sure does matter to the likes of google assuming they are using ffmpeg for their backend processing. It doesn't matter at all for just about anyone else.
You might as well tell me that `tar` has a CVE. That's great, but I don't generally go around tarring or untarring files I don't trust.
AIUI, (lib)ffmpeg is used by practically everything that does anything with video, including such definitely-security-sensitive things as Chrome, which people use to play untrusted content all the time.
Then maybe the Google chrome devs should submit a PR to ffmpeg.
Sure. And fund them.
hmm, didn't realize chrome was using ffmpeg in the background. That definitely makes it more dangerous than I supposed.
Looks like firefox does the same.
Firefox has moved some parsers to Rust: https://github.com/mozilla/mp4parse-rust
Firefox also does a lot of media decoding in a separate process.
Ffmpeg is a versatile toolkit used in lot of different places.
I would be shocked if any company working with user-generated video, from the likes of Zoom or TikTok or YouTube down to small apps all over, did not have it in their pipeline somewhere.
There are alternatives such as gstreamer and proprietary options. I can’t give names, but can confirm at least two moderately sized startups that use gstreamer in their media pipeline instead of ffmpeg (and no, they don’t use gst-libav).
One because they are a rust shop and gstreamer is slightly better supported in that realm (due to an official binding), the other because they do complex transformations with the source streams at a basal level vs high-level batch transformations/transcoding.
Upload a video to YouTube or Vimeo. They almost certainly run it through ffmpeg.
ffmpeg is also megabytes of parsing code, whereas tar is barely a parser.
It would be surprising to find memory corruption in tar in 2025, but not in ffmpeg.
If you use a trillion dollar AI to probe open source code in ways that no hacker could, you're kind of unearthing the vulnerabilities yourself if you disclose them.
> My takeaway from the article was not that the report was a problem, but a change in approach from Google that they’d disclose publicly after X days, regardless of if the project had a chance to fix it.
That is not an accurate description? Project Zero was using a 90 day disclosure policy from the start, so for over a decade.
What changed[0] in 2025 is that they disclose earlier than 90 days that there is an issue, but not what the issue is. And actually, from [1] it does not look like that trial policy was applied to ffmpeg.
> To me it’s okay to “demand” that a for-profit company (e.g. Google) fix an issue fast, because they have resources. But to “demand” that an OSS project fix something within a certain (possibly tight) timeframe... well, I’m sure you know better than me that that’s not how volunteering works
You clearly know that no actual demands or even requests for a fix were made, hence the scare quotes. But given you know it, why call it a "demand"?
[0] https://googleprojectzero.blogspot.com/2025/07/reporting-tra..., discussed at https://news.ycombinator.com/item?id=44724287
[1] https://googleprojectzero.blogspot.com/p/reporting-transpare...
The fact that details of the issue _will_ be disclosed publicly is an implicit threat. Sure it's not an explicit threat, but it's definitely an implicit threat. So the demand, too, is implicit: fix this before we disclose publicly, or else your vulnerability will be public knowledge.
When you publicize a vulnerability you know someone doesn't have the capacity to fix according to the requested timeline, you are simultaneously increasing the visibility of the vulnerability and name-calling the maintainers. All of this increases the pressure on the maintainers, and it's fair to call that a "demand" (quotes-included). Note that we are talking about humans who will only have their motivation dwindle: it's easy to say that they should be thick-skinned and ignore issues they can't objectively fix in a timely manner, but it's demoralizing to be called out like that when everyone knows you can't do it, and you are generally doing your best.
It's similar to someone cooking a meal for you, and you go on and complain about every little thing that could have been better instead of at least saying "thank you"!
Here, Google is doing the responsible work of reporting vulnerabilities. But any company productizing ffmpeg usage (Google included) should sponsor a security team to resolve issues in high profile projects like these too.
Sure, the problem is that Google is a behemoth and their internal org structure does not cater to this scenario, but this is what the complaint is about: make your internal teams do the right thing by both reporting, but also helping fix the issue with hands-on work. Who'd argue against halving their vulnerability finding budget and using the other half to fund a security team that fixes highest priority vulnerabilities instead?
> When you publicize a vulnerability you know someone doesn't have the capacity to fix according to the requested timeline
My understanding is that the bug in question was fixed about 100 times faster than Project Zero's standard disclosure timeline. I don't know what vulnerability report your scenario is referring to, but it certainly is not this one.
> and name-calling the maintainers
Except Google did not "name-call the maintainers" or anything even remotely resembling that. You just made it up, just like GP made up the "demands". It's pretty telling that all these supposed misdeeds are just total fabrications.
> When you publicize a vulnerability you know someone doesn't have the capacity to fix according to the requested timeline, you are simultaneously increasing the visibility of the vulnerability and name-calling the maintainers.
So how long should all bug reporters wait before filing public bugs against open source projects? What about closed source projects? Anyone who works in software knows to ship software is to always have way more things to do than time to do it in. By this logic, we should never make bug reports public until the software maintainers (whether OSS, Apple or Microsoft) has a fix ready. Instead of "with enough eyeballs, all bugs are shallow" the new policy going forward I guess will be "with enough blindfolds, all bugs are low priority".
Publishing the vulnerability is a demand to fix it. It threatens to cause harm to the reputation of the maintainer if left unfixed.
No, publishing the vulnerability is the right thing to do for a secure world because anyone can find this stuff including nation states that weaponize it. This is a public service. Giving the dev a 90 day pre warn is a courtesy.
Expecting a reporter to fix your security vulnerabilities for you is entitlement.
If your reputation is harmed by your vulnerable software, then fix the bugs. They didn’t create the hazard, they discovered it. You created it, and acting like you’re entitled to the free labor of those who gave you the heads up is insane, and trying to extort them for their labor is even worse.
This is all true (maybe not the extortion being worse, hard to say), but it doesn't change the fact that publishing the CVE is a demand to fix it.
That is standard practice. It is considered irresponsible to not publicly disclose any vulnerability.
The X days is a concession to the developers that the public disclosure will be delayed to give them an opportunity to address the issue.
> That is standard practice.
It's standard practice for commercially-sponsored software, and it doesn't necessarily fit volunteer maintained software. You can't have the same expectations.
Vulnerabilities should be publicly disclosed. Both closed and open source software are scrutinized by the good and the bad people; sitting on vulnerabilities isn't good.
Consumers of closed source software have a pretty reasonable expectation that the creator will fix it in a timely manner. They paid money, and the (generally) the creator shouldn't put the customer in a nasty place because of errors.
Consumers of open source software should have zero expectation that someone else will fix security issues. Individuals should understand this; it's part of the deal for us using software for free. Organizations that are making money off of the work of others should have the opposite of an expectation that any vulns are fixed. If they have or should have any concern about vulnerabilities in open source software, then they need to contribute to fixing the issue somehow. Could be submitting patches, paying a contractor or vendor to submit patches, paying a maintainer to submit patches, or contributing in some other way that betters the project. The contribution they pick needs to work well with the volunteers, because some of the ones I listed would absolutely be rejected by some projects -- but not by others.
The issue is that an org like Google, with its absolute mass of technical and financial resources, went looking for security vulnerabilities in open source software with the pretense of helping. But if Google (or whoever) doesn't finish the job, then they're being a piece of shit to volunteers. The rest of the job is reviewing the vulns by hand and figuring out patches that can be accepted with absolutely minimal friction.
To your point, the beginning of the expectation should be that vulns are disclosed, since otherwise we have known insecure software. The rest of the expectation is that you don't get to pretend to do a nice thing while _knowing_ that you're dumping more work on volunteers that you profit from.
In general, wasting the time of volunteers that you're benefiting from is rude.
Specifically, organizations profiting off of volunteer work and wasting their time makes them an extractive piece of shit.
Stop being a piece of shit, Google.
So no one should file public bug reports for open source software?
The entire conflict here is that norms about what's considered responsible were developed in a different context, where vulnerability reports were generated at a much lower rate and dedicated CVE-searching teams were much less common. FFmpeg says this was "AI generated bug reports on an obscure 1990s hobby codec"; if that's accurate (I have no reason to doubt it, just no time to go check), I tend to agree that it doesn't make sense to apply the standards that were developed for vulnerabilities like "malicious PNG file crashes the computer when loaded".
I think the discussion on what standard practice should be does need to be had. This seems to be throwing blame at people following the current standard.
If the obscure codec is not included by default, or cannot be triggered by any means other than being explicitly asked for, then it would be reasonable to tag it Won't Fix. If it can be triggered by other means, such as auto file-type detection on a renamed file, then it doesn't matter how obscure the feature is; the exploit would affect all.
What is the alternative to a time-limited embargo? I don't particularly like the idea of groups of people having exploits that they have known about for ages that haven't been publicly disclosed. That is the kind of information that finds itself in the wrong hands.
Of course companies should financially support the developers of the software they depend upon. Many do this for OSS in the form of having a paid employee that works on the project.
Specifically, FFMPEG seems to have a problem in that much of its resource shortage comes from alienating contributors. This isn't isolated to just this bug report.
FFMPEG does autodetection of what is inside a file, the extension doesn't really matter. So it's trivial to construct a video file that's labelled .mp4 but is really using the vulnerable codec and triggers its payload upon playing it. (Given ffmpeg is also used to generate thumbnails in Windows if installed, IIRC, just having a trapped video file in a directory could be dangerous.)
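This is easy to verify with libavformat directly; a small sketch showing that the probed content, not the file extension, decides which demuxer (and so which decoder) gets used:

    #include <stdio.h>
    #include <libavformat/avformat.h>

    /* Print the format libavformat actually detects for a file. Rename a
     * SMUSH/SANM stream to .mp4 and the content-based probe should still
     * route it to the SMUSH demuxer, extension notwithstanding. */
    int main(int argc, char **argv) {
        if (argc < 2)
            return 1;
        AVFormatContext *fmt = NULL;
        if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
            return 1;
        printf("detected format: %s\n", fmt->iformat->name);
        avformat_close_input(&fmt);
        return 0;
    }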
It is accurate. This is a codec that was added for archival and digital preservation purposes. It’s like adding a Unicode block for some obscure 4000 year old dead language that we have a scant half dozen examples of writing.
Here's the question:
Why is Google deliberately running an AI process to find these bugs if they're just going to dump them all on the FFmpeg team to fix?
They have the option to pay someone to fix them.
They also have the option to not spend resources finding the bugs in the first place.
If they think these are so damn important to find that it's worth devoting those resources to, then they can damn well pay for fixing them too.
Or they can shut the hell up and let FFmpeg do its thing in the way that has kept it one of the https://xkcd.com/2347/ pieces of everyone's infrastructure for over 2 decades.
Google is a significant contributor to ffmpeg by way of VP9/AV1/AV2. It's not like it's a gaping maw of open-source abuse, the company generally provides real value to the OSS ecosystem at an even lower level than ffmpeg (which is saying a lot, ffmpeg is pretty in-the-weeds already).
As to why they bother finding these bugs... it's because that's how Google does things. You don't wait for something to break or be exploited, you load your compiler up with santizers and go hunting for bugs.
Yeah this one is kind of trivial, but if the bug-finding infrastructure is already set up it would be even more stupid if Google just sat on it.
Then they can damn well pay for fixing them too.
So to be clear, if Google doesn't include patches, you would rather they don't make bugs they find in software public so other people can fix them?
That is, you'd rather a world where Google either knows about a vulnerability and refuses to tell anyone, or just doesn't look for vulnerabilities at all, over a world where Google looks for them and lets people know they exist but doesn't submit its own fix.
Why do you want that world? Why do you want corporations to reduce the already meager amounts of work and resources they put into open source software even further?
That's not a choice. You can decide if Google files bugs like this or not, you can't force them to fix them.
I would love to see Google contribute here, but I think that's a different issue.
Are the bug reports accurate? If so, then they are contributing just as if I found them and sent a bug report, I'd be contributing. Of course a PR that fixes the bug is much better than just a report, but reports have value, too.
The alternative is to leave it unfound, which is not a better alternative in my opinion. It's still there and potentially exploitable even when unreported.
But FFmpeg does not have the resources to fix these at the speed Google is finding them.
It's just not possible.
So Google is dedicating resources to finding these bugs
and feeding them to bad actors.
Bad actors who might, hypothetically have had the information before, but definitely do once Google publicizes them.
You are talking about an ideal situation; we are talking about a real situation that is happening in the real world right now, wherein the option of Google reports bug > FFmpeg fixes bug simply does not exist at the scale Google is doing it at.
A solution definitely ought to be found. Google putting up a few millionths of a percent of their revenue or so towards fixing the bugs they find in ffmpeg would be the ideal solution here, certainly. Yet it seems unlikely to actually occur.
I think the far more likely result of all the complaints is that Google simply completely disengages from ffmpeg and stops doing any security work on it. I think that would be quite bad for the security of the project - if Google can trivially find bugs at a high speed such that it overwhelms the ffmpeg developers, I would imagine bad actors can also search for them and find those same vulnerabilities Google is constantly finding, and if they know that those vulnerabilities very much exist, but that Google has simply stopped searching for them upon demand of the ffmpeg project, this would likely give them extremely high motivation to go looking in a place they can be almost certain they'll find unreported/unknown vulnerabilities in. The result would likely be a lot more 0-day attacks involving ffmpeg, which I do not think anyone regards as a good outcome (I would consider "Google publishes a bunch of vulnerabilities ffmpeg hasn't fixed so that everyone knows about them" to be a much preferable outcome, personally)
Now, you might consider that possibility fine - after all, the ffmpeg developers have no obligation to work on the project, and thus to e.g. fix any vulnerabilities in it. But if that's fine, then simply ignoring the reports Google currently makes is presumably also fine, no ?
Many people are already developing and fixing FFmpeg.
How many people are actively looking for bugs? Google, and then the other guys that don't share their findings, but perhaps sell them to the highest bidder. Seems like Google is doing some good work by just picking big, popular open source projects and seeing if they have bugs, even if they don't intend to fix them. And I doubt Google was actually using the Lucas Arts video format their latest findings were about.
However, in my mind the discussion whether Google should be developing FFmpeg (beyond the codec support mentioned elsewhere in the thread) or other OSS projects is completely separate from whether they should be finding bugs in them. I believe most everyone would agree they should. They are helping OSS in other ways though, e.g. https://itsfoss.gitlab.io/post/google-sponsors-1-million-to-... .
Nobody is demanding anything. Google is just disclosing issues.
This opens up transparency of ffmpeg’s security posture, giving others the chance to fix it themselves, isolate where it’s run or build on entirely new foundations.
All this assuming the reports are in fact pointing to true security issues. Not talking about AI-slop reports.
From TFA:
> The latest episode was sparked after a Google AI agent found an especially obscure bug in FFmpeg. How obscure? This “medium impact issue in ffmpeg,” which the FFmpeg developers did patch, is “an issue with decoding LucasArts Smush codec, specifically the first 10-20 frames of Rebel Assault 2, a game from 1995.”
This doesn't feel like a medium-severity bug, and I think "Perhaps reconsider the severity" is a polite reading. I get that it's a bug either way, but this leaves me with a vague feeling of the ffmpeg maintainer's time being abused.
The vulnerability in question is being severely underestimated. There are many other comments in this thread going into detail. UAF = RCE.
On the other hand, if the bug doesn't get filed, it doesn't get fixed. Sure, Google could spend some resources on fixing it themselves, but even if they did, would we not likely see complaints about Google flooding the maintainers with PRs for obscure 30-year-old codec bugs? And isn't a PR even more of a demand on the maintainer's time, since now there's actual code that needs to be reviewed, tests that need to be run, and another person waiting for a response on the other end?
"Given enough eyeballs, every bug is shallow" right? Well, Google just contributed some eyeballs, and now a bug has been made shallow. So what's the actual problem here? If some retro game enthusiast had filed the same but report would that be "abusing" the maintainer's time? I would think not, but then we're saying that a bug report can be "abusive" simply by the virtue of who submits it. And I'm really not sure "don't assign employees to research bugs in your open source dependencies and if you do certainly don't submit bug reports on what you find because that's abusive" is the message we want to be sending to corporations that are using these projects.
I think it’s exceedingly reasonable for a maintainer to dispute the severity of a vulnerability, and to ultimately decide the severity.
If it causes a crash, that's denial of service, so medium would be appropriate. But it's true that medium CVEs aren't that bad in most situations.
If you need this kind of security, build ffmpeg with only the decoders you find acceptable.
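As a sketch of what that looks like at build time (the component names below are illustrative; ./configure --help and ffmpeg -decoders list the real ones, and if memory serves the Smush video decoder is named sanm):

    # allow-list build: disable everything, re-enable only what you need
    ./configure --disable-everything \
        --enable-protocol=file \
        --enable-demuxer=mov \
        --enable-decoder=h264 --enable-decoder=aac

    # or keep the default build and drop just the decoder in question
    ./configure --disable-decoder=sanm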
> But appearance is operative: a security issue is something that I (as the maintainer) would need to fix regardless of who reports it
I think this is the heart of the issue and it boils off all of the unimportant details.
If it's a real, serious issue, you want to know about it and you want to fix it. Regardless of who reports it.
If it's a real, but unimportant issue, you probably at least want to track it, but aren't worried about disclosure. Regardless of who reports it.
If it's invalid, or AI slop, you probably just want to close/ignore it. Regardless of who reports it.
It seems entirely irrelevant who is reporting these issues. As a software project, ultimately you make the judgment call about what bugs you fix and what ones you don't.
But if it's a real, serious issue without an easy resolution, who is the burden on? It's not that the maintainers wouldn't fix bugs if they easily could. FFmpeg is provided "as is"[0], so everyone should be responsible for their side of things. It's not like the maintainers dumped their software on every computer and forced people to use it. Google should be responsible for their own security. I'm not adamant that Google should share the patch with others, but it would hardly be an imposition to Google if they did. And yes, I really do intend that you could replace Google with any party, big or small, commercial or noncommercial. It's painful, but no one has any inherent obligations to provide others with software in most circumstances.
[0] More or less. It seems the actual "as is" language is shied away from. Is there a meaningful difference?
But if no bug report is filed, then only Google gets the ability to "be responsible for their own security"; everyone else either has to independently discover and then patch the bug themselves, or wait until upstream discovers it.
In no reasonable reading of the situation can I see how anything Google has done here has made things worse:
1) Beforehand, the bug existed, but was either known by no one, or known only by people exploiting it. The maintainers weren't actively looking at or for this particular bug, and so it might have continued to go undiscovered for another 20 years.
2) Then Google was the only party that knew about it (modulo exploiters) and the only one that could take any steps to protect itself. The maintainers still didn't know, so everyone else would remain unprotected until they discovered it independently.
3) Now everyone knows about the issue, and everyone is informed enough to take whatever actions they deem appropriate to protect themselves. The maintainers know and can choose (or not) to patch the issue, remove the codec, or take any number of other steps, including deciding it's too low a priority on their list of todos and advising concerned people to disable it or compile it out if they are worried.
#3 is objectively the better situation for everyone except people who would exploit the issue. Would it be even better if Google made a patch and submitted that too? Sure it would. But that doesn't make what they have done worthless or harmful. And more than that, there's nothing that says they can't or won't do that. Submitting a bug report and submitting a fix don't need to happen at the same time.
It's hard enough convincing corporations to spend any resources at all on contributing to upstream. Dragging them through the mud for not submitting patches in addition to any bug reports they file is, in my estimation, less likely to get you more patches, and more likely to just get you fewer resources spent on looking for bugs in the first place.
I wasn't really thinking about the disclosure part, although I probably should have. I was focusing on the patching side of things. I think you're correct that disclosure is good, but in that case, I think it increases the burden of those with resources to collaborate to produce a patch.
Well, it's open source and built by volunteers, so nobody is obligated to fix it. If FFmpeg volunteers don't want to fix it or don't have the time/bandwidth to fix it, then they won't fix it. Like any other bug or CVE in any other open source project. The burden doesn't necessarily need to be on anyone.
They aren't obligated to fix CVEs until they're exploited; then, suddenly, they very much are obligated to fix them, and their image as FLOSS maintainers and as a project is very much tarnished.
I don't think anyone can force them to fix a CVE. The software is provided as-is. Can't be more straightforward than that.
This is technically true, but not plausible in the social sense.
OTOH they could disclose security issues AND send patches to close them.
If you've ever read about codependency, "need" is a relative term.
Codependency is when someone accepts too much responsibility, in particular responsibility for someone else or for things out of their control.
The answer is to have a "healthy neutrality".
I see you didn't read the article.
The problem isn't Google reporting vulnerabilities. It's Google using AI to find obscure bugs that affect 2 people on the planet, then making a CVE out of it, without putting any effort into fixing it themselves or funding the project. What are the ffmpeg maintainers supposed to do about this? It's a complete waste of everybody's time.
> The latest episode was sparked after a Google AI agent found an especially obscure bug in FFmpeg. How obscure? This “medium impact issue in ffmpeg,” which the FFmpeg developers did patch, is “an issue with decoding LucasArts Smush codec, specifically the first 10-20 frames of Rebel Assault 2, a game from 1995.”
I don't think that's an accurate description of the full scope of the problem. The codec itself is mostly unused, but the code path can be reached through ffmpeg's automatic file format probing, so a maliciously crafted payload (e.g. any run of ffmpeg that touches user input without disabling this codec) could possibly be exploited.
I feel this comment is far too shallow a take. I would expect that you, better than most of HN, know exactly what reputation security has as a cost center. Google uses ffmpeg internally: how many millions would they have to spend if they were required to not only create but also maintain ffmpeg themselves? How significant would that cost be at Google's scale?
I don't agree that the following framing is accurate, but I can mention it because you've already said the important part (that this issue exists, and merely knowing about it doesn't create required work). By announcing it and registering a CVE, Google is starting the clock. By some metrics it was already running, but the reputational risk clearly was not. This changes priorities and requires an urgent context switch; neither is a free action, especially not within FOSS.
As someone who believes everyone, individuals and groups alike, has a responsibility to contribute fairly, I would frame it this way: Google's behavior gives the appearance of weaponizing their cost center externally, given that this is something Google could easily fix, yet they shirked that responsibility onto unfunded volunteers.
To be clear, I think Google (Apple, Microsoft, etc.) can and should fund more of the OSS they depend on. But this doesn’t change the fact that vulnerability reports don’t create work per se, they just reveal work that the project can choose to act on or not.
Hopefully, until that changes, more people with influence will keep saying it, and keep saying it until it stops being true and important.
So thank you for saying the important thing too! :)
Fully on the FFmpeg team's side here. Many companies' approach to FOSS is to engage only when it sounds good for their marketing karma, and to leech otherwise.
Most of them would simply have pirated in the old days, and most FOSS licences give them a clear conscience to behave as they always have.
Google is, at no cost to FFMPEG:
1) dedicating compute resources to continuously fuzzing the entire project
2) dedicating engineering resources to validating the results and creating accurate and well-informed bug reports (in this case, a seriously underestimated security issue)
3) additionally doing this for codecs that Google likely does not even internally use or compile, purely for the greater good of FFMPEG's user base
Needless to say, while I agree Google has a penny to spare to fund FFMPEG, and should (although they already contribute), I do not agree with funding this maintainer.
Google is:
- choosing to do this of their own volition
- effectively just using their resources to throw bug reports over the wall unprompted.
- benefiting from the bugs getting fixed, but not contributing to them.
> - benefiting from the bugs getting fixed, but not contributing to them.
I would be very surprised if Google builds this codec when they build ffmpeg. If you run a/v codecs (like ffmpeg) in bulk, the first thing to do is sandbox the hell out of it. The second thing you do is strictly limit the containers and codecs you'll decode. Not very many people need to decode movies from old LucasArts games; for video codecs, you probably only want MPEG-1 through MPEG-4, H.26x, VP8, VP9, and AV1. And you'll want to have fuzzed those decoders as best you can too.
Nobody should be surprised that there's a security problem in this ancient decoder. Many of the eclectic codecs were written to mimic how the decoders that shipped with the content were written, and most of those were written assuming they were decoding a known-good file, because why wouldn't they be. There's no shame, that's just how it is... there's too much to proactively investigate, so someone doing fuzzing and writing excellent reports that include a diagnosis, the specific location of the errors, and a way to reproduce is providing a valuable contribution.
Could they contribute more? Sure. But even if they don't, they've contributed something of value. If the maintainers can't or don't want to address it, that'd be reasonable too.
Google is a contributor to FFMPEG.
FFMPEG, at no cost to Google, provided a core piece of their infrastructure for multiple multi-billion dollar product lines.
Then they can surely also provide a pull request for said CVE.
They could, but there is really no requirement on them to do so. The security flaw was discovered by Google, but it was not created by them.
Equally there is no requirement on ffmpeg to fix these CVEs nor any other.
And, of course, there is no requirement on end-users to run software from projects which do not consider untrusted-input-validation bugs to be high priority.
> They could, but there is really no requirement on them to do so.
I see this sort of sentiment daily. The sentiment that only what is strictly legal or required is what matters.
Sometimes, you know, you have to recognise that there are social norms and being a good person matters and has intrinsic value. A society only governed by what the written law of the land explicitly states is a dystopia worse than hell.
What's "strictly legal or required" of Google here is absolutely nothing. They didn't have to do any auditing or bug hunting. They certainly didn't have to validate or create a proper bug report, and there's no requirement whatsoever that they tell anyone about it at all. They could have found the bug, found it was being actively exploited, made their own internal patch and sat quietly by while other people remained vulnerable. All of that is well within what is "strictly legal or required".
Google did more than what is "strictly legal or required", and what they did was submit a good and valid bug report. But for some reason we're mad because they didn't do even more. Why? The world is objectively a better place for having this bug report, at least now people know there's something to address.
> And, of course, there is no requirement on end-users to run software from projects which do not consider untrusted-input-validation bugs to be high priority.
What's this even saying?
Then they're free to fork it and never use the upstream again.
Where do you draw the line? Do you want Google to just not inspect any projects that it can't fully commit to maintaining?
Providing a real CVE is a contribution, not a burden. The ffmpeg folks can ignore it, since by all indications it's pretty minor.
Personally, I want the $3.5 Trillion company to do more. So the line should be somewhere else.
So you don't have a line, you just want to move the goalposts and keep moving them?
It is my understanding that the commenters in FFMPEG's favor believe that Google is doing a disservice by finding these security vulnerabilities, as they require volunteer burden to patch, and that they should either:
1) allow the vulnerabilities to remain undiscovered & unpatched zero-days (stop submitting "slop" CVEs.)
2) supply the patches (which I'm sure the goalpost will move to the maintainers being upset that they have to merge them.)
3) fund the project (including the maintainers who clearly misunderstand the severity of the vulnerabilities and describe them as "slop") (no thank you.)
This entire thread defies logic.
What is the mission of Project Zero? Is it to build a vulnerability database, or is it to fix vulnerabilities?
If it's to fix vulnerabilities, it seems within reason to expect a patch. If the reason Google isn't sending a patch is because they truly think the maintainers can fix it better, then that seems fair. But if Google isn't sending a patch because fixing vulns "doesn't scale" then that's some pretty weak sauce.
Maybe part of the solution is creating a separate low priority queue for bug reports from groups that could fix it but chose not to.
To build on that - if it "doesn't scale" for one of the wealthiest companies in the world, it certainly doesn't scale for a volunteer project...
If you are deliberately shipping insecure software, you should stop doing that. In ffmpeg's case, that means either patching the bug, or disabling the codec. They refused to do the latter because they were proud of being able to support an obscure codec. That puts the onus on them to fix the bug in it.
I can tell you with 100% certainty that there are undiscovered vulnerabilities in the Linux kernel right now. Does that mean they should stop shipping?
I do think that contributing fuzzing and quality bug reports can be beneficial to a project, but it's just human nature that when someone says "you go ahead and do the work, I'll stand here and criticize", people get angry.
Rather than going off and digging up ten time bombs which all start counting down together, how about digging up one and defusing it? Or even just contributing a bit of funding towards the team of people working for free to defuse them?
If Google really wants to improve the software quality of the open source ecosystem, the best thing they could do is solve the funding problem. Not a lot of people set out to intentionally write insecure code. The only case that immediately comes to mind is the xz backdoor attempt, which again had a root cause of too few maintainers. I think figuring out a way to get constructive resources to these projects would be a much more impressive way to contribute.
This is a company that takes a lot of pride in being the absolute best of the best. Maybe what they're doing can be justified in some way, but I see why maintainers are feeling bullied. Is Google really being excellent here?
The ffmpeg authors aren't "shipping" anything; they're giving away something they make as a hobby with an explicit disclaimer of any kind of fitness for purpose. If someone needs something else, they can pay an engineer to make it for them.
Nah, ffmpeg volunteers don't owe you or Google anything. They are giving you free access to their open project.
> Providing a real CVE is a contribution, not a burden.
Isn't a real CVE (like any bug report) both a contribution and a burden?
Only if you subscribe to the "if we stop testing, the number of cases will drop!" philosophy.
> Providing a real CVE is a contribution, not a burden. The ffmpeg folks can ignore it, since by all indications it's pretty minor.
Re-read the article. There are CVEs and then there are "CVEs"; this one is the second kind, and they're shoving tons of those down the throats of unpaid volunteers while contributing nothing back.
What Google's effectively doing is like a food safety inspection company going to the local food bank to get the food that they operate their corporate cafeteria on just to save a buck, then calling the health department on a monthly basis to report any and all health violations they think they might have seen, while contributing nothing of help back to the food bank.
I have read the article. The expectation for a tool like ffmpeg is that regardless of what kind of file you put into it, it safely handles it.
This is an actual bug in submitted code. It doesn't matter that it's for some obscure codec, it's technically maintained by the ffmpeg project and is fair game for vulnerability reports.
Given that Google is also a major contributor to open-source video, this is more like a food manufacturer making sure that grocery stores are following health code when they stock their food.
Mind you, the grocery store has no obligation to listen to them in this metaphor and is free to just let the report/CVE sit for a while.
I am unsure why this untruth is being continuously parroted. It is false.
This is exploitable on a majority of systems as the codec is enabled by default. This is a CVE that is being severely underestimated.
This is why many have warned against things like the MIT licence. Yes, it gives you source code and easily gets incorporated into a lot of projects, but it comes at the cost of potential abuse.
Yes, GPLv3 carries a lot of ideology, but it was trying to limit excessive leeching.
Now that I have opened the flood gates of a 20 year old debate, time to walk away.
Google Project Zero just looks for security issues in popular open source packages, regardless of if Google itself even uses those packages or not.
So I'm not sure what GPLv3 really has to do with it in this case. If it was under a "no billion-dollar companies allowed" non-free-but-source-available license, this same thing would have happened, as long as the project was popular enough for Project Zero to have looked at it for security issues.
The difference is that Google does use it, though. They use it heavily. All of us in the video industry do - Google, Amazon, Disney, Sony, Viacom, or whoever. Companies you may have never heard of build it into their solutions that are used by big networks and other streaming services, too.
Right, Google absolutely should fund ffmpeg.
But opening security issues here is not related to that in any way. It's an obscure file format Google definitely doesn't use; the security issue is irrelevant to Google's usage of it.
The critique would make sense if Google was asking for ffmpeg to implement something that Google wanted, instead of sending a patch. But they don't actually care about this one, they aren't actually asking for them to fix it for their benefit, they are sending a notice of a security issue that only affects people who are not Google to ffmpeg.
What in GPLv3 or MIT mandates that Google fix this bug and submit a PR rather than simply sending a bug report and walking away? I don't see how this applies at all.
ffmpeg is already LGPL / GPLv2. How does the license choice factor into this at all?
There is nothing whatsoever that the GPL would do to change this situation. Bringing up the permissive license debate is a non-sequitur here.
AGPL, with no CLA that lets the owners relicense. Then we'll see if the using corporation fully believes in open source.
There's a reason Google turned into year 2000 Microsoft "it's viral!" re. the AGPL. They're less able to ignore the intent of the license and lock away their changes.
A bunch of people who make era-defining software for free. A labor of love.
Another bunch of people who make era-defining software where they extract everything they can. From customers, transactionally. From the first bunch, pure extraction (slavery, anyone?).
Irrespective of what Google does, security research is still useful for all of us.
They could adopt a more flexible policy for FOSS though.
Or they could contribute solutions to said bugs? Its not like they would distract that much from their bottom line
Exactly. The call-out is not "please stop doing security research". It is, "if you have a lot of money to spend on security research, please spend some of it on discovering the bugs, and some on fixing them (or paying us to fix them), instead of all of it on discovering bugs too fast for us to fix them in time".
Google is a major contributor to open-source video, to the point where it would not be viable without them.
Is it? I’ve gotten nothing but headaches from these automated CVE-seeking teams.
You got lower chances of getting hacked by a random file on the internet. At Project Zero's level they're also not CVE-seeking; a CVE count doesn't even matter at that scale, and it's not an independent researcher trying to become known.
I have yet to see one on any project I’ve been attached to that was actually exploitable under real circumstances. But the CVE hunting teams treat them all as if they were.
It's as useful as brute forcing one of your neighbor's 100 online passwords every day and writing it on the door of a random supermarket.
It's hard to find an easier good vs evil distinction than between Google and literally anybody else.
Microsoft ♥ Linux?
Vim vs. Emacs
Facebook?
Probably a tie!
> era-defining software for free
chill, nobody knows what ffmpeg is
Is this sarcasm? While it may be true that my mother does not know what ffmpeg is I'm almost positive she interacts with stuff that uses it literally every single day.
...every media post on IG/FB/X/YT/news sites/AI.
It'd be a silly license or condition, but a license that says employees of companies in the S&P 500 can't file bugs without an $X contribution, and can't expect a response in under Y days without a larger one, would be a funny way to combat it. Companies have no problem making software non-free or AGPL when it becomes inconvenient, so maybe they can put up or shut up.
Where in this bug report was there any expectation of a response? They filed a private bug report and have a policy of making private reports public in 90 days whether or not they get a response. How did the OSS world go from "with enough eyes all bugs are shallow" to "filing a bug report is demanding I respond to you"?
I get the idea of publicly disclosing security issues to large, well-funded companies that need to be incentivized to fix them. But I think open source has a good argument that, in terms of the risk/reward tradeoff, publicly disclosing these for small, resource-constrained open source projects probably creates a lot more risk than reward.
In addition to your point, it seems obvious that disclosure policy for FOSS should be “when patch available” and not static X days. The security issue should certainly be disclosed - when its responsible to do so.
Now, if Google or whoever really feels like fixing fast is so important, then they could very well contribute by submitting a patch along with their issue report.
Then everybody wins.
> In addition to your point, it seems obvious that disclosure policy for FOSS should be “when patch available” and not static X days.
So when the xz backdoor was discovered, you think it would have been better to sit on that quietly and try to both wrest control of upstream away from the upstream maintainers and wait until all the downstream projects had reverted the changes in their copies before making that public? Personally I'm glad that went public early. Yes there is a tradeoff between speed of public disclosure and publicity for a vulnerability, but ultimately a vulnerability is a vulnerability and people are better off knowing there's a problem than hoping that only the good guys know about it. If a Debian bug starts tee-ing all my network traffic to the CCP and the NSA, I'd rather know about it before a patch is available, at least that way I can decide to shut down my Debian boxes.
> ...then they could very well contribute by submitting a patch along with their issue report.
I don't want to discourage anyone from submitting patches, but that does not necessarily remove all (or even the bulk of) the work from the maintainers. As someone who has received numerous patches to multimedia libraries from security researchers, they still need review, they often have to be rewritten, and most importantly, the issue must be understood by someone with the appropriate domain knowledge and context to know if the patch merely papers over the symptoms or resolves the underlying issue, whether the solution breaks anything else, and whether or not there might be more, similar issues lurking. It is hard for someone not deeply involved in the project to do all of those things.
> it seems obvious that disclosure policy for FOSS should be “when patch available” and not static X days
This is very far from obvious. If Google doesn't feel like prioritising a critical issue, it remains irresponsible not to warn other users of the same library.
If that’s the case why give the OSS project any time to fix at all before public disclosure? They should just publish immediately, no? Warn other users asap.
Why do you think it has to be all or nothing? They are both reasonable concerns. That's why reasonable disclosure windows are usually short but not zero.
Full (immediate) disclosure, where no time is given to anyone to do anything before the vulnerability is publicly disclosed, was historically the default, yes. Coordinated vulnerability disclosure (or "responsible disclosure" as many call it) only exists because the security researchers that practice it believe it is a more effective way of minimizing how much the vulnerability might be exploited before it is fixed.
Part of the problem is that many of the issues are not really critical, no?
Unless the maintainers are incompetent or uncooperative, this does not feel like a good strategy. It is a good strategy on Google's side because it is easier for them to manage.
> publicly disclosing these for small, resource-constrained open source projects probably creates a lot more risk than reward.
You can never be sure that you're the only one in the world that has discovered or will discover a vulnerability, especially if the vulnerability can be found by an LLM. If you keep a vulnerability a secret, then you're leaving open a known opportunity for criminals and spying governments to find a zero day, maybe even a decade from now.
For this one in particular: AFAIK, since the codec is enabled by default, anyone who processes a maliciously crafted .mp4 file with ffmpeg is vulnerable. Being an open-source project, ffmpeg has no obligation to provide me secure software or to patch known vulnerabilities. But publicly disclosing those vulnerabilities means that I can take steps to protect myself (such as disabling this obscure niche codec that I'm literally never going to use), without any pressure on ffmpeg to do any work at all. The fact that ffmpeg commits themselves to fixing known vulnerabilities is commendable, and I appreciate them for that, but they're the ones volunteering to do that -- they don't owe it to anyone. Open-source maintainers always have the right to ignore a bug report; it's not an obligation to do work unless they make it one.
Vulnerability research is itself a form of contribution to open source -- a highly specialized and much more expensive form of contribution than contributing code. FFmpeg has a point that companies should be better about funding and contributing to open-source projects that they rely on, but telling security researchers that their highly valuable contribution is not welcome because it's not enough is absurd, and is itself an example of making ridiculous demands for free work from a volunteer in the open-source community. It sends the message that white-hat security research is not welcome, which is a deterrent to future researchers from ethically finding and disclosing vulnerabilities in the future.
As an FFmpeg user, I am better off in a world where Google disclosed this vulnerability -- regardless of whether they, FFmpeg, or anyone else wrote a patch -- because a vulnerability I know about is less dangerous than one I don't know about.
> publicly disclosing these for small, resource-constrained open source projects probably creates a lot more risk than reward.
Not publicly disclosing it also carries risk. Library users get the wrong impression that the library has no vulnerabilities, while numerous bugs have been reported but never surface because of such a FOSS disclosure policy.
You are missing the tiny little fact that apparently a large portion of infosec people are of the opinion that insecure software must not exist. At any cost. No shades of gray.
"They could shut down three product lines with an email"
If you (Amazon, in this case) can put it that way, it seems like throwing them 10 or 20 thousand a year would simply be a good insurance policy! Any benefits you might get in goodwill and influence are a bonus.
How do you think Jeff got a 500-million-dollar yacht? Not by writing checks.
But on a more serious note, it is crazy that between Google and Amazon they cannot fund them with 50k each per year, so that they can pay people to work on this.
Especially Google: with YouTube, they can very easily pay them more, 100k~200k easily.
What's wild is the importance and impact of the work/tool. And for Google and Amazon, $50k-$100k/yr isn't even a single engineer's salary to them...
And they get the tool + community good will, all for a rounding error on any part of their budgets...
Exactly.
That is why I said easily 100~200k. It will be a rounding error for them.
It is actually crazy that Google is not already hiring the main dev to work on ffmpeg, with all the use they give it on YouTube.
I also wonder if it is used by Netflix.
I'm just imagining them having 4 or 5 $250K-a-year security workers pumping out endless bug reports for one guy in Norway who works on ffmpeg at night and on weekends for free because he loves open source, and demanding he meet their program deadlines lol
> I also wonder if it is used by Netflix.
They do and it is.
https://netflixtechblog.com/the-making-of-ves-the-cosmos-mic...
https://netflixtechblog.com/for-your-eyes-only-improving-net...
It is really sad that neither Netflix nor Google has hired a couple of devs to work full time on ffmpeg.
I'd be amazed if any of the FAANGs isn't putting real money through ffmpeg somewhere.
So yeah, it'd be nice if they put some real money into ffmpeg; it wouldn't be a coffee allowance to any of them.
Almost everything that touches video at some point uses it.
Double funny considering new-grads who may polish up some UI features or rewrite components for the 10th time will get paid 200-400K TC at these same companies. Evidently these companies value something other than straight labor.
Yeah, sadly crazy.
If I had built up 500 million in a savings account to buy a yacht, giving 50k of that to FFmpeg devs would put off my ability to buy a yacht by nearly a whole day.
Boltzmann brain-wise it clearly doesn't make sense to wait that long.
Cheating on your wife would put off your ability to buy that yacht by 20 years, and yet here we are.
> How do you think Jeff got a 500-million-dollar yacht? Not by writing checks.
A rising tide lifts all yachts. If he had written the check, my instinct tells me, he would have enough for two yachts. Goodwill is an actual line item on 10-Qs and 10-Ks. I don't know why companies think it's worth ignoring.
What is the point of Google's Project Zero?
I'm not being dismissive. I understand the imperative of identifying and fixing vulnerabilities. I also understand the detrimental impact that these problems can potentially have on Google.
What I don't understand is the choice to have a public facing project about this. Can anyone shine a light on this?
PR.
And pushing forward the idea that "responsible disclosure" doesn't mean the software creator can just sit on a bug for as long as they want and act superior and indignant when the researcher gives up and publishes anyway because the creator is dragging their ass.
Project Zero's public existence came out of the post-Snowden period where Google was publicly pissed at the NSA/etc for spying on them (e.g. by tapping their fiber links).
A lot of their research involves stuff they personally benefit from if they were secure. ffmpeg, libxml2, various kinds of mobile device firmware, Linux kernels and userspace components, you name it.
Their security team gaining experience on other projects can teach them some more diversity in terms of (malware) approaches and vulnerability classes, which can in turn be used to secure their own software better.
For other projects there's some vanity/reputation to be gained. Having some big names with impressive resumes publicly talk about their work can help attract talent.
Lastly, Google got real upset that the NSA spied on them (without their knowledge, they can't help against warrants of course).
Then again, there's probably also some Silicon Valley bullshit money being thrown around. Makes you wonder why they don't invest a little bit more to pay someone to submit a fix.
I would imagine it's mostly a PR/marketing thing. That way the researchers can point to being part of something other people know about, and Google gets positive PR (though maybe not in this case) for spending resources on making software in general more secure.
you could not imagine and just read sources like https://en.wikipedia.org/wiki/Project_Zero
Here's a thread by Google's head of security that notes the ways they've contributed to FFmpeg over the years:
https://x.com/argvee/status/1986194852669964528
Unfortunately, nobody not on Twitter can see anything except the first message, which just appears to be some corporate-deflection-speak.
It's a trillion-dollar company. I'm sure they could rifle through their couch cushions and find more than enough money and under-utilised devs to contribute funding or patches.
Try xcancel instead: https://xcancel.com/argvee/status/1986194852669964528
Google is one of the largest contributors to OSS in the world, so they're already throwing literal millions at OSS?
They have thousands of highly paid employees working on open source; they are spending at least 1 billion per year.
Nice to see this coming from Google. Some other current and former Google employees were unfortunately not as professional in their interactions on Twitter regarding this issue.
They should set up a foundation/LLC if they don't have one already and require a support contract for fixing any bugs in niche codecs. Target the 95%-98% use cases for "free" work. If someone gives them a CVE on something ancient, just note it in the issue tracker and mark the problem as currently unsponsored. Have default build flags omit all the ancient/buggy stuff. If nobody is willing to pay to fix all the ancient crap, then nobody should be using it. But if someone is willing to pay, then it gets fixed.
The vulnerability in question is a Use After Free. Google used AI to find this bug, it would've taken them 3 seconds to fix it.
Burning cash to generate spam bug reports to burden volunteer projects when you have the extra cash to burn to just fix the damn issue leaves a very sour taste in my mouth.
If it takes 3 seconds to fix it, then how is this some massive burden on the maintainers? The bug report pointed to the relevant lines, the maintainers are the most familiar with the code and it probably would have taken them 1.5 seconds to not only fix it, but validate the fix. It probably took more time to write up the complaint about the bugs than to implement the fix.
Notably, the vulnerability is also in a part which isn't included by default and nobody uses. I'm not sure that even warrants a CVE? A simple bug report would have probably been fine. If they think this is really a CVE, a bug fix commit would have been warranted.
One problem here is that CVE scoring is basically entirely bugged, something scored 8.7 could be an RCE exploit or a "may be able to waste CPU" issue.
That's the difference between "it may or may not be that there's someone who cares" versus "no one should be running this software anywhere in the general vicinity of untrusted inputs".
> One problem here is that CVE scoring is basically entirely bugged, something scored 8.7 could be an RCE exploit or a "may be able to waste CPU" issue.
+100000
My favorite 8.x or higher CVEs are the ones where you would have to take untrusted user input, bypass all the standard ways to ingest and handle that type of data, and pass it into some internal function of a library. And then the end result is that a regex call becomes more expensive.
You’re right about scoring, at least largely. Let’s not conflate the CVE system and the CVSS system, though. They are related but distinct. CVE is just an identifier system.
It is included in most builds of ffmpeg, for example in most Linux packages or in the Windows build linked on ffmpeg.org that I use. But yeah, it's a very niche format that nobody uses.
It is included by default
AIUI there's no such thing as "really a CVE". A CVE is merely a standardized identifier for a bug so you can call it "CVE-2025-XXXXX" rather than "that use-after-free Google found in ffmpeg with AI." It doesn't imply anything else about the bug, except that it may impact security. The Linux kernel assigns one to every bugfix that may impact security (which is most kernel bugs) to avoid controversy about whether they should be assigned.
A use-after-free takes 3 seconds to fix if you defer the free until the end of the program. If you have to do something else, or you don't want to leak memory, then it probably takes longer than 3 seconds.
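For what it's worth, the cheap hardening against this bug class in C is to null the pointer at the free site, so a stale use crashes on a NULL dereference instead of silently touching freed memory; ffmpeg's libavutil ships a helper in this spirit (av_freep). A minimal sketch of the pattern:

    #include <stdlib.h>

    /* Free *ptr and clear it, so any later use of the pointer fails
       loudly instead of becoming a use-after-free. Sketch of the idea
       behind libavutil's av_freep(). */
    static void freep(void **ptr)
    {
        free(*ptr);
        *ptr = NULL;
    }

Of course, that only helps when the freed pointer is the one later used; a real fix still needs someone who understands the decoder's state handling.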
Probably the right solution is to disable this codec. You should have to make a choice to compile with it; although if you're running ffmpeg in a context where security matters, you really should be hand picking the enabled codecs anyway.
Yes - more than a sour taste. This is hideous behavior. It is the polar opposite of everything intelligent engineers have understood regarding free-and-open software for decades.
I don't understand the rationale for announcing that a vulnerability in project X was discovered before the patch is released. I read the Project Zero blogspot announcement, but it doesn't make much sense to me. Google claims this helps downstream users, but that feels like a largely non-issue to me.
If you announce a vulnerability (unspecified) is found in a project before the patch is released doesn't that just incentivize bad actors to now direct their efforts at finding a vulnerability in that project?
Maybe for a small project? I think the difference here is rather minimal. Everybody "knows" code often has security bugs so this announcement wouldn't technically be new information. For a large project such as ffmpeg, I doubt there is a lack of effort in finding exploits in ffmpeg given how widely it is used.
I don't see why actors would suddenly reallocate large amounts of effort especially since a patch is now known to be coming for the issue that was found and thus the usefulness of the bug (even if found) is rather limited.
The message sounds like a warning from someone who is fed up with feeling taken advantage of. They see the profit being made, and they're not even getting drops of it.
If this isn't addressed, it makes this repo a target for actors that don't care about the welfare of Amazon, Google etc.
It seems quite predictable that someone will see this as a human weakness and try to exploit it, my question is whether we'll get them the support they need before the consequence hits, or whether we'll claim this was a surprise after the fact.
> They see the profit being made, and they're not even getting drops of it.
Doesn't google routinely contribute to ffmpeg? Certainly there's a lot of commits from `@google.com` email addresses here: https://git.ffmpeg.org/gitweb/ffmpeg.git?a=search&h=HEAD&st=...
Is it time for FFmpeg to relicense as AGPL? That'd be fun to witness.
Watch places like Amazon and Google suddenly stop updating and trying to find alternatives.
Like how Apple stopped shipping up-to-date GNU tools in 2008 because of GPLv3. That move showed me that Apple did not want you to use your computer as your computer.
Well, to continue that timeline: "Big Tech" freezes their version at the last GPL'ed release, and each commences their own (non-trivial) effort to make their own version (assuming the last GPL'ed version was not feature-complete for all their future uses).
And of course, they won't share with each other. So another driver would be fear of a slight competitive disadvantage vs other-big-tech-monstrosity having a better version.
Now, in this scenario, some tech CEO, somewhere has this brilliant bright spark.
"Hey, instead of dumping all these manhours & resources into DIYing it, with no guarantee that we still won't be left behind - why don't we just throw 100k at the original oss project. We'll milk the publicity, and ... we won't have to do the work, and ... my competitors won't be able to use it"
I quite like this scenario.
I think this would be hard. It also makes not a whole lot of sense IMO.
People need to think about what licence they want to use for a project.
It'll just get forked (see Terraform).
Terraform didn't get licensed as AGPL, just some weird proprietary license.
It'll still cause Google and many others to panic, but weird and custom licenses are even worse for attracting business than open source ones.
Never work for free. It's a complete market distortion and leads to bad actors taking advantage of you and your work.
Sometimes it's hard: for many kinds of projects, I don't think anyone would use them if they were not open source (or at least source-available). Just like I wouldn't use a proprietary password manager, and I wouldn't use WhatsApp if I had a choice. Rather I use Signal because it's open source.
How do you get people to use your app if it's not open source, and therefore not free?
For some projects, it feels better to have some people use it even if you did it for free than to just not do it at all (or do it and keep it in a drawer), right?
I am wondering, I haven't found a solution. Until now I've been open sourcing stuff, and overall I think it has maybe brought more frustration, but on the other hand maybe it has some value as my "portfolio" (though that's not clear).
You can just use the GPL, then it's free, but your labour cannot be so easily profited from by big corps
But it can be profited from by not-so-big corps, so I'm still working for free.
Also I have never received requests from TooBigTech, but I've received a lot of requests from small companies/startups. Sometimes it went as far as asking for a permissive licence, because they did not want my copyleft licence. Never offered to pay for anything though.
That's fine. Are they required to work for Google? I mean, they are independent and can decide on their own.
Corporations extract a ton of value from projects like ffmpeg. They can either pay an employee to fix the issues or setup some sort of contract with members of the community to fix bugs or make feature enhancements.
There is precedent for this: https://sqlite.org/consortium.html
I love the spirit of working for free on a project of passion. But yes it only takes a few bad actors to totally exploit it.
Yep.
We're well past the point where any serious security team should be able to submit a fix along with a bug report.
Does Google seriously not have a whole team of people who help maintain ffmpeg?
https://github.com/search?q=repo%3AFFmpeg%2FFFmpeg+google.co...
Yes. But they don’t upstream. Why would they?
Great, they can fix the bugs being filed by another part of their company
So would you rather Google have a secure ffmpeg while us plebian individual users continue to have an insecure ffmpeg?
They probably fixed in the internal version.
> “The position of the FFmpeg X account is that somehow disclosing vulnerabilities is a bad thing. Google provides more assistance to open source software projects than almost any other organization, and these debates are more likely to drive away potential sponsors than to attract them.”
This position is likely to drive away maintainers. Generally the maintainers need these projects less than the big companies that use them do. I'm not sure what Google's endgame is.
I doubt there's an endgame in mind. It's probably small teams trying to optimize their quarterly KPIs
> FFmpeg X account is that somehow disclosing vulnerabilities is a bad thing
I mean, I follow that account and never got this impression from them at all.
Is it unreasonable to ask that if a massive company funds someone to find a CVE in an open source project, they should also submit a patch? Google is a search company. Seems kind of... evil... to pay your devs to find holes in something with nothing to do with searching, then refuse to pay them to fix the problem they noticed.
It’s a reproducible use-after-free in a codec that ships by default with most desktop and server distributions.
The recent iOS zero-day (CVE-2025-43300) targeted the rarely used DNG image format. How long before this FFMPEG vulnerability is exploited to compromise legacy devices in the wild, I wonder?
I’m not a fan of this grandstanding for arguably questionable funding. (I surely would not fund those who believe these issues are slop.) I’d like to think most contributors already understand the severity and genuinely care about keeping FFMPEG secure.
Bugs in little-used corners of the project are a massive red flag; that's how some of the most serious OpenSSL bugs emerged. If the code is in there, and someone can trigger it with a crafted input, then it is as bad as any other bug.
Honestly, I kind of think that ffmpeg should just document that it's not secure and that you're expected to run it in a sandbox if you plan to use it on possibly-malicious input. All the big cloud users and browsers are doing this already, so it would hardly even change anything.
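As a sketch of one cheap layer of that sandboxing (assuming a POSIX system, nothing ffmpeg-specific, and with error checks omitted for brevity): cap the child process's memory and CPU before exec'ing it on untrusted input. Real deployments layer seccomp filters and namespace isolation on top of limits like these.

    #include <sys/resource.h>
    #include <unistd.h>

    /* Run ffmpeg (or any decoder) under hard resource caps: one layer
       of a sandbox, not a complete one. Only returns on exec failure. */
    static int run_confined(char *const argv[])
    {
        struct rlimit mem = { 512UL << 20, 512UL << 20 }; /* 512 MiB address space */
        struct rlimit cpu = { 30, 30 };                   /* 30 seconds of CPU     */
        setrlimit(RLIMIT_AS,  &mem);
        setrlimit(RLIMIT_CPU, &cpu);
        return execvp(argv[0], argv);
    }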
ffmpeg is complaining that security bugs are such a drag that it's driving people away from their hobby/passion projects. Well, if fixing security bugs isn't your passion, why not just say that? Say it's not your priority, and if someone else wants it to be a priority, they can write the patches. Problem solved?
There are some rhetorical sleights of hand here. If Google does the work to find and report vulnerabilities, great; that's a nice contribution regardless of who provides it. The OSS developer can ignore it and accept whatever consequences follow. They are not forced to fix it except by themselves.
The many large corporations should be funding these tools they depend on, to increase the maintainers' time allocations and thus their ability to be responsive, but this isn't an either/or. This type of thinking erodes the communities of such projects and the minds of the contributors.
FWIW I've totally been that developer trapped in that perspective so I empathize, there are simply better mental stances available.
I would suggest that FFmpeg spin up a commercial arm that takes support contracts with Google, Amazon, etc., but on a tight leash so that it does not undermine the open source project. It would need clear guidance as to what the commercial arm does and does not do.
Probably could pull in millions per year.
While I don't think FFmpeg's response is a great one ("fund us or stop submitting bugs"); I think Google is being pretty shitty here. For a company that prides itself in its engineering prowess and contributions to the OSS community (as they like to remind me all the time) to behave this way is just all around shitty.
Submit the bug AND the patch and be done with it; don't make it someone else's problem when it's an OSS library/tool. A for-profit vendor? Absolutely. But this? Hell naw.
If FFmpeg had a commercial operation it would constantly be getting sued for IP issues.
In general it is not good when companies get too much power. See how Shopify is eating away at the Ruby infrastructure right now, after Ruby Central panicked when Shopify blackmailed the community by taking away funding. Of course it is their money, but the whole ecosystem suddenly submitting to one huge company really, really sucks to no end. Hobbyists are at their weakest here.
If only it were that simple. The reality is that this is a very slippery slope, and you won't get a support contract with a "tight leash" just like that.
I understand ffmpeg being angry at the workload but this is how it is with large open source projects. Ffmpeg has no obligation to fix any of this. Open source is a gift and is provided as is. If Google demanded a fix I could see this being an issue. As it is right now it just seems like a bad look. If they wanted compensation then they should change the model, there's nothing wrong with that. Google found a bug, they reported it. If it's a valid bug then it's a valid bug end of story. Software owes it to its users to be secure, but again it's up to the maintainers if they also believe that. Maybe this pushes Google to make an alternative, which I'd be excited for.
> Software owes it to its users to be secure.
There is no such obligation.
There is no warranty and software is provided AS-IS explicitly by the license.
> Google found a bug
That does not impact their business or their operations in any way whatsoever.
> If it's a valid bug then it's a valid bug end of story.
This isn't a binary. It's why CVEs have a whole sordid scoring system to go along with them.
> Software owes it to its users to be secure
ffmpeg owes me nothing. I haven't paid them a dime.
It doesn't matter if it affects their business or not. They found an issue and they reported it. Ffmpeg could perhaps request that they report it privately. Google has a moral duty to report the bug.
Software should be correct and secure. Of course this can't always be the case, but it's what we should strive for. I think that's the baseline.
> ffmpeg owes me nothing. I haven't paid them a dime.
That is true. At the same time Google also does not owe the ffmpeg devs anything either. It applies both ways. The whole "pay us or we won't fix this" makes no sense.
> Google also does not owe the ffmpeg devs anything either.
Then they can stop reporting bugs with their asinine one-size-fits-all "policy." It's unwelcome and unnecessary.
> It applies both ways.
The difference is I do not presume things upon the ffmpeg developers. I just use their software.
> The whole "pay us or we won't fix this" makes no sense.
Pay us or stop reporting obscure bugs in unused codecs found using "AI" scanning, or at least, if you do, then change your disclosure policy for those "bugs." That's the actual argument and is far more reasonable.
I for one welcome it. I want to know if there are some vulnerabilities in the software I use.
> That does not impact their business or their operations in any way whatsoever.
I don't know what tools and backends they use exactly, but working purely by statistics, I'm sure some place in Google's massive cloud compute empire is relying on ffmpeg to process data from the internet.
And they're processing old LucasArts codec videos with it? Which is the specific bug report in question.
>Ffmpeg has no obligation to fix any of this
I read this as: nobody wants CVEs open on their product, so you might feel forced to fix them. I find it more understandable if we talk about web frameworks: WordPress doesn't want security CVEs open for months or years, or users would be upset that they introduce new features while neglecting safety.
I am a nobody, and whenever I find a bug I work extra to attach a fix to the same issue. Google should do the same.
Why is there an onus on Google to fix this? Bug bounty hunters aren’t required to submit a patch even when the target is open source.
Now should Google? Probably, it would be nice but no one has to. The gift from Google is the discovery of the bug.
Looks like this was a security issue.
I don't consider a security issue to be a "standard bug." I need to look at it, and [maybe] fix it, regardless of who reported it.
But in my projects, I have gotten requests (sometimes, demands) that I change things like the published API (a general-purpose API), to optimize some niche functionality for one user.
I'll usually politely decline these, and respond with an explanation as to why, along with suggestions for them to add it, after the fact.
It’s a security issue for a stream type almost nobody uses. It’s a little like saying your graphics program in 2025 is exploitable by a malformed PCX file, or your music player has a security bug only when playing an Impulse Tracker module.
Sure, triage it. It shouldn’t be publicly disclosed within a week of the report though, because the fix is still a relatively low priority.
Security is adversarial. It doesn't matter whether the users intentionally use the vulnerable codec. What matters is whether an adversary can make the users use it. Given that the codec is compiled in by default on Ubuntu, and given that (IIUC) the bug would already be triggered by ffmpeg's file format probing, it seems pretty likely that the answer to that is yes.
Yes, security is by definition adversarial. Thanks for the most basic lesson.
How are you getting ffmpeg to process a stream or file type different from the one you’re expecting? Most use cases of ffmpeg are against known input and known output types. If you’re just stuffing user-supplied files through your tools, then yes you have a different threat model.
> How are you getting ffmpeg to process a stream or file type different from the one you’re expecting?
... That is how ffmpeg works? With default settings it auto-detects the input codec from the bitstream, and the output codec from the extension. You have to go out of your way to force the input codec and disable the auto-detection, and I don't think most software using ffmpeg as a backend would force the user to manually do it, because users can't be trusted to know those details.
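For illustration, pinning both the container and the video decoder looks something like the line below (the flags placed before -i apply to the input; the long name is, if I remember correctly, what ffmpeg calls its mp4-family demuxer). Hardly something most end users would ever type on their own:

    ffmpeg -f mov,mp4,m4a,3gp,3g2,mj2 -c:v h264 -i untrusted.mp4 -f null -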
If no one uses the stream type, then not fixing the bug won't hurt.
The people who do use the stream type are at risk, and have been at risk all along. They need to stop using the stream type, or get the bug fixed, or triage the bug as not exploitable.
People volunteer to make billionaires even more profit - crazy world. Who even has the time and resources to volunteer so much??? I don't get all this at all.
The Tragedy of the Bazaar.
FFmpeg should just dual license at this point. If you're wanting shit fixed. You pay for it (based on usage) or GTFO. Should solve all of the current issues around this.
You mean, Google reports a bug, and ffmpeg devs say "GTFO"? Let's assume this is a real bug: is that what you would want the ffmpeg developers to say to Google?
I absolutely understand the issue that a filthy-rich company tries to leech off of real unpaid humans. I don't understand how that issue leads to "GTFO, we won't fix these bugs". That makes no sense to me.
Is it me, or does the linked article seem heavily co-authored by AI? The cadence is very monotonic and dull.
Bug reports are still good... but yeah, they really should contribute to development too.
What happens if ffmpeg ignores (after triage) the minor CVEs?
Life goes on.
I am fairly confident that this article is largely AI-generated. More generally, the whole site appears to be heavy on AI slop, e.g.: https://thenewstack.io/how-ai-is-pushing-kubernetes-storage-...
And maybe it's fine to have AI-generated articles that summarize Twitter threads for HN, but this is not a good summarization of the discussion that unfolded in the wake of this complaint. For one, it doesn't mention a reply from Google security, which you would think should be pretty relevant here.
Like the bug report in question... poetic.
The bug report in question is obviously written by a human:
https://issuetracker.google.com/issues/440183164?pli=1
It's of excellent quality. They've made things about as easy for the FFmpeg devs as possible without actually fixing the bug, which might entail legal complications they want to avoid.
AI was only used to find the bug.
Why don't they just ignore Google and work at their own pace?
This whole thing is sure a tangle.
Open source projects want people to use their work. Google wants bugs -- especially security ones -- found and fixed fast. Both goals make sense. The tension starts when open source developers expect payment for fixes, and corporations like Google expect fixes for free.
Paying for bug fixes sounds fair, but it risks turning incentives upside down. If security reports start paying the bills, some maintainers might quietly hope for more vulnerabilities to patch. That's a dangerous feedback loop.
On the other hand, Google funding open source directly isn't automatically better. Money always comes with strings. Funding lets Google nudge project priorities, intentionally or not -- and suddenly the "open" ecosystem starts bending toward corporate interests.
There's no neat solution. Software wants to be used. Bugs want to be found and fixed. But good faith and shared responsibility are supposed to be the glue that holds the open source world together.
Maybe the simplest fix right now is cultural, not technical: fewer arguments on Twitter, more collaboration, and more gratitude. If you rely on open source, donate to the maintainers who make your life easier. The ecosystem stays healthy when we feed it, not when we fight over it.
Amusing. I suppose the intended optimal behavior is for Google to fix internally and then open the public PR. Less optimal for us normal users, since the security issue will be publicly visible in the PR until merging, though it won't affect Google (who will carry the fixed code before disclosure).
Wouldn't they just fork it, fix their own bugs and stop contributing at all?
Forking puts Google in another kind of hell: now they have to pay someone to maintain their fork! Maybe for a project that's well and fully complete that's OK. But something like FFmpeg is going to get updated all the time, as the specs for video codecs are tweaked or released.
Their choices become:
- maintain a complex fork, constantly integrating from upstream;
- or pin to some old version and maybe go through a Herculean effort to rebase when something they truly must have merges upstream;
- or genuinely fork it and employ an expert in this highly specific domain to write what will often end up being parallel features and security patches to mainline FFmpeg.
Or, of course, pay someone already doing OSS work to fix it in mainline. Which is the beauty of open source: that's genuinely the least painful option, and it also happens to be the one that benefits the community the most.
If you're going to fix the bug, why not in the main project?
Any time I have tried to fix a bug in an open source project, I was immediately struck down with abusive attitudes about how I didn't do something exactly the way they wanted, even though their way isn't really documented.
If that's what I have to expect, I'd rather not even interact with them at all.
I don't think this is what typically happens. Many of my bug reports were handled.
For instance, I reported to the xorg-bug tracker that one app behaved oddly when I did --version on it. I was batch-reporting all xorg-applications via a ruby script.
Alan Coopersmith, the elderly hero that he is, fixed this not long after my report. (It was a real bug; granted, a super-small one, but still.)
I could give many more examples here. (I don't remember the exact date but I think I reported this within the last 3 years or so. Unfortunately reporting bugs in xorg-apps is ... a bit archaic. I also stopped reporting bugs to KDE because I hate bugzilla. Github issues spoiled me, they are so easy and convenient to use.)
I feel you, but that's a different issue from the one in this thread, which is that, in general, if maintainers ignore bug reports, their projects will get forked and the issues fixed anyway, just not in the main project.
If you really care, I would suggest helping with documenting how the process should work for others to reference going forward.
I would if people were not abusive to me in the first place, but that attitude just turns me off to the entire project.
That costs cash and the big tech companies are a little short at the moment.
They undoubtedly already maintain a fork of the project, considering that they have a private compression accelerator that nobody else has access to.
Probably what they want to do once the original project burns out
Google internally maintaining a fork that attempts to track upstream has an ongoing cost that increases over time,
vs. spamming OSS maintainers with slop reports costs Google nothing
Is there really slop here though? It sounds like the specific case identified was a real use after free in an obscure file format but which is enabled by default.
If it was slop they could complain that it was wasting their time on false or unimportant reports, instead they seem to be complaining that the program reported a legitimate security issue?
This is dumb. Obscurity doesn’t create security. It’s unfortunate if ffmpeg doesn’t have the money to fix reported bugs, but that doesn’t mean they should stay ignorant of them. I don’t see any entitlement out of Google either - I expected this article would have a GH issue thread with a whiny YouTube engineer yelling at maintainers.
The first thing you can do is actually read the article. The question is not about the security reports but about Google's policy of disclosing the vulnerability after x days. That works for huge lazy corps, but not for OSS projects.
In practice, it doesn't matter all that much whether the software project containing the vulnerability has the resources to fix it: if a vulnerability is left in the software, undisclosed to the public, the impact to the users is all the same.
I believe, and I think most security researchers do too, that it would be incredibly negligent for someone who has discovered a security vulnerability to let it go unfixed indefinitely without even disclosing its existence. Certainly, ffmpeg developers do not owe security to their users, but security researchers consider they have a duty to disclose vulnerabilities, even ones that go unfixed (and I think most people would prefer to know an unfixed vulnerability exists than to get hit by a 0-day attack). There has to be a point where you disclose; the deadline can never be indefinite, otherwise you are very likely allowing 0-day attacks to occur. (In fact, if this whole thing had never happened and we instead got headlines in a year saying "GOOGLE SAT ON CRITICAL VULNERABILITY INVOLVED IN MASSIVE HACK", people would consider what Google did far worse.)
To be clear, I do in fact think it would be very much best if Google were to use a few millionths of a percent of their revenue to fund ffmpeg, or at least make patches for vulnerabilities. But regardless of how much you criticize the lack of patches accompanying vulnerability reports, I would find it much worse if Google were to instead not report or disclose the vulnerability at all, even if they did so at the request of developers saying they lacked resources to fix vulnerabilities.
Agreed that obscurity is not security. However, we don't want to make it easy for hackers to get a catalog of vulnerabilities to pick and choose from. I think the issue is public disclosure of vulnerabilities after a deadline. The hobbyists just can't keep up.
Hmmmmmm. I can understand both points but ...
Google is under no obligation to work on FFmpeg.
Google leveraging AI to spam ffmpeg devs with bugs that range from real to obscure to wrongly reported may be annoying. But even then, I still don't think Google is to be held accountable for reporting bugs, nor is it required to fix them. Note: I do think Google should help pay the costs and whatnot. If they were a good company, they would not only report bugs but also have developers fix them; but they are selfish and greedy, everyone knows that. Even then, they are not responsible for bugs in ffmpeg. And IF the bug report is valid, then I also see no problem.
The article also confuses things. For instance:
"Many in the FFmpeg community argue, with reason, that it is unreasonable for a trillion-dollar corporation like Google, which heavily relies on FFmpeg in its products, to shift the workload of fixing vulnerabilities to unpaid volunteers"
How could Google do that? It is not Google's decision; it is up to the volunteers. If they refuse to fix bugs reported by Google, that is fine. But it is THEIR decision, not Google's.
"With this policy change, GPZ announces that it has reported an issue on a specific project within a week of discovery, and the security standard 90-day disclosure clock then starts, regardless of whether a patch is available or not."
Well, many opinions here. I think ALL bugs and exploits should, INSTANTLY AND WITHOUT ANY DELAY, be made fully transparent and public. I understand the other side of the coin too, bla bla we need time to fix it bla bla. I totally understand it. Even then I believe the only truthful, honest way to deal with this is 100% transparency at all times. This includes when there are negative side effects too, such as open holes. I believe in transparency, not in secrecy. There cannot be any compromise here IMO.
"Many volunteer open source program maintainers and developers feel this is massively unfair to put them under such pressure when Google has billions to address the problem."
So what? Google reports issues. You can either fix them or not. Either way is a strategy. It is not Google's fault when software can be exploited, unless they wrote the code. In any case, the bug or flaw would still exist in the code EVEN IF GOOGLE DID NOT REPORT IT. So I don't understand this part. I totally understand the issue of Google being greedy, but this here is not solely about Google's greed. This is also about how a project deals with (real) issues (if they are not real, then you can ask Google why they send out so much spam).
That Google abuses AI to spam real human beings is evil and shabby. I am all for ending Google on this planet - it does so much evil. But either it is a bug, or it is not. I don't understand the ffmpeg devs' stance of "because it is Google, we want zero bug reports". That just makes no sense.
"The fundamental problem remains that the FFmpeg team lacks the financial and developer resources to address a flood of AI-created CVEs."
Well, that is more an issue of how to handle Google spamming people. Sue them in court so that they stop. But if it is a legit bug report, why is that a problem? Are ffmpeg devs concerned about the code quality being bad? If it is about money, then even though I think all of Google's assets should be seized and the CEOs who have done so much evil in the last 20 years put on trial, it really is not their responsibility to fix anything written by others. That's just not how software engineering works; it makes no sense. It seems people confuse ethics with responsibilities here. The GPL doesn't mandate that code fixes be done; it mandates that if you publish a derivative etc. of the code, that code has to be published under the same licence and made available to people. That's about it, give or take. It doesn't say corporations or anyone else HAS to fix something.
"On the other hand, security experts are certainly right in thinking that FFmpeg is a critical part of the Internet’s technology framework and that security issues do need to be made public responsibly and addressed."
I am all for that too, but even stricter - all security issues are to be made public instantly, without delay, fully and completely. I went to open source because I got tired of Microsoft. Why would I want to go back to evil? Not being transparent here is no valid excuse IMO.
"The reality is, however, that without more support from the trillion-dollar companies that profit from open source, many woefully underfunded, volunteer-driven critical open-source projects will no longer be maintained at all."
Wait - so it is Google's fault if projects die due to lack of funding? How does that explanation work?
You can choose another licence model. Many choose BSD/MIT. Others choose GPL. And so forth.
"For example, Wellnhofer has said he will no longer maintain libxml2 in December. Libxml2 is a critical library in all web browsers, web servers, LibreOffice and numerous Linux packages. We don’t need any more arguments; we need real support for critical open source programs before we have another major security breach."
Yes, that is a problem - the funding part. I completely agree. I still don't understand the "logic" of trying to force corporations to do so when they are not obliged. If you don't want corporations to use your code, specify that in the licence. The GPL does not do that. I am confused about this "debate" because it makes no real sense to me from an objective point of view. The only part that I can understand pisses off real humans is when Google uses AI as a pester-spam attack orgy. Hopefully a court agrees and splits up any company with more than 100 developers into smaller entities the moment they use AI to spam real human beings.
It's a special kind of irony to post AI slop complaining about someone else's AI slop that isn't actually AI slop, just devs whining about being expected to maintain their code instead of being able to extort the messengers into doing the work for them.
If they're not being paid, they're under no obligation to "maintain their code". If you don't like it, don't use ffmpeg.
It's not "whining" to refuse to do unpaid labor for the benefit of someone else - especially when the someone else is as well-resourced as Google.
“How dare ffmpeg be so arrogant! Don’t they know who we are? Fork ffmpeg and kill the project! I grant a budget of 30 million to crush this dissent! Open source projects must know who’s boss! I’ll stomp em like a union!”
…. overheard at a meeting of CEO and CTO at generic evil mega tech corp recently.
They obviously need to be reminded that the only reason Google has to care about FLOSS projects is when they can effectively use them to create an advertising panopticon under the company's complete control.
They probably want to drown you in CVEs to force deprecation on the world and everybody into their system like they do with everything else they touch.
Google is evil.
Just mark CVEs as bugs and get to them when you can. In this case, if Google doesn't like it, then so be it. It'll get fixed eventually. Don't like how long it takes? Pay someone to contribute back. Until then, hurry up and wait.
That’s how you get your open source software removed from distributions and eventually forked.
Forked by people who are quicker at fixing security vulnerabilities than the original maintainers?
Sure, for some definition of “vulnerability.” And only doing that, nothing more.
And that's a problem?
Please bro, please, fix our bugs bro, just this one bug bro, last one I swear, you and I will make big money, you are the best bro, I love you bro. -- big tech companies
Google might be aiming to replace ffmpeg as the world's best media processor. Remember how Jia Tan (under different names) flooded xz with work before stepping up as a maintainer.
Google, through YouTube and YouTube TV, already runs one of the most significant video processing lines of business in the world. If they had any interest in supplanting FFmpeg with their own software stack, they wouldn't need to muck around with CVEs to do so.
Not for themselves, but for everyone else for some reason.
Though it's more likely there's just some team tasked with shoving AI into vulnerability hunting.
FFmpeg should stop fixing security bugs reported by Google, MS, Amazon, Meta etc. and instead wait for security patches from them. If FFmpeg maintainers leave it exposed, those companies will rush to fix it, because they'd be screwed otherwise. Every single one of them is dependent on FFmpeg exactly as shown in https://xkcd.com/2347/
I understand the problem of corporations leeching off of the community here.
I still fail to see why "ffmpeg is not allowed to fix bugs reported by corporations" is a good strategy. To me that does not sound logical.
Because they are making more money in profit than some mid-sized American cities' economies do in a year while contributing nothing back. If they don't want massive security vulnerabilities in their services using FFmpeg, maybe they need to pony up about .1 seconds' worth of their quarterly earnings to the project either in cash or in manpower.
It's not FFmpeg's problem if someone uses a vulnerability to pwn YouTube, it's Google's problem.
Also, in the article, they mention that Google's using AI to look for bugs and report them, and one of them that it found was a problem in the code that handles the rendering of a few frames of a game from 1995. That sort of slop isn't helping anyone. It's throwing the signal-to-noise ratio of the bug filings way the hell off.
This is why you should release your open source project under a license that is free only for individuals, not for enterprises.
Enterprises must pay.
If it's not free for enterprises then it's not open source, according to the commonly accepted definition.
> Many in the FFmpeg community argue, with reason, that it is unreasonable for a trillion-dollar corporation like Google, which heavily relies on FFmpeg in its products, to shift the workload of fixing vulnerabilities to unpaid volunteers.
That's capitalism; they need to quit their whining or move to North Korea. /s The whole point is to maximize value to the shareholders, and the more work they can shove onto unpaid volunteers, the more money they can shove into stock buybacks or dividends.
The system is broken. IMHO, there ought to be a law mandating reasonable payments from multi-billion-dollar companies to open source software maintainers.
Not too fond of maintainers getting too uppity about this stuff. I get that it can be frustrating to receive bug report after bug report from people who are unwilling or unable to contribute to the code base, or at the very least to donate to the team.
But the way I see it, a bug report is a bug report, no matter how small or big the bug or the team, it should be addressed.
I don’t know, I’m not exactly a pillar of the FOSS community with weight behind my words.
When you already work 40+ hours a week and big companies suddenly start an AI snowblower that shoots a dozen extra hours of work every week at you without doing anything to balance that (like, for instance, also opening PRs with patches that fix the bugs), the relationship starts feeling like being an unpaid employee of their project.
What's the point of just showering these things with bug reports when the same tool (or a similar one) can also apparently fix the problem too?
The problem with security reports in general is security people are rampant self-promoters. (Linus once called them something worse.)
Imagine you're a humble volunteer OSS developer. If a security researcher finds a bug in your code they're going to make up a cute name for it, start a website with a logo, Google is going to give them a million dollar bounty, they're going to go to Defcon and get a prize and I assume go to some kind of secret security people orgy where everyone is dressed like they're in The Matrix.
Nobody is going to do any of this for you when you fix it.
Except that the only people publicizing this bug were the people running the ffmpeg Twitter account. Without them it would have been one of thousands of vulnerabilities reported with no fanfare, no logos, and no conference talks.
Doesn't really fit with your narrative of security researchers as shameless glory hounds, does it?
How do they know that next week it's not going to be one of those 10 page Project Zero blog posts? (Which like all Google engineer blog posts, usually end up mostly being about how smart the person who wrote the blog post is.)
Note FFmpeg and cURL have already had maintainers quit from burnout from too much attention from security researchers.
> it can be frustrating to receive bug report after bug report from people
As the article states, these are AI-generated bug reports. So it's a trillion-dollar company throwing AI slop over the wall and demanding a 90-day turn around from unpaid volunteers.
Do you have evidence of ai slop, or are you just spreading fud? The linked bug was acknowledged as real.
That is completely irrelevant. The gross part is that (if true) they are demanding that the bugs be fixed within a given time. Sounds like the epitome of entitlement to me, to say the least.
No one is demanding anything; the report itself comes with a 90-day grace period before public disclosure. If the issues are slop, then what exactly is your complaint?
google literally tells them it's an ai generated report
That is not the definition of slop.
if it's unwanted then it is
and the ffmpeg maintainers say it's not wanted
so it's slop
It’s a reproducible use-after-free in a codec that ships by default with most desktop and server distributions. It can be leveraged in an exploit chain to compromise a system.
I'm not a Google fan, but if the maintainers are unable to understand that, I welcome a fork.
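(For anyone who hasn't met the bug class: a use-after-free means memory is released on one code path while another path still holds a pointer into it. A deliberately contrived C sketch of the shape of the bug, nothing to do with the actual FFmpeg code:)

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *frame = malloc(32);   /* e.g. a decoder-owned buffer */
        if (!frame)
            return 1;
        strcpy(frame, "decoded frame");

        free(frame);                /* released on an error path... */

        /* ...but a later path still dereferences the stale pointer.
         * That is undefined behavior; with attacker-influenced heap
         * layout it is often the first link in an exploit chain. */
        printf("%c\n", frame[0]);
        return 0;
    }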
It’s not bug reports. It’s CVEs.
There is a convergence of very annoying trends happening: more and more reports are garbage found and written using AI, with an impact that is questionable at best; the way CVEs are published and classified is idiotic; and the platforms funding vulnerability research, like Google, are more and more hostile to projects, leaving very little time to actually work on fixes before publishing.
This is leading to more and more open source developers throwing in the towel.
The lowered lead times are because devs have an entitled attitude, expecting others to fix their code when bugs are discovered in it.
The 90-day period is a grace period for the dev, not a demand. If they don't want to fix it, then it goes public.
> The lowered lead times are because devs have an entitled attitude, expecting others to fix their code when bugs are discovered in it.
That’s how open source works.
It is super strange to say that people who devote their time and effort and then give away their work for free are somehow the entitled ones.
If this keeps up, there won't be anyone willing to maintain the software due to burn out.
In today's situation, free software is keeping many companies honest. Losing that kind of leverage would be a loss to the society overall.
And the public disclosure is going to hurt the users which could include defense, banks and other critical institutions.
CVEs aren't caused by bugs?
You could argue that, but I think a bug is the software failing to do what it was specified to do, or what it promised to do. If security wasn't promised, it's not a bug.
Which is exactly the case here. This CVE is for a hobby codec written to support digital preservation of some obscure video files from the ’90s that are used nowhere else. No security was promised.
They are not published in project bug trackers and are managed completely differently, so no, personally, I don't view CVEs as bug reports. Also, please don't distort what I say by omitting part of my comment, thank you.
Some of them are not even bugs in the traditional sense of the word, but expected behaviours which can lead to insecure side effects.
It seems like you might misunderstand what CVEs are? They're just identifiers.
This was a bug, which caused an exploitable security vulnerability. The bug was reported to ffmpeg, over their preferred method for being notified about vulnerabilities in the software they maintain. Once ffmpeg fixed the bug, a CVE number was issued for the purpose of tracking (e.g. which versions are vulnerable, which were never vulnerable, which have a fix).
Having a CVE identifier is important because we can't just talk about "the ffmpeg vulnerability" when there have been a dozen this year, each with different attack surfaces. But it really is just an arbitrary number, while the bug is the actual problem.
I'm not misunderstanding anything. A CVE involves a third party, and it's not just a number: it's a number and an evaluation of severity.
Things which are usually managed inside a project now have visibility outside of it. You can justify that however you want, e.g. by the need to have an identifier, but it doesn't fundamentally change how it impacts the dynamic.
Also, the discussion is not about a specific bug. It's a general discussion regarding how Google handles disclosure in the general case.