Why does it matter? I know the answer and this is a philosophical complaint, but the purpose of CVE is simply to make sure that people are talking about the same bug, not as a certification of importance or impact.
In this particular case, the poster is complaining that 3 CVEs were assigned for memory corruption vulnerabilities reachable only from the dnsmasq configuration file. I didn't read carefully, but the presumption that config file memory corruption bugs aren't vulnerabilities is problematic, because user input can find its way into configurations through templating; it depends on how innocuous the field triggering the bug is.
I've had to generate "bill of materials" for software I've shipped, and often certain end users will beat you over the head for "vulnerabilities" even if they're a low CVSS score or do not apply to your own code. I get the resistance to wanting CVEs for everything, as regardless of the initial intentions, there's a LOT of people/enterprises that just see "oh shit there's a CVE, the whole thing is garbage, we're not going to accept this/pay you/etc." Basically CVEs are often weaponized in a really counterproductive way.
Yup, and people get real stupid with it too. I’ve seen people request an update to fix ReDoS vulnerabilities in a Go package that uses only the stdlib, because sometime, somewhere, a bot flagged the regex and a CVE was opened with no consideration that it was nonsensical.
You explain that the CVE makes no sense, and you’re met with the response that “ok but did when”
> Basically CVEs are often weaponized in a really counterproductive way.
This is inevitable when you boil everything down to a number. When that number refers to a (potentially) costly bug, people shirk critical thinking and just go straight for zero-tolerance.
Not ideal but I'm not sure if there's a better way :/
Ironically, software without a long list of CVEs is often the real hot garbage.
Some of it is surprisingly well known by name too!
If you do everything yourself you will avoid a lot of CVEs... for the time being.
Or get big enough, join the CVE board, and just make the rules such that you can hide them forever.
I suspect the big problem here is thinly-stretched volunteer maintainers.
I am very sympathetic to the idea that all memory corruption bugs should be fixed systematically, whether or not they're exploitable. It works well for OpenBSD. And, well, I wouldn't have leaned into Rust so early if I wasn't a bit fanatic about fixing memory corruption bugs.
But at the same time, a lot of maintainers are stretched really thin. And many pieces of software choose to trust some inputs, especially inputs that require root access to edit. If you want to take user input and use it to generate config files in /etc, you should plan to do extremely robust sanitization. Or to make donations to thinly-stretched volunteer maintainers, perhaps.
CVEs, however, do get scored according to CVSS, and those scores are often extremely hostile and live in fantasy land.
CVEs also cannot be denied by projects, and are often used as an avenue of harassment towards open source projects.
I agree with the poster on that mailing list: this is not, nor should it be, a CVE. At no point can you edit those files without being root.
Is that not a problem with how people are using CVEs, scoring them and attaching value to them, rather than with whether a CVE should be assigned at all? A CVE is simply a number and some data on a vulnerability so that the community knows they are all talking about the same issue.
Even if you need to be root to edit the files, it is still a deviation from the design or reasonably expected behaviour of that interface, so it is still a bug and should still get a CVE. It should either be fixed or, failing that, documented as 'won't fix' and on the radar of anyone building an application. Someone building the next Plesk or cPanel or similar management system should at least know about filtering their input and not allowing it to reach the dangerous config file.
Re: Harassment - Can't the project release a statement saying that the bug writeup is low quality and unable to be reproduced? Anyone ignoring that without question and using it as evidence that the project is bad without proof is putting way too much value in CVEs, and the fault is their own.
> so is still a bug and should still get a CVE
It's a bug, sure. The V in CVE is for "vulnerability", which is why people treat CVEs as more than just bugs.
If every bug got a CVE, practically every commit would get one and they'd be even less useful than they are now.
At that point, why not just use commit hashes for CVEs and get rid of the system entirely if we're going to say every bug should get a CVE?
> Re: Harassment - Can't the project release a statement saying that the bug writeup is low quality and unable to be reproduced?
If your suggested response to a human DoS is "why can't the humans just do more work and write more difficult-to-word-correctly communication", then you're not understanding the problem.
If you are wasting time wording communication, then are you doing it wrong?
I imagine the response would be looking at it briefly, seeing if it looks dangerous or reproducible and getting an AI to return a templated "PoC or GTFO" response.
The mere existence of a CVE doesn't tell anyone whether a bug is valid or not, and security reports should be handled the same way regardless of whether one exists. For some odd reason people have attached value to having your name logged beside CVEs, despite it not telling you anything.
"human communication is easy, just have an AI say 'buzz off' and the conversation partner and other strangers will always respond respectfully, I don't know why so many people complain about lack of spoons or other social issues".
Thanks doctor, you just solved my anxiety.
I broadly agree that having templates does lower the amount of human effort and emotional labor required, but trust me, it's not a silver bullet, even hitting someone with a template takes spoons.
I don't really care that CVEs in theory are apparently entirely without meaning and created for nonexistent bugs, we're talking about the reality of how they're perceived and used.
Like, I'm saying "Issuing garbage such that 100 people have to read it and then figure out what to do is bad, we should instead have a higher bar for the initial issuing part so 1 or 2 people have to actually read it, and 100 people can save some time. We should call out issuing garbage as bad behavior to hopefully reduce it in the future".
You're apparently disagreeing with that and saying "But reading is easy, and the thing is meaningless anyway so this real harm that actually happens is totally fine. We should keep issuing as much garbage as we can, the numbers don't mean anything. It's better to make a pile of garbage and stress the entire system such that no one values or trusts it than to add any amount of vetting or criticism over creating garbage"
idk, I guess we're probably actually on the same page and you're just arguing for arguing's sake because you think you can be a pedant and be technically correct about CVEs. Tell me if I got a wrong read there and you have a more concrete point I'm missing?
But that's not what happened here. These are memory corruption bugs. Probably not meaningful ones, but in the subset of bugs that are generally considered vulnerabilities.
It's more complicated than that though. For security, the whole context has to be considered.
Like for example, look at the linked CVE-2025-12200, "NULL pointer dereference parsing config file"...
Please, explain a single dnsmasq setup where someone is constructing a config file from untrusted input such that this NPE is the difference between being secure and being DoS'd or otherwise compromised. If you can even conjure up a plausible hypothetical way this could happen, I'd love to hear it, because this just seems so impossible to me.
This seems firmly in the realm of issuing CVEs for "post quantum crypto may not be safe from unknown alien attacks"
CVE-2025-1312 bash and sudo privilege escalation
sudo may be exploited to obtain full root privilege when the shell receives attacker-controlled input
to reproduce: execute this shell script and authorize sudo when prompted
> Is that not a problem with how people are using CVEs, scoring them and attaching value to them
Well, yes, it is. But if that's the way the market is going to game the scoring/value system it's (mis)using, then it behooves a project that wants to be successful to play the same game and push back when the scoring unfairly penalizes it.
Basically dnsmasq doesn't really have much of a choice here. Someone found a config parser bug and tried to make a big deal out of it, so someone else (which has to be dnsmasq or a defender) needs to explain why it's not a big deal.
Why?
What negative thing happens to the dnsmasq project if they just don’t argue about whether or not it’s a big deal?
Some product decides not to use it. Someone loses a contract supporting it. Someone doesn't get a job because their work isn't favored anymore.
I think you're trying to invoke a frame where because dnsmasq is "open source" that it isn't subject to market forces or doesn't define value in a market-sensitive way. And... it is, and it does.
Free software hippies may be communists at heart but they still need to win on a capitalist battlefield.
It gets blurry at times though.
Imagine a router has a web/CLI interface for setting the DHCP server’s domain name. At some point the user’s data is forwarded to a process editing the root-owned file.
Hypothetically, if a vulnerability in the parsing of such a value from the config could be exploited by the end user, that would certainly matter.
And these things always seem to be one step away from bugs that allow arbitrary injection into the config file…
(I’m amazed at the hot messes exposed in HTTP and SMTP regarding differences in CR/CRLF/LF handling. Proxy servers and even “git” keep screwing this up…)
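The router scenario above can be sketched in a few lines. Everything here is hypothetical — the templating function, field names, and script path are illustrative, not taken from any real firmware:

```python
# Hypothetical sketch of the hazard described above: router firmware that
# templates a user-supplied field straight into a dnsmasq-style config.
def render_config(domain: str) -> str:
    # Naive templating: the user-controlled value is pasted in unvalidated.
    return f"domain={domain}\ndhcp-range=192.168.1.50,192.168.1.150\n"

# A benign value yields the expected two-line config...
benign = render_config("home.lan")

# ...but an embedded newline smuggles in an arbitrary extra directive,
# here the dhcp-script option mentioned downthread.
injected = render_config("home.lan\ndhcp-script=/tmp/evil.sh")
print(injected)
```

This is exactly the path by which "you have to be root to edit the config" stops being a meaningful boundary: the unprivileged web user never touches the file, but their input does.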
Why stop there? Imagine a situation where the user is allowed to patch the binary.
If someone can template in data, it's a lot easier to just set "dhcp-script=/arbitrary/code".
If the person templating isn't validating data, then it's already RCE to let someone template into this config file without careful validation.
... Also, this is a segfault, the chance anyone can get an RCE out of '*r = 0' for r being slightly out of bounds is close to nil, you'd need an actively malicious compiler.
While CVEs in theory are "just a number to coordinate with no real meaning", in practice a "Severity: High" CVE will trigger a bunch of work for people, so it's obviously not ideal to issue garbage ones.
Maybe we should issue a CVE for company vulnerability response processes that blindly take CVSS scoring as input without evaluating the vulnerability.
> blindly take CVSS scoring as input without evaluating the vulnerability.
Evaluating the CVSS score in your own context is the work I'm talking about.
It does no one any good to have a CVE that says "may lead to remote code execution", when in fact it cannot, and if the reporter did more work, then you wouldn't need hundreds of people to independently do that work to determine this is garbage.
People being able to collectively analyze a vulnerability instead of having to all do it independently is pretty much the whole reason for having a CVE database, so I'm glad we agree.
I mean, I'm fine with the complaint about vulnerabilities that ambiguously refer to possible code execution, but that is a problem that long predates CVE.
Like I said, it depends on the configuration field. But people saying "you have to be root to change this configuration" are missing the point.
If the argument is "CVSS is a complete joke", I think basically every serious practitioner in the field agrees with that.
Vulnerabilities can and often are chained together.
While the relevant configuration does require root to edit, that doesn’t mean that editing or inserting values to dnsmasq as an unprivileged user doesn’t exist as functionality in another application or system.
There are frivolous CVEs issued without any evidence of exploitability all the time. This particular example, however, isn’t that. These are pretty clearly qualified as CVEs.
The implied risk is a different story, but if you’re familiar with the industry you’ll quickly learn that there are people with far more imagination and capacity to exploit conditions you believe aren’t practically exploitable, particularly in highly available tools such as dnsmasq. You don’t make assumptions about that. You publish the CVE.
>that doesn’t mean that editing or inserting values to dnsmasq as an unprivileged user doesn’t exist as functionality in another application or system.
The developer typically defines its threat model. My threat model would not include another application inserting garbage values into my application's config, which is expected to be configured by a root (trusted) user.
The Windows threat model does not include malicious hardware with DMA tampering with kernel memory _except_ maybe under very specific configurations.
The developer is too stupid to define the threat model — they’re too busy writing vulnerabilities as they cobble together applications and libraries they barely understand.
How many wireless routers generate a config from user data plus a template? One’s lucky if they even do server-side validation ensuring CRLFs aren’t present in IP addresses and hostnames.
And if Unicode is involved … a suitcase of four leaf clovers won’t save you.
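A minimal sketch of the server-side validation being described — reject CR/LF outright, then check a conservative grammar. The function names and the exact hostname grammar are illustrative assumptions, not from any real firmware:

```python
import ipaddress
import re

# Conservative hostname grammar: dot-separated labels of letters, digits,
# and interior hyphens (roughly RFC 1123 style).
_LABEL = r"[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?"
_HOSTNAME_RE = re.compile(rf"{_LABEL}(\.{_LABEL})*")

def is_safe_hostname(value: str) -> bool:
    # Explicit CR/LF rejection guards against config-line injection even if
    # the grammar below is later loosened.
    if "\r" in value or "\n" in value:
        return False
    return _HOSTNAME_RE.fullmatch(value) is not None

def is_safe_ip(value: str) -> bool:
    if "\r" in value or "\n" in value:
        return False
    try:
        ipaddress.ip_address(value)  # accepts valid IPv4/IPv6 literals only
        return True
    except ValueError:
        return False
```

Even a check this small would block the newline-injection route into a templated config file, which is the point being made about missing validation.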
> The developer typically defines its threat model.
The people running the software define the threat model.
And CNAs issue CVEs because the developer isn’t the only one running their software, and it’s socially dangerous to allow that level of control of the narrative as it relates to security.
> The developer typically defines its threat model.
Is this the case? As we're seeing here, getting a CVE assigned does not require input or agreement from the developer. This isn't a bug bounty where the developer sets a scope and evaluates reports. It's a common database across all technology for assigning unique IDs to security risks.
The developer puts their software into the world, but how the software is used in the world defines what risks exist.
Why go through the trouble of exploring such a bug, when you have the ability to just replace the binary with something with a backdoor?
How do CVEs get issued? Where do I apply, who makes decisions, and what software is covered by them?
I know these questions are technically answered out there on the internet. But I looked into it a couple of years ago after finding a horrible bug in a popular npm package and the answers weren't clear to me.
Can a CVE be issued in retrospect?
> How do CVEs get issued? Where do I apply, who makes decisions
For most (but certainly not all) projects, you fill out a simple form [0]. I've done it before and it's fairly easy.
> and what software is covered by them?
All software is covered by someone, usually by the vendor themselves or MITRE.
> Can a CVE be issued in retrospect?
Absolutely, but it's fairly uncommon.
[0]: https://cveform.mitre.org/
Several issues seem to be getting mixed up.
The first issue being raised is that replacing the configuration file shouldn't count as a vulnerability. Usually I'd agree, but the fact that it causes memory corruption from user input warrants at least a low-severity report.
If we can't prove that a vulnerability is exploitable, we have to keep our assumptions minimal. Even if the memory corruption vuln is provably unexploitable today, a future code change could surface it as a plausible exploit primitive. It can also point to a section of code that may have been under-specified, and may serve as a signal to pay more attention to these sections for related bugs. Also, it doesn't seem right to assume that the config files will always be under a privileged directory.
The second issue being discussed in the mailing list is that it's LLM slop. While the reports do seem to be AI-generated, I haven't seen any response about the PoC failing, but maybe there is a significant problem where a lot of PoCs are fake.
So many assumptions. As Commander Data may have said today, "the most elementary and valuable statement in security, the beginning of wisdom, is 'I do not know.'"