In qBittorrent, the DownloadManager class has ignored every SSL certificate validation error that has ever happened, on every platform, for 14 years and 6 months since April 6 2010 with commit 9824d86.
Any time someone asks about certificate validation errors on StackOverflow, half of the answers show how to disable validation rather than fix the issue. The API calls should be explicit, e.g., youWillBeFiredForFacilitatingManInTheMiddleAttacks().
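For illustration, a minimal Python sketch of what those answers typically suggest (the host is a placeholder; both lines silently accept any certificate, including one presented by a man-in-the-middle):

```python
# The usual one-liners for making the error go away, i.e. disabling
# validation entirely rather than fixing the trust store.
import requests
import ssl

requests.get("https://internal.example.com/api", verify=False)        # per request
ssl._create_default_https_context = ssl._create_unverified_context    # whole process
```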
Or it should be easier to supply an expected certificate
Nearly all the time, the tool doesn't accept the certificate format; or it wants a chain instead of just the root because the other side doesn't supply a chain; or the CA bundle doesn't match the CA you used; or it doesn't use the CA system at all; or the fingerprint format is the wrong hash; or it wants a file instead of just a command-line fingerprint; or there isn't an "at least do TOFU" flag, so for testing you resort to "okay then just accept everything"... It's very rarely smooth sailing from the point of "okay, I'm ssh'd into the server, now what do I run here to give this tool something it can use to verify the connection?"
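As a rough sketch of the missing "at least do TOFU" option, here is what that could look like in Python, assuming you trust the very first connection and pin its fingerprint afterwards (host and port are placeholders):

```python
# Trust-on-first-use sketch: record the server certificate's SHA-256
# fingerprint once, then verify later connections against that pin
# instead of against a CA bundle.
import hashlib
import socket
import ssl

HOST, PORT = "internal.example.com", 443  # placeholder

# First contact: fetch the cert and note its fingerprint.
pem = ssl.get_server_certificate((HOST, PORT))
pin = hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()
print(f"pinned sha256: {pin}")

# Later connections: compare the presented cert against the pin.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False       # the pin replaces hostname/CA checks
ctx.verify_mode = ssl.CERT_NONE  # CA validation is replaced by the pin check below
with socket.create_connection((HOST, PORT)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        seen = hashlib.sha256(tls.getpeercert(binary_form=True)).hexdigest()
        if seen != pin:
            raise ssl.SSLError("certificate fingerprint changed, refusing to talk")
```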
Makes me think of how hard PGP is considered to be. Perhaps key distribution in any asynchronous cryptographic system is simply hard
Key distribution and revocation is pretty much the hard problem, at least in pragmatic terms. The details of cryptographic operations in code get a lot of scrutiny, and even then there are issues. But key management combines crypto complexity with distributed system complexity, and mixes that with human propensity for operational error.
> Makes me think of how hard PGP is considered to be
https://www.usenix.org/system/files/1401_08-12_mickens.pdf
Yeah, the fact that on Linux the certificate bundle can be in literally 10 different locations depending on the distro is pretty embarrassing too.
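For what it's worth, a given Python/OpenSSL build will at least tell you where it expects the bundle to live (the paths in the comment are just an example from a Debian-style system):

```python
import ssl

# Where this interpreter's OpenSSL looks for CA certificates by default.
print(ssl.get_default_verify_paths())
# e.g. DefaultVerifyPaths(cafile='/etc/ssl/certs/ca-certificates.crt',
#                         capath='/etc/ssl/certs', ...)

# requests ignores that and uses certifi's bundled file by default,
# which is yet another location to keep in mind.
import certifi  # third-party: pip install certifi
print(certifi.where())
```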
A large company I worked at a few years ago had an internal Python channel in Teams for coding support.
So many questions were about SSL issues; people would just ask how to disable errors/warnings from not having the correct certificate chain installed. It was insane how many "helpful" people would assist in turning them off instead of simply fixing the problem.
I started showing people the correct way to fix the issue and also created documentation to install the internal certificate server on our Ubuntu servers (I think they had it working on some of the RHEL machines). I was a contractor so I received an $80 bonus for my efforts.
> instead of simply fixing the problem.
No such thing when certificates are involved.
You basically have two options to do it "correctly":
1) Walk a path of broken glass and razorblades, on your naked knees, through the depths of hell, trying to get a complex set of ancient tools and policies that no one truly understands to work together. One misstep, the whole thing seizes up, and good luck debugging or fixing it across organizational boundaries.
2) Throw in the towel and expose the insides of your org, and everyone you come into contact with, on the public Internet, so you can leverage "Internet-standard" tools and practices.
One of the fundamental issues is that doing SSL properly breaks a basic engineering assumption of locality/isolation. That is, if I'm making a tool that talks to another tool (that may or may not be made by me too) directly, I should only care about the two tools and the link between them. Not the goddamn public Internet. Alas, setting up SSL means either entangling your tool with the corporate universe, or replicating a facsimile of the entire world locally, just so nothing in the stack starts whining about CAs, or that self-signed certs smell like poop, or something.
Like seriously. You make a dumb internal tool for yourself, with a web interface. You figure you want to do HTTPS because browsers whine. Apparently the correct way of doing this is... to buy a domain and get a cert from Let's Encrypt. WTF.
The whole philosophy around certificates is not designed to facilitate development. And guess what, I too sometimes get requests to give a tool the ability to skip some checks to make product testing possible, and it turns out that the whole communication stack already has flags for exactly that, for exactly that reason.
EDIT:
Imagine an arm broke off your coat hanger. You figure you'll take a metal bracket and two screws and fix it right there. But as you try, your power drill refuses to work and flashes some error about "insecure environment". You go on-line, and everyone tells you you need to go to the city council and register the drill and the coat hanger on a free Let's Construct build permit.
This is how dealing with SSL "correctly" feels.
> Python channel in Teams for coding support. So many questions were about SSL issues
I learned the other day that Python doesn't support AIA chasing natively.
https://bugs.python.org/issue18617
(Certs configured that way are technically incomplete, but because browsers and other software handle it, it's now a "python breaks for certificates that work for other pieces of software" situation)
The issue was migrated to github so more up-to-date discussion is in https://github.com/python/cpython/issues/62817
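A sketch of what that looks like from the Python side when a server omits its intermediate certificate (host and bundle path are placeholders):

```python
# A browser would chase the AIA URL in the leaf certificate and fetch the
# missing intermediate itself; Python/OpenSSL will not, so verification fails.
import requests

try:
    requests.get("https://misconfigured.example.com/")
except requests.exceptions.SSLError as err:
    print(err)  # typically mentions "unable to get local issuer certificate"

# Workaround: build a bundle that contains the missing intermediate and point
# the client at it explicitly, instead of disabling verification.
requests.get("https://misconfigured.example.com/",
             verify="/path/to/bundle-with-intermediates.pem")
```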
This discussion is just "do it because some browsers do it" without any reasoning why (or why not) you should do it. The Firefox approach is, I guess, the best compromise between user annoyance and developer annoyance, but it's still a compromise against proper TLS.
You'd be surprised how many companies with insanely valuable IP (especially in the startup space) do not use vaults/secret managers and store keys in plain text files. It's pretty astonishing tbh.
Even at large companies. Secrets management was not even being done across large swaths of FAANG companies until ~2020. I know some people that made a very lucrative career out of enabling secrets management at these orgs from 2010-2020.
> instead of simply fixing the problem.
Your view is probably skewed because you were the expert but I can assure you that fixing certificate issues is not a simple process for the vast majority of us, especially 15 years ago.
See the sibling comment by lucb1e for a description of what the typical experience is like when trying to solve such an issue.
The number of times I have to make this comment in code reviews, or undo the madness and just add the certificate to the script/container and enable validation, is insane.
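For anyone wondering what that fix looks like in practice, a minimal sketch (the paths and host are made up; the point is to ship the internal root CA with the code and keep verification on):

```python
import requests

# The internal root CA, baked into the image/repo next to the script.
INTERNAL_CA_BUNDLE = "/app/certs/internal-root-ca.pem"

session = requests.Session()
session.verify = INTERNAL_CA_BUNDLE  # validate against the internal CA, never verify=False
resp = session.get("https://internal-api.example.com/health")
resp.raise_for_status()
```

The same effect can usually be had without code changes by pointing REQUESTS_CA_BUNDLE (or SSL_CERT_FILE for the stdlib) at that bundle.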
Noteworthy that this wasn't a bug, but a "feature":
https://github.com/qbittorrent/qBittorrent/commit/9824d86a3c...
Is the motivation behind this known?
As the commit message was "Fix HTTPS protocol support in torrent/rss downloader" I suppose it was a quick fix to make things work, and as things worked no one ever took a look at it until now.
EDIT: The author of the PR[0] that fixed this (who is one of the top qBittorrent contributors according to GitHub[1]) also came to this conclusion:
> I presume that it was a quick'n'dirty way to get SSL going which persisted to this day. It's also possible that back in the day Qt4 (?) didn't support autoloading ca root certificates from the OS's store.
[0]: https://github.com/qbittorrent/qBittorrent/pull/21364 [1]: https://github.com/qbittorrent/qBittorrent/graphs/contributo...
To be fair, this function ignoreSslErrors is not from the authors of qBittorrent; it comes from the Qt framework. The idea behind the function is that you provide it a small whitelist of errors you wish to ignore; for example, in a dev build you may well want to ignore self-signed errors for your dev environment. The trouble is, you can call it with no arguments, and this means you will ignore every error. This may have been misunderstood by the qBittorrent maintainers, maybe not.
Much more likely is that someone knew they had implemented this temporary solution while they implemented OpenSSL in a project which previously never had SSL support - a major change with a lot of work involved - and every programmer knows that there is nothing more permanent than a temporary solution. Especially in this case. I can understand how such code would make it into the repo (I think you do too), and it's very easy for us to say we would then have immediately amended it in the next version to properly verify certs.
Having been in contact with the maintainers, I have to say I was disappointed in how seriously they took the issue. I don't want to say any more than that.
Source: author of the article
Temporary solutions can become more dangerous with time. Years ago, in one of our projects, someone wrote a small helper class, HTTPClient, to talk to one of our internal subsystems. The subsystem in the dev environment used self-signed certificates, so one of the devs just disabled SSL validation. Whether SSL errors were ignored or not was specified in a config. Later, someone messed up while editing the configs, and SSL validation got disabled in the live environment, too. No one noticed, because nobody writes tests to check if SSL validation is enabled. But that's only part of the story: this HTTPClient class was still only used to communicate with our internal subsystem on our own network.
The real problem came later when the next generation of developers saw this HTTPClient class and thought, "Hey, what a nifty little helper!", and soon they were using it to talk to pretty much everything, including financial systems. I was shocked when I discovered it. An inconsequential temporary workaround had turned into a huge security hole.
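The "nobody writes tests to check if SSL validation is enabled" part is at least cheap to address; a hedged sketch using pytest, with badssl.com's intentionally untrusted test host standing in for whatever endpoint makes sense (swap in the real HTTPClient from the anecdote):

```python
import pytest
import requests


def test_tls_validation_rejects_untrusted_certs():
    # self-signed.badssl.com deliberately serves a certificate no CA signed.
    # If a config change quietly disables validation, this test starts failing.
    with pytest.raises(requests.exceptions.SSLError):
        requests.get("https://self-signed.badssl.com/", timeout=10)
```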
How much notification did you give the developers before you disclosed? Did you enforce a timeline?
In total it was about 45 days from the initial conversation. I waited for a patched version to be released, because the next important milestone after that would be finished backports to older versions still in use, which is clearly going to take a long time as it is not being prioritized, so I wanted to inform users.
Initially I had said 90 days from the initial report, but it seemed like they were expanding the work to fill that time. I asked a number of times for them to make a security advisory and got no answer. Some discussions on the repo showed they were considering this as a theoretical issue. Now it's CVE-2024-51774, which got assigned within 48 hours of disclosing.
Warning shots across the bow in private are the polite and responsible way, but malicious actors don't typically extend such courtesies to their victims.
As such, compared to the alternative (bad actors having even more time to leverage and amplify the information asymmetry), a timely public disclosure is preferable, even with some unfortunate and unavoidable fallout. Typically security researchers are reasonable and want to do the right thing with regard to responsible disclosure.
On average, the "bigger party" inherently has more resources to respond compared to the reporter. This remains true even in open source software.
This is a pretty dangerous take. The reality is that the vast majority of security vulnerabilities in software are not actively exploited, because no one knows about them. Unless you have proof of active exploitation, you are much more likely to hurt users by publicly disclosing a 0-day than by responsibly disclosing it to the developer and giving them a reasonable amount of time to come out with a patch. Even if the developers are acting badly. Making a vulnerability public is putting a target on every user, not on the developer.
Another point against the “security” of open source software.
“Oh, it’ll have millions of eyes on it”… except no one looks.
As opposed to the “security” of closed source software? Where severe vulns are left in as long as they aren't publicized because it would take too much development time to justify fixing and the company doesn't make money fixing vulns - it makes money creating new features. And since it isn't a security-related product any lapses in security are an "Oopsy woopsy we screwed up" and everyone moves on with their lives?
Even companies that are supposed to get security right have constant screw ups that are only fixed when someone goes poking around where they probably shouldn't and thankfully happens to not be malicious.
I think your comment works as a reply to claiming closed source is more secure than open source - you try to bring them both to the same level.
I don't think it replies to what the user asks though. It seems reasonable to expect widely used open source software to be studied by many people. If that's true, it would be good to question why this wasn't caught by anyone. Ignoring all SSL errors is not something you need to be an expert to know is bad...
Except this was found eventually.
How many fifteen year old plus problems exist in closed source bases?
It would be incredible to learn how many have actually been affected by this issue in the past ~15 years... how important is SSL validation to those able to blend in with the crowd even on the sketchy-ish side of the internet?
So much "just works" because no one is paying attention. Of course now that the spotlight is on the issue it's all downhill from here for anyone who doesn't [auto-]update.
Probably zero? That thing was responsible for downloading Python from python.org. It's possible to exploit, but it would need to be pretty targeted and would already require some access to the target[1].
[1]: Because the only other way to exploit it would be noticed by everyone else. Like the python.org domain would need to be hijacked or something similar.
It makes a MITM attack possible, that doesn't require access to the target or the website it's contacting.
I'd still guess zero times though.
You don’t need to hijack the whole domain to poison DNS for a given client
No. The lack of certificate checking means anyone with access to the network in between; a rogue AP is sufficient.
The "some access to the target" bit could just being on the same unsecure wifi network as them, such as a coffee shop or library.
Still, I doubt anyone noticed this, and you'd also still need the victim to use qBittorrent and go through this flow that downloads python.
Zero seems pretty likely, yeah.
> It would be incredible to learn how many have actually been affected by this issue in the past ~15 years
IMHO close to 0 --- and for those who were affected, it would've likely been a targeted attack.
I had the exact same thought. Actually having the data seems almost impossible, it sure would be fun to see.
I think torrenting is one of those things that people understand is sketchy without it actually being sketchy. People also don't just leave it open forever; they're usually leeching or seeding, and then they close the program when it's done. You're probably more likely to get a virus from a pirated exe. (Save me the reply that explains you can use torrenting legally, I already know.)
This seems a little overblown, especially towards the later points.
> 1. Malicious Executable loader with stealth functionality
TL;DR the client downloads Python from python.org over HTTPS. This isn't great (especially since it's hard-coded to 3.12.4), but there's no obvious exploit path which doesn't involve both MITM and user interaction.
> 2. Browser Hijacking + Executable Download (Software Upgrade Context)
TL;DR the client downloads an RSS file over HTTPS and will conditionally prompt the user to open a URL found in that file. This is even lower risk than #1; even if you can MITM the user and get them to click "update", all you get to do with that is show the user a web page.
> 3. RSS Feeds (Arbitrary URL injection)
The researcher seems confused by the expected behavior of an RSS client.
> 4. Decompression library attack surface (0-click)
If you can find an exploit in zlib, there are much worse things you can do with that than attacking a torrent client. Decompressing input is assumed to be safe by default.
Those are minor if certificate errors are not ignored.
Since the original issue is that SSL errors are ignored, all those HTTPS downloads are effectively downgraded to HTTP in practice (a MITM doesn't need to defeat TLS to attack).
Or to put it another way: because SSL errors were ignored, all those HTTPS URLs gave a false sense of security, as reviewers would think them secure when they were not (due to the lack of SSL validation).
> If you can find an exploit in zlib, there are much worse things you can do with that than attacking a torrent client. Decompressing input is assumed to be safe by default.
Any (e.g. http) server supporting stream compression comes to mind.
Or, on the client side, any software that uses libpng to render PNG images (since that's using deflate on the inside). There's probably even more direct exploits against qbittorrent than MITMing the GeoIP database download.
> BUGFIX: Don't ignore SSL errors (sledgehammer999)
> https://www.qbittorrent.org/news
There should be a security notice IMO.
For compiling and running the latest version, https://github.com/userdocs/qbittorrent-nox-static is a nice helper script to build a static binary using Docker - I wanted to run 5.0.0 using libtorrent 1.2, and found the script by far the easiest way.
*inserts backdoor*
What's considered the most secure Bittorrent app?
qBittorrent after the most recent update...
The one in a restricted container.
(if you have MITM)
I've used deluge for longer than I've used almost any other program, I think. I've been pretty happy with their track record (from the perspective of... I've never seen a private tracker ban specific versions of deluge or anything to that effect. Which they've done for many other clients when big vulns drop for them.)
They failed to produce a build for Windows[1] for years after the official release of v2. They still don't have an official build for MacOS. They say to "check sticky topics on the forum"[2]. Saying that builds exist for those platforms on the homepage still seems a bit disingenuous.
1. https://dev.deluge-torrent.org/ticket/3201
2. https://deluge.readthedocs.io/en/latest/intro/01-install.htm...
it's shocking how low-quality these issues are in a client that is otherwise 1000x more performant than the other options listed in the article
Deluge performs just as well as qBittorrent. libtorrent-rasterbar (libtorrent.org) is what is performant.
I found the deluge (web?) ui becoming unusable after adding tens (or hundreds?) of thousands of torrents.
Not sure about the details, but a decade ago I used to seed all files below 100MB on many private trackers for seed bonus points, and yea, deluge ui (might have been the web ui, not sure) became very slow. :D
Same, deluge and qbittorrent would start to have issues with very large or lots of torrents. Ended up with transmission with the trguiNG UI and it's handled everything. It's not perfect and often slow but it hasn't crashed.
I ran into slowdowns in the remote control after just a few hundred. I switched to transmission shortly after. I had a great time using Deluge for probably like 6-7 years but Transmission is more performant and has more tooling support.
Should have written it in Rust… oh wait, different issue
Nothing in Rust prevents you from turning off SSL validation
Even with a proper certificate check, downloading and running a remote executable is by definition an RCE vulnerability.
Syncthing does this too (though presumably with a certificate check). Automatic unattended autoupdate is logically indistinguishable from a RAT/trojan.
> Even with a proper certificate check, downloading and running a remote executable is by definition an RCE vulnerability.
I have to disagree here, the vulnerability part is that it can be exploited by a third party. Auto-update itself isn’t really an RCE vulnerability because the party you get the software from has to be trusted anyways.
> Automatic unattended autoupdate is logically indistinguishable from a RAT/trojan.
What about: the same people do the automatic unattended autoupdate that you downloaded the original program from, or not?
> Even with a proper certificate check, downloading and running a remote executable is by definition an RCE vulnerability.
It literally is not.
Thank you. Uninstalled.
If you knew how much of a common thing this is you'd probably just uninstall everything.