Would not surprise me. Although it looks like F-Droid is hosted with Hetzner, I have encountered more than one failure to rotate certificates on account of Linode API changes, requiring manual update of the Python Linode API client to resolve.
> API changes

This should be an oxymoron. We've forgotten the point of an API as a profession, and it's downright shameful when something this important breaks needlessly. Would it have been that hard to keep supporting whatever API calls already existed as e.g. "v1" and put the new stuff in "v2"?
> Because those ephemeral LE certificates are such a great idea...

It is, if your objective is to centralize the web. If you make HTTPS mandatory, via scare tactics, only people with certificates will have websites. If you make ephemeral certificates mandatory by taking advantage of a monopoly, then only big SSL providers who can afford it will survive.
Then, when you have only two or three big SSL providers, it's way easier to shut someone off by denying them a certificate, and see their site vanish in mere weeks.
- We went from the vast majority of traffic being unencrypted, allowing any passive attacker (from nation state to script kiddie sitting in the coffee shop) to snoop and any active attacker to trivially tamper with it, to all but a vanishing minority of connections being strongly encrypted. The scare tactics used to sell VPNs in YouTube ads used to all be true, and no longer are, due to this.
- We went from TLS certificates being unaffordable to hobbyists to TLS certificates being not only free, but trivial to automatically obtain.
- We went from a CA ecosystem where only commercial options existed to one where the main CA is a nonprofit run by a foundation consisting mostly of strong proponents of Internet freedom.
- Even if you count ZeroSSL and Let's Encrypt as US-controlled, there is at least one free non-US alternative using the same protocol, i.e. suitable as a drop-in replacement (https://www.actalis.com/subscription).
- Plenty of other paid but affordable alternatives exist from countless countries, and the ecosystem seems to be getting better, not worse.
- While many other paths have been used to attempt to censor web sites, I haven't seen the certificate system used for this frequently (I'm sure there are individual court orders somewhere).
- If the US wanted to put its full weight behind getting a site off the Internet, it would have other levers that would be equally or more effective.
- Most Internet freedom advocates recognize that the migration to HTTPS was a really, really good thing.
> - We went from the vast majority of traffic being unencrypted, allowing any passive attacker (from nation state to script kiddie sitting in the coffee shop) to snoop and any active attacker to trivially tamper with it, to all but a vanishing minority of connections being strongly encrypted.
I still don't understand why this is so terrible.
Public wifi networks were certainly a real problem, but that's not where the majority of internet usage happens, and they could have been fixed on a different layer.
If you're on a traditional home internet connection, who exactly can tamper with your traffic? Your ISP can, and that's not great, but it doesn't strike me as blaring siren levels of terrible, either. Even with HTTPS, the companies behind my OS and web browser can still see everything I do, so in exchange for all this work we've removed maybe 1 out of 3 parties from the equation. And, personally, I trust the OS and browser vendors less than I trust my ISP!
Some progress is better than none, and it's still nice that my ISP can't tamper with my connection any more. Unfortunately, TLS also took away my ability to inspect my own traffic! This makes it more difficult for me to monitor what my OS and browser vendor are doing, and as I've said previously, I trust these parties comparatively less than my ISP.
> - We went from TLS certificates being unaffordable to hobbyists to TLS certificates being not only free, but trivial to automatically obtain.
Sure, but it's also trivial to just throw up a website on GitHub Pages, or forgo the website completely and use Instagram. TLS is "trivial" if you rely on the infrastructure of a specific external party.
Please help me understand what I'm missing because I find this really frustrating!
> If you're on a traditional home internet connection, who exactly can tamper with your traffic? Your ISP can, and that's not great, but it doesn't strike me as blaring siren levels of terrible, either.
This characterization is on the same level of sophistication as "the Internet is just a series of pipes". Every transit station has the opportunity to read or even tamper with the bytes on an unencrypted http connection. That's not just your ISP; it also includes the ISP's backbone provider, the backbone peering provider, your country's Internet Exchange, the Internet Exchange in the country of the website, the website's peering partner, and the website's hosting partner.
Some of those parties may be the same, and some parties I have not mentioned for brevity. To take just one example: there is only one direct link between Europe and South America. Most traffic between those continents goes via Amsterdam (NL) and New Jersey (US) to Barranquilla (CO), or via Sines (PT) to Fortaleza (BR). Or if the packets are feeling adventurous today, they might go through Italy, Singapore, California and Chile, with optional transit layovers in Saudi Arabia, Pakistan, Thailand or China.
Main point being: as a user, you have no control over the routing of your Internet traffic. Traffic also doesn't follow geographic rules; it follows peering cost. You can't even be sure that traffic between you and a website in your own country stays inside that country.
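To make the tampering point concrete, here's a toy Python sketch (my own, not from the thread) of the integrity half of what TLS provides. The key and messages are made up, and real TLS uses AEAD ciphers keyed during the handshake, but the idea is the same: any hop can flip bytes in a plaintext stream undetected, while a keyed MAC lets the endpoints catch it.

```python
import hashlib
import hmac

# Stand-in for a session key that, in TLS, would be negotiated in the handshake.
key = b"session-key-from-handshake"
msg = b"GET /prices HTTP/1.1"
tag = hmac.new(key, msg, hashlib.sha256).digest()

# The receiving end recomputes the MAC and compares in constant time.
print(hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest()))      # True

# A transit hop alters the request; the MAC no longer matches.
tampered = b"GET /prices-modified HTTP/1.1"
print(hmac.compare_digest(tag, hmac.new(key, tampered, hashlib.sha256).digest())) # False
```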
Thanks for this, I legitimately didn't realize every link in the chain has the ability to tamper with a connection. I'm still very concerned about the centralization of HTTPS, but I understand the need somewhat more.
> Some progress is better than none, and it's still nice that my ISP can't snoop on me any more. Unfortunately, TLS also took away my ability to inspect my own traffic! This makes it more difficult for me to monitor what my OS and browser vendor are doing, and as I've said previously, I trust these parties comparatively less than my ISP.
It might be more correct to say that certificate pinning made it so you can't inspect your own traffic. For sites with TLS but without certificate pinning, you can just as easily create your own root certificate and force the browser and OS to trust it by installing it at the OS level. This is (part of, at least) how tools like Fiddler and Charles Proxy allow you to inspect HTTPS traffic, the other part being a mitm proxy that replaces the server's actual cert with one the proxy generates. [0]
I've used mitm proxies, the problem is I don't know whether the software is behaving the same way under a proxy as it would normally.
Edit: To be clear, I'm not even suggesting the software would be doing this maliciously! Apps do all sorts of weird things when you try to proxy them, I know this because I do run most of my traffic through a proxy (for non-privacy reasons). Just for example, QUIC gets disabled.
If you're that worried about software being that devious, then you probably shouldn't be using that software at all, regardless of your ability to monitor its traffic.
> I still don't understand why this is so terrible.
While I don't really have a scary threat model, I don't love the idea that my ISP could have been watching my traffic. Maybe there's a world where my government has ordered ISPs to log specifics about traffic in order to trap dissidents doing things they don't like. Sure, I live in the US, which isn't an authoritarian nightmare (yet!). But maybe I live in Texas, and I'm searching for information about getting an abortion (illegal to have one there in most cases). Maybe I'm a schoolteacher in Florida, and I'm searching for information on critical race theory (a topic banned from instruction in Florida schools). I want that traffic to be private.
> Even with HTTPS, the companies behind my OS and web browser can still see everything I do, so in exchange for all this work we've removed maybe 1 out of 3 parties from the equation
I mean, that's on you for using a proprietary OS owned by a for-profit corporation. I get that desktop Linux or a de-Googled Android phone isn't for everyone, but those are options you have, if you're really worried.
And there are quite a few major browsers that are open source, so even if you can't inspect their traffic at runtime, if you really are serious about this, you can audit their source code and do your own builds. Yes, I would consider that unnecessarily paranoid, but the option is there for you, and you can even run these browsers on proprietary OSes. And honestly, I assume you use Chrome anyway; if that's the case then you clearly are not serious about this, given you're using a web browser made by an advertising company. (If you're using something else: awesome, and apologies for the bad assumption.)
> Unfortunately, TLS also took away my ability to inspect my own traffic! This makes it more difficult for me to monitor what my OS and browser vendor are doing
You can still do this, but it requires more work: setting up your own CA, installing it as trusted on your own devices, and then MITM'ing your traffic at the router in order to present a cert from your CA before forwarding the connection on to the real site.
Yes, this is out of reach for the average home internet user, but if you are the kind of person who is thinking about doing traffic monitoring on your home network, then you have the skills to do this. Meanwhile, the other 99% of us get better privacy online; I think that's a perfectly fine trade off.
> and as I've said previously, I trust [my OS and browser vendor] comparatively less than my ISP.
My ISP is Comcast; even if my OS and browser vendor was Microsoft or Apple, I think I'd probably still trust Comcast less. Fortunately my OS and browser vendors are not Microsoft or Apple, so I don't have to worry about that, but still.
> Sure, but it's also trivial to just throw up a website on GitHub Pages, or forgo the website completely and use Instagram. TLS is "trivial" if you rely on the infrastructure of a specific external party.
Running a website, even from your home internet connection, still means relying on the infrastructure of a third party. There's no way to get away from that.
And you still can run one without TLS. Browsers will still display unencrypted pages, though I'll admit that I'd be unsurprised if some future versions of major browsers stopped allowing that, or made it look scary to your average user.
> Please help me understand what I'm missing because I find this really frustrating!
I think what you are missing is that people actually do value connection encryption, for real reasons, not paranoid, tin-foil-hat reasons. And while you do present some valid downsides, we believe those downsides are overblown, or at the very least worth it in the trade off. It's fine for you to not agree with that trade off, which is a shame, but... that's life.
Yes, the trust model for TLS is broken, and the handful of attempts made to fix it (Moxie's "Convergence" project from 2011[1], for instance) haven't borne fruit.
However, in a security context "takes some effort" is far better than "takes no effort".
If CAA records (with DNSSEC) were used to reject certificates from the wrong issuer, we might even be able to get to "though very imperfect, takes a considerable amount of effort".
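For reference, a CAA policy is just a pair of DNS records naming which CAs may issue for a domain (RFC 8659); the domain and contact address below are illustrative:

```
example.com.  IN  CAA  0 issue "letsencrypt.org"
example.com.  IN  CAA  0 iodef "mailto:security@example.com"
```

Conforming CAs must check these records before issuing, though clients do not validate them at connection time, which is why DNSSEC matters for the stronger guarantee described above.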
DANE is supposed to be the solution to this problem but it's absolutely awful to use and will lead to even more fragile infrastructure than we currently have with TLS certs (and also ultimately depends on DNSSEC). HPKP was the non-DNS solution but it was removed because it suffered from an even worse form of fragility that could lock out domains for years.
Trust isn't that centralized. Thanks to Certificate Transparency, a CA cannot issue a certificate that will be accepted by Chrome or Firefox without the site owner being able to detect that.
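The monitoring side of Certificate Transparency is simple in principle: a site owner (or a monitor acting for them) knows which certs they requested, so any certificate appearing in the public logs for their domain that they don't recognize is evidence of mis-issuance. A toy Python sketch, with placeholder byte strings standing in for real DER certificates:

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    # SHA-256 fingerprint, the usual way certs are identified in CT tooling.
    return hashlib.sha256(cert_der).hexdigest()

# Certificates the site owner actually requested.
my_certs = {fingerprint(b"cert-i-requested-v1"), fingerprint(b"cert-i-requested-v2")}

# Certificates observed in the public logs for this domain.
log_entries = [b"cert-i-requested-v1", b"cert-issued-to-an-attacker"]

for entry in log_entries:
    if fingerprint(entry) not in my_certs:
        print("ALERT: unexpected certificate logged for my domain")
```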
Providing free certs that malicious web sites can also get isn't the stupid part; what was stupid was telling people for years that they would be safe and could trust a website just because there was a lock icon in the URL bar.
> Pavlov-trained everyone to dumb-click through 'this page is not secure' warnings
Do we have any statistics for how many people are actually doing this? Such warnings are so rare in my experience that, by default, I don't trust a site with no SSL or with expired or invalid certs, and I won't click through if I see that warning.
The only one of those things that is the fault of ACME is the first one, and are you really suggesting between that and your second bullet point that we should charge money for encryption so that people value it more? Encryption is free so people do it more. Paying money doesn't actually make people trustworthy. (Though you can totally charge people to prove they aren't malicious, but if you want to do that, why tie it to encryption? Encrypt regardless.)
> Paying money doesn't actually make people trustworthy.
This is fundamentally a naive understanding of both security and certificates. Paying money absolutely makes people trustworthy because it's prohibitive to do it at scale. You might have one paid malicious certificate but you can have thousands of free ones. The one malicious domain gets banned, the thousands are whack-a-mole forever.
Further, certificates used to indicate identity in more than a "the domain you are connected to" sense. There was a big PR campaign to wreck EV certs but EV certs generally were extremely secure. And even Google, who most loudly complained about EV, has reintroduced the Verified Mark Certificate (VMC) to replace it and use for new things like BIMI.
EV certs didn't actually afford the guarantees people hoped and expected. I could simply spend a few hundred dollars to register "Stripe, LLC" or "Microsoft, Inc." in my local jurisdiction, and then get an EV cert with that name on it.
Browser vendors removed the extra UI around EV certs not because certs in general are easier to get, but because the identity "guarantee" afforded by EV certs was fairly easy to spoof. EV certs still exist, and you can foolishly pay for one if you want. Free ACME-provided certs have nothing to do with this.
Again, this is an incredibly naive and uninformed take. Yes, you can spend hundreds of dollars to make one attempt at malicious activity, and yeah, that could also be fixed by tweaking EV requirements (more than likely by putting a country flag on the EV banner). One person managing to get a problematic EV cert is not a sign of a broken system; it's a sign of a working system in which only a few edge cases exist.
Cybercriminals work at scale. The opinion you shared here is why Google, Microsoft, and Amazon are so easy to use for cybercrime. It's incredibly easy to hide bad behavior in cheap, disposable attempts on free accounts.
Cost virtually eliminates abuse. Bad actors are fronting effort, and ideally small amounts of money, to effectively bet on a high return. If you make the cost to attempt high, it isn't worth it. Apart from some high-profile blogs demonstrating the risk, EV certs have to my knowledge never been used maliciously, and hiding them from the browser bar just buries useful, high-quality data about the trustworthiness of a site behind hidden menus.
It didn't take a "big PR campaign". EV is crap. It was crap when it was created, it's still crap now. It was tolerated because the commercial CAs wanted a new shinier product and in exchange we got the BRs and from there we got here. Don't lean on EV, it's superficial not load bearing.
I don't much care about BIMI. People keep trying to resuscitate that particular dead dog (email security), maybe one day they will succeed but I don't expect to be involved.
> We now provide a completely free certs for a malicious web-sites
Malicious websites never had a problem buying certs before. Sure, the bar is lower now, but I don't think it was a particularly meaningful bar before. Besides, the most common ways to get malicious websites shut down are to get their webhost to cut them off, or get a court order to seize their domain name. Getting their TLS cert revoked isn't common, and doesn't really do the job anyway.
> Degraded encryption value so much it's not even indicated anymore (remember the green bar for EV?)
No, we've degraded the identity verification afforded by EV and those former browser features. Remember that the promise of SSL/TLS was two things: 1) your traffic is private, 2) it verifies that the server you thought you were contacting is actually the one you reached.
I think (2) was always going to be difficult: either you make it hard and expensive to acquire TLS certificates, and (2) has value, or you don't, and it doesn't. I think pervasive encryption is way more important than site owner identity validation. And I don't think the value of an EV cert was even all that high back when browsers called them out in their UI. There are lots of examples of people trivially managing to get an EV cert from somewhere, with their locally-registered "Stripe, LLC" or whatever in the "validated" company name field of their cert.
> Pavlov-trained everyone to dumb-click through 'this page is not secure' warnings
Not sure what that has to do with this. That was more of a problem back when we didn't have Let's Encrypt, so lots of people were using self-signed certs, or let their certs expire and didn't fix it, or whatever. These days I expect certificate warnings are fairly rare, and so users might actually start paying attention to them again.
> SNI exists and even without it anything not on CDN is blocked very easily
ESNI also exists, and while not being available everywhere, it'll get there. But this is a bizarre complaint, as it's entirely trivial to block traffic when there's no TLS at all.
You don't need short expirations for that. CRLs/OCSP already provided a mechanism for certificates to be revoked before they expire.
However, short expirations severely limit the damage an attacker can do if they steal your private key.
And they avoid the situations where an organization simply forgets to renew a cert, because automating something so infrequent is genuinely difficult from an organizational standpoint. Employees leave, calendar reminders go missing, and yeah.
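The automation itself is mostly a one-line policy. A minimal Python sketch of the rule certbot documents (renew once less than a third of the cert's lifetime remains), with illustrative dates:

```python
from datetime import datetime, timedelta, timezone

def should_renew(not_before: datetime, not_after: datetime, now: datetime) -> bool:
    # Renew when less than one third of the certificate's lifetime remains,
    # i.e. at 30 days remaining for a 90-day cert.
    lifetime = not_after - not_before
    remaining = not_after - now
    return remaining < lifetime / 3

issued = datetime(2025, 1, 1, tzinfo=timezone.utc)
expires = issued + timedelta(days=90)

print(should_renew(issued, expires, issued + timedelta(days=45)))  # False: 45 days left
print(should_renew(issued, expires, issued + timedelta(days=65)))  # True: only 25 days left
```

Run daily from cron or a systemd timer, a check like this never "forgets", which is exactly the failure mode the comment above describes.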
CRLs are becoming bulky, and OCSP has privacy implications (telling the CA which websites you visit). On top of that, most browsers soft-fail if there's an outage and the request can't be made, rather than hard-failing and making the website inaccessible, which reduces OCSP's security and usefulness.
Short-lived certificates fix these issues from an end-user standpoint.
Yup. If your primary goal was fast, efficient certificate revocation, then having certs that still take 90 days to expire rather than 2 years is not the solution you'd come up with.
CRL/OCSP had limited effect in practice. A revoked certificate would, if I remember correctly, continue to be accepted by many if not most browsers (by market share).
Great explanation, and very apt for our time, when we regularly hear of people being banned/debanked/jailed for their political views in western countries.
The web PKI is certainly a potential point of failure in online communications, but fortunately there is almost no history of certificate revocation over content disputes. The biggest targets have been domain name registrars and CDNs.
Let's Encrypt has emphasized that it doesn't have the resources to investigate content disputes (currently, it's issuing nearly 10 million certificates per day, with no human intervention for any of them) and that having to adjudicate who's entitled to have a certificate by non-automated criteria would throw the model of free-of-charge certificates into doubt.
Meanwhile, encrypting web traffic makes it harder for governments to know who is reading or saying what. (Not always impossible, just harder.) Without it, we could have phenomena like keyword searches over Internet traffic in order to instantly determine who's searching for or otherwise reading or writing specific terms!
I'm very aware that it's still easy to observe who visits a particular site (based on SNI, as someone else mentioned in this thread). But there's a chicken-and-egg problem for protecting that information, and encrypting the actual site traffic is at least the chicken, while the egg may be coming with ECH.
Overall, transit encryption is very good for free expression online, and people who want to undermine or limit online speech are much more likely to be trying to undermine encryption than to promote it.
The biggest thing that Let's Encrypt in particular does to mitigate the risk of being unable to serve particular subscribers is to ensure that ACME is an open protocol that can be implemented by different CAs, and that it's very easy for subscribers to switch CAs at any time for any reason. The certificate system is more centralized than many people involved with it would prefer, but at least it's avoiding vendor lock-in.
They are. Unbreakable crypto for free, your clients don't have to exchange keys with you in person, and the only cost is running a script on a server that has to run automated code all day anyway.
Running basic automation to keep your certificates renewed is not difficult. I do this on my toy website and it’s been working without me touching it since before the pandemic.
DANE is not a good idea. It makes DNS the CA. DNS doesn't have any stringent security requirements in its design or operation, as CAs do. It depends on a problematic protocol (one that, among other things, limits the ability to deal with different operational and failure modes). And just because a nameserver provides a record doesn't mean an authorized domain owner wanted that record to authorize a secure transport. (Not to mention it would force people to choose TLDs based on political positions, rather than, say, a desire for an easy-to-remember name.)
It's weak security and introduces more problems than it solves. If we're going to get rid of CAs, we should consider a better solution, not a worse one.
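For context, DANE works by publishing a TLSA record in DNSSEC-signed DNS that pins the server's certificate or public key (RFC 6698). The digest below is a placeholder, not a real hash:

```
_443._tcp.example.com. IN TLSA 3 1 1 ( 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef )
```

The `3 1 1` fields mean DANE-EE (pin the end-entity cert directly, bypassing CAs), SubjectPublicKeyInfo, and SHA-256, which is why critics say it simply moves the trust anchor from CAs to the DNS hierarchy.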
No, it doesn't. It makes the registrar the CA. Which makes sense, they already authorize who owns which domain. They should absolutely do so cryptographically in some fashion.
What's weird is that the major registrars never even tried to enter the PKI business. It would have made sense. It would even have hastened the adoption of much needed TLS extensions.
- A CA validates requests, signs CSRs, publishes cert revocation, issues certificates and trust anchors.
- A registrar in DANE merely passes a DS record you created to the TLD, along with the promise that this record was created by the domain zone owner. It's basically the validation step. Nothing to do with establishing or securing data, key/record management, etc; they're a glorified FTP tool.
I'm in favor of registrars getting more involved (since they are the authority on who controls a domain), but only with a completely different design. I have suggested many times that CAs establish an API to communicate directly with Registrars to perform the validation step, as this would eliminate 95% of attacks on Web PKI without introducing any downsides. So far my pleas have fallen on deaf ears. And since the oligopoly of browser vendors continue their attacks on system reliability (via ridiculous expiration times) without any real pushback, I don't see it changing.
DNS is already the root of trust, certificates are domain-validated. We currently just depend on both DNS and an unelected group of random companies Google has decided jump through their arbitrary hoops often enough.
If your domain registrar or DNS provider is compromised in any way, all of the bullcrud the CA/B Forum demands of certificates is entirely meaningless; the bad actor can legitimately request certificates.
Multi-perspective validation helps prevent MITM; it doesn't provide any better security than your registrar's and DNS provider's security. It's just another layer to patch over the bad idea of CAs in the first place.
Please don't use epithets like this on HN, regardless of whether they're in the discussion here. The first words in the “In Comments” section of the guidelines are “Be Kind”. Please take care to do that in all comments on HN.
The guy asking if it's FOSS has never contributed to the project, I wouldn't read too much into it. Also, F-Droid uses Cloudflare and other non-FOSS stuff for mirrors, so I doubt they would care too much about their free monitoring not being FOSS.
F-Droid is on a free tier of an open core, but not fully FOSS product. A product that is publicly listed on the stock exchange, at that!
They're throwing stones in a glass house.
It's not like their purity gets them anywhere. Google is kicking open software (already hidden and scare walled) off their platform soon and nobody will have F-Droid without permission from Google.
It's better to be pragmatic and focus on the battles that matter. Like the one against Google.
F-Droid isn't throwing stones, that's someone entirely unaffiliated with the project. F-Droid's hosting and infrastructure makes use of many projects and products that are not FOSS.
Things are "scare walled" because things are scary. Just because something claims to be OSI-fucking-open-source doesn't mean anything.
It's better to be pragmatic. Agreed. The developer community needs to get its shit together if it wants to have carve-outs compared to the other ~99.9999% of users.
> Things are "scare walled" because things are scary.
It's 100% about power.
Imagine if websites were scare walled. If Microsoft had owned the Internet, that might have happened. Websites can do "scary" things, after all.
You can buy guns and knives and drive 60 miles per hour. You can give your banking information away. So many things scare the user less than Google does. Not to mention you have to go five settings deep to untick a setting to even enable it.
Again, I reiterate: It's 100% about power.
We should stop being afraid, we should stop trying to "protect the children", and we should stand up for our rights.
Ehh, I think it's different when they're offering an otherwise-paid service specifically for open-source projects, like Cloudflare with Project Alexandria.
Being entirely based on FOSS is the #1 overarching priority of the entire F-Droid project and always has been. The person who blatantly didn't even bother to check the organization they're talking to, and offered up unsolicited spam for a pointless service... is the one engaging in snobbery.
I didn't know FDroid used entirely FOSS services. Strong disagree that when offering up something free to help with a problem that's clearly being had one must first do a deep dive on the org. This conversation could be as simple as "hey we like FDroid and would happily help support it by providing our service for free to make sure this doesn't happen again", "No thanks, we only use FOSS'.
I think for a service that hosts only FOSS mobile apps, it's a pretty reasonable goal to also try to host and monitor the service using only open source tools. They may not be able to be able to do that 100%, but it's fair to ask.
It's funny, because I had the opposite reaction: I found it a little bit distasteful that, while I'm sure the guy had a genuine desire to help, he's also using F-Droid's issue tracker as a means of advertising his product, as presumably there are other people who might see that issue report and have need for it, and become a paying customer.
(To be fair, this isn't brazen spam; the "ad" is targeted and offered in the spirit of help, and if they offer perpetual free usage for open source products, he's not trying to extract money from F-Droid. But still.)
> Why would you even bother responding except to declare "I am better than thou?"
Maybe don't take the most uncharitable interpretation of something said by a random person on the internet who you don't know? Someone who at least has the bona fides of volunteering their time to help keep a valuable open source project online? Perhaps the F-Droid project does actually have a stated policy of using open source hosting/monitoring tools, and he was genuinely asking in case he missed something, and would actually like to use that service if it is indeed open source.
I think it's pretty weird to assume good faith with the Oh Dear guy's advertisement, but assume the unpaid volunteer helping run F-Droid is a holier-than-thou prat. But hey, of course, capitalism and hustle are the most important things!
> is it FOSS? I don't see any evidence that it is
Sounds like he thinks they said Oh Dear is FOSS but they just provide free accounts for Open Source projects.
Or is he talking about F-Droid?
Their CF mirror is still up.
https://cloudflare.f-droid.org/
I was just trying to learn how to use dfroidcl last night on termux and kept running into this error. I thought I was doing something wrong.
What is dfroidcl?
Nevermind. Found it.
https://fazlerabbi37.github.io/blogs/fdroidcl.html
Couldn't find it in DDG though.
Idk if it's related, but this week when I tried to use F-Droid on my phone it wouldn't resolve; I had to reinstall the app.
> Licaon_Kter (@licaon-kter, maintainer): Looks like while we have new certificates ( https://monitor.f-droid.org/services/tls-certs ) rotation failed. :(
They acknowledge rotation failed but it is still failing [1]. Perhaps something to do with how certs are rotated on their CDN?
[1] - https://www.ssllabs.com/ssltest/analyze.html?d=f%2ddroid.org...
Fixed now.
Perfect timing.
Imperfect timing
Because those ephemeral LE certificates are such a great idea...
It is, if your objective is to closely centralize the web. If you make https mandatory, via scare tactics, only people with certificates will have websites. If you make ephemeral certificates mandatory by taking advantage of a monopoly, then only big SSL providers who can afford it will survive.
Then, when you have only two or three big SSL providers, it's way easier to shut someone off by denying them a certificate, and see their site vanish in mere weeks.
Meanwhile, in the real world:
- We went from the vast majority of traffic being unencrypted, allowing any passive attacker (from nation state to script kiddie sitting in the coffee shop) to snoop and any active attacker to trivially tamper with it, to all but a vanishing minority of connections being strongly encrypted. The scare tactics used to sell VPNs in YouTube ads used to all be true, and no longer are, due to this.
- We went from TLS certificates being unaffordable to hobbyists to TLS certificates being not only free, but trivial to automatically obtain.
- We went from a CA ecosystem where only commercial alternatives exist to one where the main CA is a nonprofit run by a foundation consisting mostly of strong proponents of Internet freedom.
- Even if you count ZeroSSL and Let's Encrypt as US-controlled, there is at least one free non-US alternative using the same protocol, i.e. suitable as a drop-in replacement (https://www.actalis.com/subscription).
- Plenty of other paid but affordable alternatives exist from countless countries, and the ecosystem seems to be getting better, not worse.
- While many other paths have been used to attempt to censor web sites, I haven't seen the certificate system used for this frequently (I'm sure there are individual court orders somewhere).
- If the US wanted to put its full weight behind getting a site off the Internet, it would have other levers that would be equally or more effective.
- Most Internet freedom advocates recognize that the migration to HTTPS was a really, really good thing.
> - We went from the vast majority of traffic being unencrypted, allowing any passive attacker (from nation state to script kiddie sitting in the coffee shop) to snoop and any active attacker to trivially tamper with it, to all but a vanishing minority of connections being strongly encrypted.
I still don't understand why this is so terrible.
Public wifi networks were certainly a real problem, but that's not where the majority of internet usage happens, and they could have been fixed on a different layer.
If you're on a traditional home internet connection, who exactly can tamper with your traffic? Your ISP can, and that's not great, but it doesn't strike me as blaring siren levels of terrible, either. Even with HTTPS, the companies behind my OS and web browser can still see everything I do, so in exchange for all this work we've removed maybe 1 out of 3 parties from the equation. And, personally, I trust the OS and browser vendors less than I trust my ISP!
Some progress is better than none, and it's still nice that my ISP can't tamper with my connection any more. Unfortunately, TLS also took away my ability to inspect my own traffic! This makes it more difficult for me to monitor what my OS and browser vendor are doing, and as I've said previously, I trust these parties comparatively less than my ISP.
> - We went from TLS certificates being unaffordable to hobbyists to TLS certificates being not only free, but trivial to automatically obtain.
Sure, but it's also trivial to just throw up a website on Github Pages, or forgo the website completely and use Instagram. TLS is "trivial" if you rely on the infrastructure of a specific external party.
Please help me understand what I'm missing because I find this really frustrating!
> If you're on a traditional home internet connection, who exactly can tamper with your traffic? Your ISP can, and that's not great, but it doesn't strike me as blaring siren levels of terrible, either.
This characterization is on the same level of sophistication as "the Internet is just a series of pipes". Every transit station has the opportunity to read or even tamper with the bytes on an unencrypted http connection. That's not just your ISP, it also includes the ISP's backbone provider, the backbone peering provider, your country's Internet Exchange, the Internet Exchange in the country of the website, the website's peering partner, and the website's hosting partner.
Some of those parties may be the same, and some parties I have not mentioned for brevity. To take just one example: there is only one direct link between Europe and South America. Most traffic between those continents goes via Amsterdam (NL) and New Jersey (US) to Barranquilla (CO), or via Sines (PT) to Fortaleza (BR). Or if the packets are feeling adventurous today, they might go through Italy, Singapore, California and Chile, with optional transit layovers in Saudi Arabia, Pakistan, Thailand or China.
Main point being: as a user, you have no control over the routing of your Internet traffic. Traffic also doesn't follow geographic rules; it follows peering cost. You can't even be sure that traffic between you and a website in your country stays inside that country.
Thanks for this, I legitimately didn't realize every interlink in the entire chain has the ability to tamper with a connection. I'm still very concerned about the centralization of https but I understand the need somewhat more.
> Some progress is better than none, and it's still nice that my ISP can't snoop on me any more. Unfortunately, TLS also took away my ability to inspect my own traffic! This makes it more difficult for me to monitor what my OS and browser vendor are doing, and as I've said previously, I trust these parties comparatively less than my ISP.
It might be more correct to say that Certificate Pinning made it so you can't inspect your own traffic - for sites with TLS but without certificate pinning, you can just as easily create your own root certificate and force the browser and OS to trust the cert by installing it at the OS level. This is (part of, at least) how tools like Fiddler and Charles Proxy allow you to inspect HTTPS traffic, the other part being a mitm proxy that replaces the server's actual cert with one the mitm proxy generates [0].
[0]: https://www.charlesproxy.com/documentation/proxying/ssl-prox...
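The mechanism described above can be sketched as a toy model. This is not real X.509 validation (no signatures, expiry, or name checks); it only models trust anchors, with hypothetical names, to show why installing a proxy's root at the OS level makes its forged certs verify:

```python
def chains_to_trusted_root(cert, issued_by, trusted_roots):
    """Walk issuer links until we reach a trusted root, or give up.
    A cert here is just a (subject, issuer) tuple; real validation
    also checks signatures, validity periods, extensions, etc."""
    seen = set()
    current = cert
    while current not in seen:
        seen.add(current)
        subject, issuer = current
        if issuer in trusted_roots:
            return True
        if issuer not in issued_by:
            return False
        current = issued_by[issuer]
    return False

# Hypothetical chain data: a real intermediate, and a mitm proxy's root.
issued_by = {
    "Example Intermediate": ("Example Intermediate", "Example Root"),
    "Proxy CA": ("Proxy CA", "Proxy CA"),  # self-signed proxy root
}
site_cert = ("example.com", "Example Intermediate")
forged_cert = ("example.com", "Proxy CA")  # what the mitm proxy presents

# Without the proxy root installed, the forged cert is rejected:
assert chains_to_trusted_root(site_cert, issued_by, {"Example Root"})
assert not chains_to_trusted_root(forged_cert, issued_by, {"Example Root"})
# After "installing" the proxy's root, the forged cert verifies:
assert chains_to_trusted_root(forged_cert, issued_by, {"Example Root", "Proxy CA"})
```

Certificate pinning breaks this exactly because the app checks for one specific cert or key, not "any chain to any trusted root".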
I've used mitm proxies, the problem is I don't know whether the software is behaving the same way under a proxy as it would normally.
Edit: To be clear, I'm not even suggesting the software would be doing this maliciously! Apps do all sorts of weird things when you try to proxy them, I know this because I do run most of my traffic through a proxy (for non-privacy reasons). Just for example, QUIC gets disabled.
If you're that worried about software being that devious, then you probably shouldn't be using that software at all, regardless of your ability to monitor its traffic.
I guess I think it's relatively more paranoid to worry about the ISP being that devious.
> Your ISP can
And already has! ISPs used to inject ads into unencrypted connections: https://www.infoworld.com/article/2241797/code-injection-new...
I'm not defending the practice, but informing users they've reached a data cap is really not the same thing as injecting ads!
Alright what about telling you about other plans they offer. I'd consider that an ad: https://lukerodgers.ca/2023/12/09/optimum-isp-is-mitming-its...
ISPs in my country outright force you to watch an ad for 5s before you can open a webpage sometimes.
I mean, I trust Linux and Firefox, both being open source, more than my ISP.
> I still don't understand why this is so terrible.
While I don't really have a scary threat model, I don't love the idea that my ISP could have been watching my traffic. Maybe there's a world where my government has ordered ISPs to log specifics about traffic in order to trap dissidents doing things they don't like. But sure, I live in the US, which isn't an authoritarian nightmare (yet!). But maybe I live in Texas, and I'm searching for information about getting an abortion (illegal to have one there in most cases). Maybe I'm a schoolteacher in Florida, and I'm searching information on critical race theory (a topic banned from instruction in Florida schools). I want that traffic to be private.
> Even with HTTPS, the companies behind my OS and web browser can still see everything I do, so in exchange for all this work we've removed maybe 1 out of 3 parties from the equation
I mean, that's on you for using a proprietary OS owned by a for-profit corporation. I get that desktop Linux or a de-Googled Android phone isn't for everyone, but those are options you have, if you're really worried.
And there are quite a few major browsers that are open source, so even if you can't inspect their traffic at runtime, if you really are truly serious about this, you can audit their source code and do your own builds. Yes, I would consider that unnecessarily paranoid, but the option is there for you, and you can even run these browsers on proprietary OSes. And honestly, I assume you use Chrome anyway; if that's the case then you clearly are not serious about this if you're using a web browser made by an advertising company. (If you're using something else: awesome, and apologies for the bad assumption.)
> Unfortunate, TLS also took away my ability to inspect my own traffic! This makes it more difficult for me to monitor what my OS and browser vendor are doing
You can still do this, but it does require more work setting up your own CA and installing it as trusted in your own devices, and then MitM'ing your traffic at the router in order to present a cert from your CA before forwarding the connection on to the real site.
Yes, this is out of reach for the average home internet user, but if you are the kind of person who is thinking about doing traffic monitoring on your home network, then you have the skills to do this. Meanwhile, the other 99% of us get better privacy online; I think that's a perfectly fine trade off.
> and as I've said previously, I trust [my OS and browser vendor] comparatively less than my ISP.
My ISP is Comcast; even if my OS and browser vendor was Microsoft or Apple, I think I'd probably still trust Comcast less. Fortunately my OS and browser vendors are not Microsoft or Apple, so I don't have to worry about that, but still.
> Sure, but it's also trivial to just throw up a website on Github Pages, or forgo the website completely and use Instagram. TLS is "trivial" if you rely the infrastructure of a specific external party.
Running a website, even from your home internet connection, still means relying on the infrastructure of a third party. There's no way to get away from that.
And you still can run one without TLS. Browsers will still display unencrypted pages, though I'll admit that I'd be unsurprised if some future versions of major browsers stopped allowing that, or made it look scary to your average user.
> Please help me understand what I'm missing because I find this really frustrating!
I think what you are missing is that people actually do value connection encryption, for real reasons, not paranoid, tin-foil-hat reasons. And while you do present some valid downsides, we believe those downsides are overblown, or at the very least worth it in the trade off. It's fine for you to not agree with that trade off, which is a shame, but... that's life.
Have you checked the list of root certificates your browser accepts as good?
Do it and tell me you trust websites which have a green lock next to the url..
Yes, the trust model for TLS is broken and the handful of attempts made to fix it (Moxie's "Convergence" project from 2011[1], for instance) haven't borne fruit.
However, in a security context "takes some effort" is far better than "takes no effort".
If CAA records (with DNSSEC) were used to reject certificates from the wrong issuer, we might even be able to get to "though very imperfect, takes a considerable amount of effort".
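The CAA mechanism mentioned above works by climbing the DNS tree: the relevant record set is the one at the closest ancestor of the domain that has any CAA records at all. A simplified sketch (ignoring `issuewild`, the critical flag, CNAME chasing, and DNSSEC validation itself; the zone data is hypothetical):

```python
def caa_lookup(domain, records):
    """Find the relevant CAA record set, RFC 8659 style: the set at the
    closest ancestor (including the domain itself) that has CAA records."""
    labels = domain.rstrip(".").split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if records.get(candidate):
            return records[candidate]
    return []  # no CAA anywhere in the tree: any CA may issue

def issuance_allowed(domain, ca, records):
    relevant = caa_lookup(domain, records)
    if not relevant:
        return True
    return any(tag == "issue" and value == ca for tag, value in relevant)

# Hypothetical zone: only letsencrypt.org may issue for example.com.
zone = {"example.com": [("issue", "letsencrypt.org")]}
assert issuance_allowed("www.example.com", "letsencrypt.org", zone)
assert not issuance_allowed("www.example.com", "some-other-ca.example", zone)
assert issuance_allowed("unrelated.test", "some-other-ca.example", zone)
```

The catch, as the comment notes, is that without DNSSEC an attacker who can spoof DNS responses to the CA can also spoof the absence of CAA records.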
DANE is supposed to be the solution to this problem but it's absolutely awful to use and will lead to even more fragile infrastructure than we currently have with TLS certs (and also ultimately depends on DNSSEC). HPKP was the non-DNS solution but it was removed because it suffered from an even worse form of fragility that could lock out domains for years.
[1]: https://en.m.wikipedia.org/wiki/Convergence_(SSL)
With centralised trust, encryption is meaningless.
Trust isn't that centralized. Thanks to Certificate Transparency, a CA cannot issue a certificate that will be accepted by Chrome or Firefox without the site owner being able to detect that.
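The tamper-evidence Certificate Transparency provides comes from Merkle trees: every issued cert becomes a leaf in an append-only log, and the log's root hash changes if any entry is altered. A minimal sketch of the Merkle Tree Hash as defined in RFC 6962 (the `cert-*` entries are placeholders for real log entries):

```python
import hashlib

def _h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Merkle Tree Hash per RFC 6962, section 2.1."""
    if len(leaves) == 0:
        return _h(b"")
    if len(leaves) == 1:
        return _h(b"\x00" + leaves[0])  # leaf hash: 0x00 prefix
    # split at the largest power of two strictly less than len(leaves)
    k = 1
    while k * 2 < len(leaves):
        k *= 2
    # interior node: 0x01 prefix over the two subtree hashes
    return _h(b"\x01" + merkle_root(leaves[:k]) + merkle_root(leaves[k:]))

entries = [b"cert-A", b"cert-B", b"cert-C"]
root = merkle_root(entries)
assert root == merkle_root(list(entries))                    # deterministic
assert root != merkle_root([b"cert-A", b"cert-X", b"cert-C"])  # tamper-evident
```

Site owners (or monitors acting for them) watch the logs for certs bearing their names; a CA that misissues in secret can't produce a cert Chrome or Firefox will accept without also publishing it.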
Strong players in the web space are also trying to centralize DNS resolvers, using similar arguments.
Moar encryption, much secure.
Meanwhile, in the real world:
- We now provide completely free certs for malicious websites
- Degraded encryption value so much it's not even indicated anymore (remember the green bar for EV?)
- Pavlov-trained everyone to dumb-click through 'this page is not secure' warnings
- SNI exists and even without it anything not on CDN is blocked very easily
Providing free certs for malicious websites is not what's stupid; what was stupid was telling people for years that they were safe and could trust a website just because there was a lock icon in the URL bar.
> Pavlov-trained everyone to dumb-click through 'this page is not secure' warnings
Do we have any statistics for how many people are actually doing this? Such warnings are so rare in my experience that, by default, I don't trust a site that has no SSL/expired or invalid certs and won't click through if I see that warning.
The only one of those things that is the fault of ACME is the first one, and between that and your second bullet point, are you really suggesting that we should charge money for encryption so that people value it more? Encryption is free so people do it more. Paying money doesn't actually make people trustworthy. (Though you can totally charge people to prove they aren't malicious, but if you want to do that, why tie it to encryption? Encrypt regardless.)
> Paying money doesn't actually make people trustworthy.
This is fundamentally a naive understanding of both security and certificates. Paying money absolutely makes people trustworthy because it's prohibitive to do it at scale. You might have one paid malicious certificate but you can have thousands of free ones. The one malicious domain gets banned, the thousands are whack-a-mole forever.
Further, certificates used to indicate identity in more than a "the domain you are connected to" sense. There was a big PR campaign to wreck EV certs but EV certs generally were extremely secure. And even Google, who most loudly complained about EV, has reintroduced the Verified Mark Certificate (VMC) to replace it and use for new things like BIMI.
EV certs didn't actually afford the guarantees people hoped and expected. I could simply spend a few hundred dollars to register "Stripe, LLC" or "Microsoft, Inc." in my local jurisdiction, and then get an EV cert with that name on it.
Browser vendors removed the extra UI around EV certs not because certs in general are easier to get, but because the identity "guarantee" afforded to EV certs was fairly easy to spoof. EV certs still exist, and you can foolishly pay for one if you want. Free ACME-provided certs have nothing to do with this.
Again, this is an incredibly naive and uninformed take. Yes. You can spend hundreds of dollars to make one attempt at malicious activity, and yeah, that could also be fixed by tweaking EV requirements. (More than likely by putting a country flag on the EV banner.) One person as an example managing to get a problematic EV cert is not a sign of a broken system, it's a sign of a working system that only a few edge case examples exist.
Cybercriminals work at scale. The opinion you shared here is why Google, Microsoft, and Amazon are so easy to use for cybercrime. It's incredibly easy to hide bad behavior in cheap, disposable attempts on free accounts.
Cost virtually eliminates abuse. Bad actors are fronting effort and ideally small amounts of money to effectively bet on a high return. You make the cost to attempt high, it isn't worth it. Apart from some high profile blogs demonstrating the risk, EV certs have to my knowledge never been used maliciously, and hiding them from the browser bar just makes useful, high quality data about the trustworthiness of a site buried behind hidden menus.
It didn't take a "big PR campaign". EV is crap. It was crap when it was created, it's still crap now. It was tolerated because the commercial CAs wanted a new shinier product, and in exchange we got the BRs, and from there we got here. Don't lean on EV; it's superficial, not load-bearing.
I don't much care about BIMI. People keep trying to resuscitate that particular dead dog (email security), maybe one day they will succeed but I don't expect to be involved.
> We now provide completely free certs for malicious websites
Malicious websites never had a problem buying certs before. Sure, the bar is lower now, but I don't think it was a particularly meaningful bar before. Besides, the most common ways to get malicious websites shut down are to get their webhost to cut them off, or get a court order to seize their domain name. Getting their TLS cert revoked isn't common, and doesn't really do the job anyway.
> Degraded encryption value so much it's not even indicated anymore (remember the green bar for EV?)
No, we've degraded the identity verification afforded by EV and those former browser features. Remember that the promise of SSL/TLS was two things: 1) your traffic is private, 2) it verifies that the server you thought you were contacting is actually the one you reached.
I think (2) was always going to be difficult: either you make it hard and expensive to acquire TLS certificates, and (2) has value, or you don't, and it doesn't. I think pervasive encryption is way more important than site owner identity validation. And I don't think the value of an EV cert was even all that high back when browsers called them out in their UI. There are lots of examples of people trivially managing to get an EV cert from somewhere, with their locally-registered "Stripe, LLC" or whatever in the "validated" company name field of their cert.
> Pavlov-trained everyone to dumb-click through 'this page is not secure' warnings
Not sure what that has to do with this. That was more of a problem back when we didn't have Let's Encrypt, so lots of people were using self-signed certs, or let their certs expire and didn't fix it, or whatever. These days I expect certificate warnings are fairly rare, and so users might actually start paying attention to them again.
> SNI exists and even without it anything not on CDN is blocked very easily
ESNI also exists, and while not being available everywhere, it'll get there. But this is a bizarre complaint, as it's entirely trivial to block traffic when there's no TLS at all.
You don't need short expirations for that. CRLs/OCSP already provided a mechanism for certificates to be revoked before they expire.
However, short expirations severely limit the damage an attacker can do if they steal your private key.
And they avoid the situations where an organization simply forgets to renew a cert, because automating something so infrequent is genuinely difficult from an organizational standpoint. Employees leave, calendar reminders go missing, and yeah.
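The damage-limiting effect of short lifetimes can be put in rough numbers. A sketch, with hypothetical parameters, assuming the worst case of a key stolen right after issuance and clients that soft-fail revocation checks (as browsers mostly do):

```python
def worst_case_exposure_days(lifetime_days, clients_honor_revocation,
                             revocation_propagation_days=1):
    """Days a stolen private key stays usable, assuming the theft happens
    right after issuance. If clients soft-fail on revocation checks,
    only the cert's expiry ends the exposure."""
    if clients_honor_revocation:
        return revocation_propagation_days
    return lifetime_days

# With soft-failing clients, a 2-year cert leaves a stolen key usable for
# two years; a 90-day cert caps the same compromise at three months.
assert worst_case_exposure_days(730, clients_honor_revocation=False) == 730
assert worst_case_exposure_days(90, clients_honor_revocation=False) == 90
assert worst_case_exposure_days(90, clients_honor_revocation=True) == 1
```

Revocation, when it actually works, is still far faster; short lifetimes are the backstop for when it doesn't.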
CRLs are becoming bulky, and OCSP has some privacy implications (telling the CA which websites you visit). Plus, most browsers are set to soft-fail if there's an outage and the request can't be made, rather than hard-failing and making the website inaccessible, which reduces the security and usefulness of OCSP.
Short-lived certificates fix these issues from an end-user standpoint.
There are new solutions for CRL just last month:
https://hacks.mozilla.org/2025/08/crlite-fast-private-and-co...
Yup. If your primary goal was fast, efficient certificate revocation, then having certs that still take 90 days to expire rather than 2 years is not the solution you'd come up with.
CRLite updates every 12 hours.
If you have short validity times for certificates it also means you have shorter CRL.
This has existed for a while. It doesn’t address another major issue with revocation: user agents that aren’t browsers don’t implement it.
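The compactness CRLite gets comes from probabilistic set membership: a Bloom filter answers "is this serial revoked?" in a few bits per entry, with false positives but no false negatives. CRLite then layers filter cascades to cancel the false positives out; this toy single-level filter (hypothetical serials) just shows the core primitive:

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: compact set membership with possible false
    positives but no false negatives. CRLite builds on this primitive,
    adding filter cascades to eliminate the false positives."""
    def __init__(self, bits=1024, hashes=3):
        self.bits, self.hashes = bits, hashes
        self.array = bytearray(bits // 8)

    def _positions(self, item):
        # Derive k bit positions by hashing the item with a counter prefix.
        for i in range(self.hashes):
            d = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(d[:4], "big") % self.bits

    def add(self, item):
        for p in self._positions(item):
            self.array[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        return all(self.array[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

revoked = BloomFilter()
revoked.add(b"serial:04:7f:aa")
assert revoked.might_contain(b"serial:04:7f:aa")  # never a false negative
assert not BloomFilter().might_contain(b"serial:04:7f:aa")  # empty filter
```

The whole filter here is 128 bytes regardless of how the serials are spelled out, which is why a browser can ship the revocation state of the entire web PKI and refresh it every 12 hours.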
Is it possible that one day certificate expiration will be a thing of the past?
How would they get recurring revenue/donations then?
CRL/OCSP had limited effect in practice. A revoked certificate would, if I remember correctly, continue to be accepted by many if not most browsers (by market share).
It's because CRLs/OCSP suck, so now short expiration is rolling out.
CRL doesn’t suck; it’s just not an easy problem at web scale.
But seems like there is feasible solution: https://hacks.mozilla.org/2025/08/crlite-fast-private-and-co...
AKA they suck in this context
Was CRL designed with this context in mind?
Doesn’t matter. I think we’re fighting semantics.
Certificates are cached trust, and all the cache busting problem applies here.
> You don't need short expirations for that. CRLs/OCSP already provided a mechanism for certificates to be revoked before they expire.
But that's an explicit action that's much simpler to ask questions about.
Great explanation, and very apt for our time, when we regularly hear of people being banned/debanked/jailed for their political views in western countries.
The web PKI is certainly a potential point of failure in online communications, but fortunately there is almost no history of certificate revocation over content disputes. The biggest targets have been domain name registrars and CDNs.
Let's Encrypt has emphasized that it doesn't have the resources to investigate content disputes (currently, it's issuing nearly 10 million certificates per day, with no human intervention for any of them) and that having to adjudicate who's entitled to have a certificate by non-automated criteria would throw the model of free-of-charge certificates into doubt.
Meanwhile, encrypting web traffic makes it harder for governments to know who is reading or saying what. (Not always impossible, just harder.) Without it, we could have phenomena like keyword searches over Internet traffic in order to instantly determine who's searching for or otherwise reading or writing specific terms!
I'm very aware that it's still easy to observe who visits a particular site (based on SNI, as someone else mentioned in this thread). But there's a chicken-and-egg problem for protecting that information, and encrypting the actual site traffic is at least the chicken, while the egg may be coming with ECH.
Overall, transit encryption is very good for free expression online, and people who want to undermine or limit online speech are much more likely to be trying to undermine encryption than to promote it.
The biggest thing that Let's Encrypt in particular does to mitigate the risk of being unable to serve particular subscribers is to ensure that ACME is an open protocol that can be implemented by different CAs, and that it's very easy for subscribers to switch CAs at any time for any reason. The certificate system is more centralized than many people involved with it would prefer, but at least it's avoiding vendor lock-in.
Yeah WebPKI is basically perfectly designed to facilitate deplatforming.
In Caddy, it takes more effort to NOT have https.
Certainly with yearlong or multi year certs nobody of note ever forgot to renew them, right? https://hn.algolia.com/?dateEnd=1416268800&dateRange=custom&...
They are. Unbreakable crypto for free, your clients don't have to exchange keys with you in person, and the only cost is running a script on a server that has to run automated code all day anyway
This hot take is made better by the fact that the site in your profile has LE certs
Running basic automation to keep your certificates renewed is not difficult. I do this on my toy website and it’s been working without me touching it since before the pandemic.
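The automation in question usually boils down to a periodic check-and-renew loop: real-world tools like certbot and acme.sh handle the ACME renewal itself, and the decision step they run on each invocation looks roughly like this sketch (stdlib only; the timestamps and the 30-day window are illustrative, though 30 days is also certbot's default renewal threshold):

```python
import ssl
from datetime import datetime, timedelta, timezone

RENEW_WINDOW = timedelta(days=30)

def should_renew(not_after, now=None):
    """Decide renewal from a cert's notAfter timestamp, in the format the
    stdlib `ssl` module reports (e.g. 'Jun  1 12:00:00 2026 GMT')."""
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after),
                                     tz=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return expires - now < RENEW_WINDOW

# Hypothetical timestamps, evaluated against a fixed "now" for determinism:
now = datetime(2026, 1, 1, tzinfo=timezone.utc)
assert should_renew("Jan 20 00:00:00 2026 GMT", now=now)      # 19 days left
assert not should_renew("Jun  1 00:00:00 2026 GMT", now=now)  # months left
```

Run something like this daily from cron or a systemd timer, trigger the ACME client when it returns True, and the "organization forgot to renew" failure mode largely disappears.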
DANE would be better than LE, but weirdly the massive companies building browsers don't want to provide support. Spooky!
https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...
DANE is not a good idea. It makes DNS the CA. DNS doesn't have any stringent security requirements to its design or operation, as CAs do. It depends on a problematic protocol (that, among other things, limits the ability to deal with different operational and failure modes). And just because a nameserver provides a record, doesn't mean an authorized domain owner wanted that record to be an authorized secure transport. (not to mention, it would force people choose domain TLDs based on political positions, rather than, say, a desire for an easy to remember name)
It's weak security and introduces more problems than it solves. If we're going to get rid of CAs, we should consider a better solution, not a worse one.
No, it doesn't. It makes the registrar the CA. Which makes sense, they already authorize who owns which domain. They should absolutely do so cryptographically in some fashion.
What's weird is that the major registrars never even tried to enter the PKI business. It would have made sense. It would even have hastened the adoption of much needed TLS extensions.
The registrar is not equivalent to a CA in DANE.
- A CA validates requests, signs CSRs, publishes cert revocation, issues certificates and trust anchors.
- A registrar in DANE merely passes a DS record you created to the TLD, along with the promise that this record was created by the domain zone owner. It's basically the validation step. Nothing to do with establishing or securing data, key/record management, etc; they're a glorified FTP tool.
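For concreteness, a DANE TLSA record is just a hash the domain owner computes themselves and publishes in DNS. The common "3 1 1" form (DANE-EE, SubjectPublicKeyInfo selector, SHA-256 matching) can be sketched like this; the key bytes here are a hypothetical stand-in for a real DER-encoded SPKI:

```python
import hashlib

def tlsa_3_1_1(spki_der):
    """Build the RDATA for a '3 1 1' TLSA record (cert usage DANE-EE,
    selector SPKI, matching type SHA-256) from the DER-encoded
    SubjectPublicKeyInfo of the server's certificate."""
    return "3 1 1 " + hashlib.sha256(spki_der).hexdigest()

# Placeholder bytes standing in for a real DER-encoded public key:
fake_spki = b"\x30\x82\x01\x22fake-key-material"
record = tlsa_3_1_1(fake_spki)
assert record.startswith("3 1 1 ")
assert len(record.split()[3]) == 64  # 32-byte SHA-256 digest in hex
```

The result would be published at something like `_443._tcp.example.com. IN TLSA ...`, which illustrates the parent's point: the registrar and TLD never touch the key material, they just relay DS records that let the zone's own signatures be validated.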
I'm in favor of registrars getting more involved (since they are the authority on who controls a domain), but only with a completely different design. I have suggested many times that CAs establish an API to communicate directly with Registrars to perform the validation step, as this would eliminate 95% of attacks on Web PKI without introducing any downsides. So far my pleas have fallen on deaf ears. And since the oligopoly of browser vendors continue their attacks on system reliability (via ridiculous expiration times) without any real pushback, I don't see it changing.
You’re just moving your root of trust to DNS then?
With certificates we’re doing multi perspective validation.
DNS root of trust is silly. DNSSEC is not a proper root of trust
DNS is already the root of trust, certificates are domain-validated. We currently just depend on both DNS and an unelected group of random companies Google has decided jump through their arbitrary hoops often enough.
If your domain registrar or DNS provider is compromised in any way, all of the bullcrud the CA/B demands of certificates is entirely meaningless, the bad actor can legitimately request certificates.
This is what multi perspective helps with. It doesn’t mitigate every single attack.
But think about what DANE is for a second. If a bad actor is MITMing your connection to some endpoint, they certainly can MITM your DNS queries too.
Multi-perspective helps prevent MITM, it doesn't provide any better security than your domain and DNS provider's security. It's just another layer to patch over the bad idea of CAs in the first place.
[flagged]
> The prat
Please don't use epithets like this on HN, regardless of whether they're in the discussion here. The first words in the “In Comments” section of the guidelines are “Be Kind”. Please take care to do that in all comments on HN.
https://news.ycombinator.com/newsfaq.html
The guy asking if it's FOSS has never contributed to the project, I wouldn't read too much into it. Also, F-Droid uses Cloudflare and other non-FOSS stuff for mirrors, so I doubt they would care too much about their free monitoring not being FOSS.
Honestly, someone coming in unasked and trying to get you on the free plan of their own product is kind of rude.
F-Droid is on a free tier of an open core, but not fully FOSS product. A product that is publicly listed on the stock exchange, at that!
They're throwing stones in a glass house.
It's not like their purity gets them anywhere. Google is kicking open software (already hidden and scare walled) off their platform soon and nobody will have F-Droid without permission from Google.
It's better to be pragmatic and focus on the battles that matter. Like the one against Google.
F-Droid isn't throwing stones, that's someone entirely unaffiliated with the project. F-Droid's hosting and infrastructure makes use of many projects and products that are not FOSS.
Thank you for clarifying. I retract my prior statement in shame.
Whoever is saying this kind of stuff on behalf of a project they're unaffiliated with has some serious gall.
Things are "scare walled" because things are scary. Just because something claims to be OSI-fucking-open-source doesn't mean anything.
It's better to be pragmatic. Agreed. The developer community needs to get its shit together if it wants to have carve-outs compared to the other ~99.9999% of users.
> Things are "scare walled" because things are scary.
It's 100% about power.
Imagine if websites were scare walled. If Microsoft had owned the Internet, that might have happened. Websites can do "scary" things, after all.
You can buy guns and knives and drive 60 miles per hour. You can give your banking information away. So many things scare the user less than Google does. Not to mention you have to go five settings deep to untick a setting to even enable it.
Again, I reiterate: It's 100% about power.
We should stop being afraid, we should stop trying to "protect the children", and we should stand up for our rights.
ehh i think it's different when they're offering an otherwise paid service specifically for open-source projects. like Cloudflare with Project Alexandria
Being entirely based on FOSS is the #1 overarching priority of the entire F-Droid project and always has been. The person who blatantly didn't even bother to check the organization they're talking to, and offered up unsolicited spam for a pointless service... is the one engaging in snobbery.
I didn't know F-Droid used entirely FOSS services. Strong disagree that, when offering up something free to help with a problem that's clearly being had, one must first do a deep dive on the org. This conversation could be as simple as "hey we like F-Droid and would happily help support it by providing our service for free to make sure this doesn't happen again", "No thanks, we only use FOSS".
I think for a service that hosts only FOSS mobile apps, it's a pretty reasonable goal to also try to host and monitor the service using only open source tools. They may not be able to do that 100%, but it's fair to ask.
It's funny, because I had the opposite reaction: I found it a little bit distasteful that, while I'm sure the guy had a genuine desire to help, he's also using F-Droid's issue tracker as a means of advertising his product, as presumably there are other people who might see that issue report and have need for it, and become a paying customer.
(To be fair, this isn't brazen spam; the "ad" is targeted and offered in the spirit of help, and if they offer perpetual free usage for open source products, he's not trying to extract money from F-Droid. But still.)
> Why would you even bother responding except to declare "I am better than thou?"
Maybe don't take the most uncharitable interpretation of something said by a random person on the internet who you don't know? Someone who at least has the bona-fide of volunteering their time to help keep a valuable open source project online? Perhaps the F-Droid project does actually have a stated policy of using open source hosting/monitoring tools, and he was genuinely asking in case he missed something, and would actually like to use that service if it is indeed open source.
I think it's pretty weird to assume good faith with the Oh Dear guy's advertisement, but assume the unpaid volunteer helping run F-Droid is a holier-than-thou prat. But hey, of course, capitalism and hustle are the most important things!
> is it FOSS? I don't see any evidence that it is
Sounds like he thinks they said Oh Dear is FOSS, but they just provide free accounts for open source projects. Or is he talking about F-Droid?