I actually made the unusual decision, last year, to go IPv6-only on a small website I operate. The reason why is that AWS changed their billing policy for public IPv4 addresses. This is a tiny website that people only access on their cell phones, so I can accept an occasional inconvenience for the marginal cost savings.
I haven't heard of anyone else doing this, but I doubt I'm completely alone in trying to minimize hosting costs.
If you need IPv4 for some users (like me, whose ISP is IPv4-only), you can put Cloudflare or CloudFront in front and expose IPv4 too.
I did this as well for 3 domains that use the same EC2. There were random connectivity problems from my phone for a couple months, but it works perfectly for the past 6 months. I have no idea why the issues went away.
On the other hand: like you, I tried IPv6-only to reduce costs, but I couldn't load my website at all on my mobile.
I don't understand the title. What's the "but" there? IPv6 being irrelevant and moving off IPv4 being irrelevant seem like they go hand in hand, if moving off IPv4 is irrelevant then of course IPv6 is irrelevant!
Don't think about it too much, just remember that only large corporations are supposed to be peers on the internet, everyone else should be behind layers of NAT to ensure they can't become a problem.
The resistance to switching to IPv6, or the comfort with the IPv4-born address-exhaustion remedies, only helps an internet of consumers, not an internet of peers that create and share. If you are behind NAT or CG-NAT, you can only consume, not create. You can't host a server or expose a port. You are at the mercy of the big fish.
It is the ISPs that pretty much killed IPv6 with their mishandled transition.
Where I live, I can choose one out of one broadband provider available in the area. With this provider, I can either have a public IPv4 address (or several) with their CPE in bridge mode, or DS-Lite: IPv4 behind CGNAT without PCP and a single /64 for IPv6 (i.e. no address space for subnets, no prefix delegation), AND I have to use their router with the limited settings they allow.
With offers like these, is it any wonder that I stick with IPv4?
Are you sure about this? It's been in the RFCs since the late '90s that ISPs should delegate larger prefixes to customers. I don't know a single US ISP that doesn't allow at least a /56.
IPv6 is pointless and still a security risk but I’m guessing you’re misconfiguring something.
Yup, Liberty Global (also known as UPC) in Europe.
Assigning only a /64 and no DHCPv6-PD. There's not much to misconfigure, since with IPv6 you have to use their router and they push the config.
And since you have only a /64, you cannot put another router behind theirs.
99.99% of people who create and share things via the internet do so via centralized social media providers, and that would continue to be true if the whole world were magically IPv6-only.
I think it’d be nice to self-host things too, but it’s inaccurate and even a bit insulting to claim that the millions of people creating content on the internet today don’t exist.
> I think it’d be nice to self-host things too, but it’s inaccurate and even a bit insulting to claim that the millions of people creating content on the internet today don’t exist.
It's not just about self-hosting, but peer-to-peer clients as well.
When Skype originally came out it was P2P, but because of NAT they created (ran?) "super-nodes" that could do things like STUN/TURN/ICE. Wouldn't it be nice to be able to, e.g., communicate with folks without a central authoritative server that could be served warrants by various regimes?
And there are different kinds of big fish. You may be in a bad neighborhood, sharing an IP with misbehaving actors in the digital or real world. You may get blocked, banned, or snooped on because there is or was a target, an attacker, or someone with bad digital hygiene at that address.
My ISP is IPv4 only and I host plenty of shit and punch plenty of holes. That’s a function of my firewall not how many bits are in my IP address.
That's only true if you aren't behind CG-NAT. If you are, your firewall can port forward all it wants but it won't matter, the ISP would have to also port forward to you.
Even in this situation, your ISP can port forward to you.
While not universal, some ISPs support PCP, where you can ask for a port mapping to your CGNAT-ed IP and port. They might or might not honor the external port (if it is taken, they obviously cannot), but you will get some hole punched.
> your ISP can port forward to you
But will they? Domestic ISPs are pretty hesitant to offer anything of the sort.
Some do. But when they don't, it is not a fault of CGNAT, which does provide the capability, but of the specific ISP that's not willing to use it.
You can’t punch any holes through carrier-grade NAT (CGNAT).
You can, if your ISP cooperates, using PCP.
Frankly, you lost me at "if your ISP cooperates".
It is a function of the CGNAT at the ISP side. They need to have that enabled. Some do.
Did you miss the part about CG-NAT? Once your ISP runs out of their IPv4 addresses and puts you behind a CG-NAT, you can punch all the holes you like; nothing is going to get to you.
At least not without doing fancy stuff like using an externally-hosted VPN to shuttle connections to you.
People seem to have misconceptions about CGNAT.
Of course you can punch holes there. CGNATs can be asked for port forwarding using PCP, unless your ISP disabled that.
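For the curious, a PCP MAP request is just a small UDP datagram sent to the PCP server (typically the CGNAT itself) on port 5351. Here's a minimal sketch of building one per RFC 6887; the client address and ports are made-up illustrative values:

```python
import os
import socket
import struct

PCP_VERSION = 2
OPCODE_MAP = 1
PROTO_TCP = 6

def v4mapped(addr: str) -> bytes:
    """IPv4 address in the 16-byte IPv4-mapped IPv6 form PCP requires."""
    return b"\x00" * 10 + b"\xff\xff" + socket.inet_aton(addr)

def pcp_map_request(client_ip: str, internal_port: int,
                    suggested_external_port: int = 0,
                    lifetime: int = 3600) -> bytes:
    # Common request header: version, R-bit|opcode, reserved, lifetime, client IP
    header = struct.pack("!BBHI", PCP_VERSION, OPCODE_MAP, 0, lifetime)
    header += v4mapped(client_ip)
    # MAP opcode payload: nonce, protocol, reserved, ports, suggested external IP
    body = os.urandom(12)                        # random mapping nonce
    body += struct.pack("!B3xHH", PROTO_TCP, internal_port,
                        suggested_external_port)
    body += v4mapped("0.0.0.0")                  # no preferred external address
    return header + body

# Ask the CGNAT to map TCP 8080 for a client in the 100.64/10 shared range.
req = pcp_map_request("100.64.0.42", 8080)
```

The datagram would then go to the PCP server via `sock.sendto(req, (gateway, 5351))`; the response echoes back the external IP and port actually assigned.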
I've yet to see a single ISP (I live in the US) that even allows customers to host services. If you look in the TOS for services like Comcast, AT&T, T-Mobile, etc., you'll see a part about hosting services being forbidden. And that's even for normal IPv4 addresses that aren't behind CG-NAT. Now, they probably don't look too hard unless you give them reason (I hosted various things over a Comcast connection for a decade), but the rule is in there.
Perhaps it's different for a mom & pop ISP, but I don't see the big ones configuring anything that makes it easier to do what they already don't want you doing anyway. They see the inability to forward ports as a feature, not a bug.
I'm not in the US but in the EU. Here, T-Mobile or Orange do not have a problem with incoming traffic, and they know that people have security cameras, doorbells, or NAS devices in their homes that they want to access from outside.
So even if you expose your Home Assistant web UI to the wider web, no ISP is going to have a problem with that or interpret it as hosting services. What they really want is for you not to run bandwidth-intensive services on a consumer connection, which is going to be overbooked somewhere in their infra, causing service degradation for other users.
And for example Orange does provide PCP for their CGNAT.
The GP has both versions, not just CGNAT (which would have made their comment less nonsensical):
> If you are behind NAT or CG-NAT
> My ISP is IPv4 only and I host plenty of shit and punch plenty of holes. That’s a function of my firewall not how many bits are in my IP address.
Not wrong, but if you want multiple servers of the same service, you're now doing custom ports (myhost:port1, myhost:port2, etc) which isn't the end of the world, but is kind of sucky.
And if we're not talking just about servers running services, but clients that want to do peer-to-peer stuff, you also have to use things like STUN/TURN/ICE which is more infrastructure that is needed (as opposed to 'just' hole punching since your system already knows its IP(v6) address).
Given the prevalence of these technologies (kludges?) they've kind of been normalized so we think they're "fine".
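To make the extra-infrastructure point concrete: with NAT, a client can't even learn its own public address locally; it has to ask an outside STUN server. The RFC 5389 Binding Request that starts that dance is only 20 bytes. A minimal sketch (the server address in the comment is illustrative):

```python
import os
import struct

STUN_BINDING_REQUEST = 0x0001
STUN_MAGIC_COOKIE = 0x2112A442

def stun_binding_request() -> bytes:
    """Build a 20-byte STUN Binding Request with no attributes."""
    txn_id = os.urandom(12)  # random 96-bit transaction ID
    # Header: message type, message length (0, no attributes), magic cookie
    return struct.pack("!HHI", STUN_BINDING_REQUEST, 0,
                       STUN_MAGIC_COOKIE) + txn_id

# Would be sent over UDP, e.g.:
#   sock.sendto(stun_binding_request(), ("stun.example.org", 3478))
# The XOR-MAPPED-ADDRESS attribute in the response reveals the public
# address:port your NAT assigned -- information an IPv6 host already has.
```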
IPv6 and Python 3 are case studies in How to Not Upgrade Something.
They basically created entirely different products that provided a marginal immediate benefit to the users and then said "upgrade whenever you get around to it". They are both now in the 2nd decade of their upgrade cycle.
PowerPC->Intel, Xbox/PlayStation emulation, x86 32-bit->64-bit, and Java are all technologies that had successful upgrade strategies that were centered around replacing the original product rather than indefinitely providing an alternative.
Felt like the article went off into lala land towards the end:
"The last couple of decades have seen us stripping out network-centric functionality and replacing this with an undistinguished commodity packet transport medium. It's fast and cheap, but it's up to applications to overlay this common basic service with its own requirements." The result is networks become "simple dumb pipes!"
Given that, Huston wonders if it's time to revisit the definition of the internet as networks that use a common shared transmission fabric, a common suite of protocols and a common protocol address pool.
Rather, he posits "Is today's network more like 'a disparate collection of services that share common referential mechanisms using a common namespace?'"
Lots of consumer router/gateways out there are old and don't have IPv6 turned on. If you turn off IPv4 for your services, those folks just won't be able to connect and they won't know why, they'll just move on. Any who want to complain won't even know who to complain to, and they won't be able to fix it themselves. That's bad. And yes, you could put a CDN in front of everything, but won't that wipe out some/all of the savings (assuming you're with a cloud provider that's charging for IPv4)? This basically explains why I put the brakes on our IPv6 migrations. From our clients' perspective it would have been an unforced error.
I mean, it's an opinion.
The argument is somewhat thin: "CDNs use DNS, so it doesn't matter what the IP is."
I mean yes, that's true, but it should have always been true. Dishing out raw IPs is bad anyway; it limits your flexibility (yes, yes, anycast exists, but if you're big enough to set up anycast, I bet you're using IPv6 internally already).
IPv6 is here, and will slowly grow as time goes on. There will be growing pains, but it's plain cheaper to run at any kind of scale (especially now that AWS is charging for public IPv4 addresses).
If you are hosting thousands of servers, and you haven't bought into the batshit K8s networking scheme, then IPv6 becomes really rather practical, especially if you are giving out unique addresses to containers: 10.0.0.0/8 runs out pretty quick.
> if you're big enough to setup anycast
Incidentally, if you want to play with anycast on the public Internet, BuyVM[1] will let you do that on $10.50/mo (3×$3.50/mo very resource-limited VPSes). Catching those VPSes when they’re in stock is something of an ordeal, though.
[1] https://buyvm.net/anycast-vps/
> 10.0.0.0/8 runs out pretty quick
16,777,216 containers, wow.
Except it's not 16 million containers. You need addressing for every link in the network. Want 10 racks of 20 servers with redundant switching in each, plus redundant spines in the network rack? That's 60 physical links, each of which needs two addresses, and that's just the fabric underlay network. You haven't created the actual overlay networks yet, which you need multiples of (front end, back end, database, storage...).
Considering that 200 servers is a drop in the bucket at some scales, you can see why data centres are moving to v6 only.
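A back-of-the-envelope sketch of that underlay arithmetic, assuming a simple leaf-spine fabric with /31 point-to-point links (the topology parameters are illustrative, not the commenter's exact build-out):

```python
# Illustrative leaf-spine underlay; every point-to-point fabric link
# consumes a /31 (2 addresses, per RFC 3021).
racks = 10
tors_per_rack = 2        # redundant top-of-rack switches
spines = 2               # redundant spines in the network rack
servers_per_rack = 20

# Every ToR uplinks to every spine.
fabric_links = racks * tors_per_rack * spines
underlay_addrs = fabric_links * 2

# Server-facing links come on top of that, before a single overlay
# network (front end, back end, database, storage...) exists.
server_links = racks * servers_per_rack * tors_per_rack  # dual-homed servers

print(fabric_links, underlay_addrs, server_links)  # 40 80 400
```

Tweak the fan-out and the link count climbs fast; the point is that the address budget is consumed by plumbing long before any container sees an IP.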
So, if you have a sane IPv4 network, where everything is DHCP and nothing apart from a few key things is statically assigned, then yeah, _technically_ you can have 16 million addresses all at once.
But subnets need to be located next to each other physically, otherwise performance suffers. Subnets have affinity.
And once you have subnets, you start losing packing efficiency.
For example, in the batshit world of K8s, you give each node its own /24 to dish out. Not only can that limit the number of containers you can host, it is also really inefficient (eating 256k addresses across a large cluster).
Moreover, it also means that you need to reuse addresses. In a large cluster of, say, 1000 nodes, each hosting 40 containers, starting/stopping anything up to 30 containers a second isn't unreasonable. It's not inconceivable that you'll end up trying to connect to a stale address (either because it's not propagated yet, or your brand of service discovery isn't that fast). This can cause hilarious transitory errors.
But if you could assign an IP per container, and have enough space to not reuse that address for at least a few hours, then that goes away. So instead of getting weird fuzzing errors (or misc 404/401s), you get a connection timed out.
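The waste in the /24-per-node scheme is easy to quantify, using the cluster numbers from the example above:

```python
nodes = 1000
pods_per_node = 40
addrs_per_node = 2 ** (32 - 24)     # a /24 per node = 256 addresses

reserved = nodes * addrs_per_node   # addresses carved out of 10/8
in_use = nodes * pods_per_node      # containers actually running

print(reserved, in_use, in_use / reserved)  # 256000 40000 0.15625
```

So roughly 84% of the reserved space sits idle, and the cluster still caps out at 255-odd usable pod addresses per node.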
I am going to give them the benefit of the doubt. Maybe they are thinking it would probably run out due to poor segmentation? It would be quite the interesting network-segmentation problem to run your whole service provider, with thousands of customers, out of a single 10/8 network. Realistically, the better option is for each customer to get their own locked-off network at the VLAN level.
>> 10.0.0.0/8 runs out pretty quick
> 16,777,216 containers, wow.
Have you ever been involved in a corporate merger? IP conflicts are a huge pain point. Quite often you have to NAT within the company itself because the acquirer and acquiree are both using 10/8.
Discussed the other day https://news.ycombinator.com/item?id=41893200
Geoff's post was also discussed here this week: https://news.ycombinator.com/item?id=41893200
Discussion on the original weblog post this article talks about (The IPv6 Transition (potaroo.net), 224 points by todsacerdoti 3 days ago):
* https://news.ycombinator.com/item?id=41893200
(The APNIC article is a repost of the potaroo.net article.)
Anycast and SNI mean someone like Cloudflare only needs one IPv4 address for their entire public-facing service? Is that the gist? Obviously I exaggerate, but I think that's their point.