The resistance to switching to IPv6, or the comfort with the IPv4-born address exhaustion remedies, only helps an internet of consumers, not an internet of peers that create and share. If you are behind NAT or CG-NAT, you can only consume, not create. You can't host a server or expose a port. You are at the mercy of the big fish.
It is the ISPs that pretty much killed IPv6 with their mishandled transition.
Where I am, I can choose 1 out of 1 broadband provider available in the area. With this provider, I can either have a public IPv4 address (or several) with their CPE in bridge mode, or DS-Lite with IPv4 CGNAT, no PCP, and a /64 for IPv6 (i.e. no address space for subnets, no prefix delegation), AND I have to use their router with the limited settings they allow.
With offers like these, is it any wonder that I stick with IPv4?
Are you sure about this? It's in the RFC from like 1998 that ISPs should allow customers to use SLA (site-level aggregation) bits for larger prefixes. I don't know a single US ISP that doesn't allow at least a /56.
IPv6 is pointless and still a security risk, but I'm guessing you're misconfiguring something.
Yup, Liberty Global (also known as UPC) in Europe.
They assign only a /64 and no DHCPv6-PD. There's not much to misconfigure, since for IPv6 you have to use their router and they push the config.
And since you have only a /64, you cannot put another router behind theirs.
Which of course goes against what RIPE is saying:
> The following sections explain why /48 and /56 are the recommended prefix assignment sizes for end customers.
* https://www.ripe.net/publications/docs/ripe-690/#4-2--prefix...
And it's not like it's a new policy:
> RIPE-690 outlines best current operational practices for the assignment of IPv6 prefixes (i.e. a block of IPv6 addresses) for end-users, as making wrong choices when designing an IPv6 network will eventually have negative implications for deployment and require further effort such as renumbering when the network is already in operation. In particular, assigning IPv6 prefixes longer than /56 to residential customers is strongly discouraged, with /48 recommended for business customers. This will allow plenty of space for future expansion and sub-netting without the need for renumbering, whilst persistent prefixes (i.e. static) should be highly preferred for simplicity, stability and cost reasons.
* https://www.internetsociety.org/blog/2017/10/ipv6-prefix-ass...
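The arithmetic behind those recommended sizes is simple: every LAN needs its own /64, so the assignment size caps how many subnets a customer can ever carve out. A quick sketch in Python (the 2001:db8:: documentation prefix stands in for a real assignment):

    import ipaddress

    # Every LAN needs a /64, so the assignment size determines the
    # number of possible subnets. 2001:db8:: is the documentation prefix.
    for prefix in ("/64", "/56", "/48"):
        net = ipaddress.ip_network("2001:db8::" + prefix)
        print(prefix, "->", 2 ** (64 - net.prefixlen), "possible /64 subnets")

A /64 gives exactly one subnet (hence no router behind the ISP's), a /56 gives 256, and a /48 gives 65,536.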
Yes, a lot of ISPs do this, even after I write to them explaining why it doesn't make sense. My ISP is Airtel in India; they only very recently started assigning IPv6 at all, and it's a single /64.
The other big one I know, Jio (from Reliance) also offers just a single /64.
99.99% of people who create and share things via the internet do so via centralized social media providers, and that would continue to be true if the whole world were magically IPv6-only.
I think it’d be nice to self-host things too, but it’s inaccurate and even a bit insulting to claim that the millions of people creating content on the internet today don’t exist.
> I think it’d be nice to self-host things too, but it’s inaccurate and even a bit insulting to claim that the millions of people creating content on the internet today don’t exist.
It's not just about self-hosting, but peer-to-peer clients as well.
When Skype originally came out it was P2P, but because of NAT they created (ran?) "super-nodes" that could do things like STUN/TURN/ICE. Wouldn't it be nice to be able to (e.g.) communicate with folks without a central authoritative server that could be warranted by various regimes?
I agree! I was just taking issue with the overly broad claim that being behind NAT only lets you consume, not create.
And then there are people like myself who host publicly-available internet services from my home internet service that's absolutely behind CGNAT. That makes things a bit more hassle to get working, but it's certainly possible.
And there are different kinds of big fish. You may be in a bad neighborhood, sharing an IP with misbehaving actors in the digital or real world. You may get blocked, banned or snooped on because there is or was a target, an attacker or someone with bad digital hygiene.
My ISP is IPv4 only and I host plenty of shit and punch plenty of holes. That’s a function of my firewall not how many bits are in my IP address.
> My ISP is IPv4 only and I host plenty of shit and punch plenty of holes. That’s a function of my firewall not how many bits are in my IP address.
Not wrong, but if you want multiple servers of the same service, you're now doing custom ports (myhost:port1, myhost:port2, etc) which isn't the end of the world, but is kind of sucky.
And if we're not talking just about servers running services, but clients that want to do peer-to-peer stuff, you also have to use things like STUN/TURN/ICE, which means more infrastructure (as opposed to 'just' hole punching, since with IPv6 your system already knows its own address).
Given the prevalence of these technologies (kludges?) they've kind of been normalized so we think they're "fine".
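For the curious, the STUN half of that dance is small. Here's a minimal sketch of a Binding Request (RFC 5389) asking a public server what our address looks like from outside; the Google server below is just one well-known public instance, and a real client would also validate the response type and transaction ID:

    import os
    import socket
    import struct

    STUN_SERVER = ("stun.l.google.com", 19302)  # any public STUN server works

    txn_id = os.urandom(12)
    # Header: type 0x0001 (Binding Request), length 0, magic cookie, txn id.
    request = struct.pack("!HHI", 0x0001, 0, 0x2112A442) + txn_id

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2)
    sock.sendto(request, STUN_SERVER)
    data, _ = sock.recvfrom(2048)

    # Walk the attributes looking for XOR-MAPPED-ADDRESS (0x0020).
    pos = 20
    while pos + 4 <= len(data):
        attr_type, attr_len = struct.unpack_from("!HH", data, pos)
        if attr_type == 0x0020:
            port = struct.unpack_from("!H", data, pos + 6)[0] ^ 0x2112
            raw = struct.unpack_from("!I", data, pos + 8)[0] ^ 0x2112A442
            print("reflexive address:",
                  socket.inet_ntoa(struct.pack("!I", raw)), "port:", port)
            break
        pos += 4 + attr_len + (-attr_len % 4)  # attributes are 32-bit aligned

That only gets you the discovery step; actually traversing unfriendly NATs is where TURN relays and the rest of ICE come in.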
That's only true if you aren't behind CG-NAT. If you are, your firewall can port forward all it wants but it won't matter, the ISP would have to also port forward to you.
Even in this situation, your ISP can port forward to you.
While not universal, some ISPs support PCP, where you can ask for a port mapping to your CGNAT-ed IP and port. They might or might not honor the suggested external port (if it is taken, they obviously cannot), but you will still get a hole punched.
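Mechanically, a PCP MAP request is just a small UDP datagram to the gateway on port 5351. Here's a minimal sketch of the wire format per RFC 6887; the gateway address, internal port, and lifetime are placeholder assumptions, and whether anything answers depends entirely on the ISP:

    import os
    import socket
    import struct

    GATEWAY = "192.0.2.1"   # placeholder: the CGNAT/gateway address
    INTERNAL_PORT = 8080    # placeholder: local port we want reachable
    LIFETIME = 3600         # requested mapping lifetime in seconds

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.connect((GATEWAY, 5351))
    local_ip = sock.getsockname()[0]

    # Common header: version 2, opcode 1 (MAP), reserved, lifetime,
    # then our address as an IPv4-mapped IPv6 address.
    client_addr = b"\x00" * 10 + b"\xff\xff" + socket.inet_aton(local_ip)
    header = struct.pack("!BBHI", 2, 1, 0, LIFETIME) + client_addr

    # MAP body: random nonce, protocol (6 = TCP), reserved, internal
    # port, suggested external port, suggested external address (zeros).
    body = (os.urandom(12)
            + struct.pack("!B3xHH", 6, INTERNAL_PORT, INTERNAL_PORT)
            + b"\x00" * 16)

    sock.send(header + body)
    sock.settimeout(2)
    try:
        resp = sock.recv(1100)
        print("PCP result code:", resp[3])  # 0 means the mapping was granted
    except socket.timeout:
        print("no PCP response; this ISP likely has PCP disabled")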
> your ISP can port forward to you
But will they? Domestic ISPs are pretty hesitant to offer such a thing, or anything of that sort.
Some do. But when they don't, it is not a fault of CGNAT, which does provide the capability, but of the specific ISP that's not willing to use it.
You can’t punch any holes through carrier-grade NAT (CGNAT).
You can, if your ISP cooperates, using PCP.
Frankly, you lost me at "if your ISP cooperates".
It is a function of the CGNAT at the ISP side. They need to have that enabled. Some do.
Did you miss the part about CG-NAT? Once your ISP runs out of their IP4 addresses and puts you behind a CG-NAT, you can punch all the holes you like; nothing is going to get to you.
At least not without doing fancy stuff like using an externally-hosted VPN to shuttle connections to you.
The GP has both versions, not just CGNAT (which would have made their comment less nonsensical):
> If you are behind NAT or CG-NAT
People seem to have misconceptions about CGNAT.
Of course you can punch holes there. CGNATs can be asked for port forwarding using PCP, unless your ISP disabled that.
I've yet to see a single ISP (I live in the US) that even allows customers to host services. If you look in the TOS for services like Comcast, AT&T, T-Mobile, etc, you'll see a part about hosting services being forbidden. And that's even for normal IP4 addresses that aren't behind CG-NAT. Now, they probably don't look too hard unless you give them reason (I hosted various things over a Comcast connection for a decade) but the rule is in there.
Perhaps it's different for a mom & pop ISP, but I don't see the big ones configuring anything that makes it easier to do what they already don't want you doing anyway. They see the inability to forward ports as a feature, not a bug.
I'm not in the US, but in the EU. Here, T-Mobile and Orange do not have a problem with incoming traffic, and they know that people have security cameras, doorbells, or NAS devices in their homes that they want to access from outside.
So even if you expose your Home Assistant web UI to the wide web, no ISP is going to have a problem with that or interpret it as hosting services. What they really want is for you not to run bandwidth-intensive services on a consumer connection, which is going to be overbooked somewhere in their infra, causing service degradation for other users.
And for example Orange does provide PCP for their CGNAT.
I actually made the unusual decision, last year, to go IPv6-only on a small website I operate. The reason why is that AWS changed their billing policy for public IPv4 addresses. This is a tiny website that people only access on their cell phones, so I can accept an occasional inconvenience for the marginal cost savings.
I haven't heard of anyone else doing this, but I doubt I'm completely alone in trying to minimize hosting costs.
If you need IPv4 for some users (like me, whose ISP is IPv4 only), then you can use Cloudflare or CloudFront and expose IPv4 too.
I did this as well for 3 domains that use the same EC2 instance. There were random connectivity problems from my phone for a couple of months, but it has worked perfectly for the past 6 months. I have no idea why the issues went away.
On the other hand, same as you, I tried IPv6-only to reduce costs, but I couldn't load my website at all on my mobile.
I don't understand the title. What's the "but" there? IPv6 being irrelevant and moving off IPv4 being irrelevant seem like they go hand in hand; if moving off IPv4 is irrelevant, then of course IPv6 is irrelevant!
Don't think about it too much, just remember that only large corporations are supposed to be peers on the internet, everyone else should be behind layers of NAT to ensure they can't become a problem.
IPv6 and Python 3 are case studies in How to Not Upgrade Something.
They basically created entirely different products that provided marginal immediate benefit to users and then said "upgrade whenever you get around to it". Both are now in the second decade of their upgrade cycle.
PowerPC->Intel, Xbox/PlayStation emulation, x86 32-bit->64-bit, and Java are all technologies that had successful upgrade strategies centered around replacing the original product rather than indefinitely providing an alternative.
> IPv6 and Python 3 are case studies in How to Not Upgrade Something.
There was no other way to do it with IPv6: IPv4 has 32 bits of address space, and more than 32 were needed for more addresses. Those 32 bits are hard-coded in data structures, APIs, and even DNS formats (e.g., A records).
So regardless of anything else related to IPv6 (ARP vs ND), you would have still needed to release a bunch of code that had to be installed on every router, L3 switch, firewall, DNS server, and end device.
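To make that hard-coding concrete, here's a tiny sketch (documentation addresses only): A-record rdata is exactly 4 octets, AAAA rdata is exactly 16, and the v4 address really is a single unsigned 32-bit integer on the wire, so a wider address simply cannot be smuggled through the old formats.

    import socket
    import struct

    a_rdata = socket.inet_pton(socket.AF_INET, "192.0.2.1")        # 4 bytes
    aaaa_rdata = socket.inet_pton(socket.AF_INET6, "2001:db8::1")  # 16 bytes
    assert len(a_rdata) == 4 and len(aaaa_rdata) == 16

    # The v4 address is one fixed u32; there is nowhere to put more bits.
    print(struct.unpack("!I", a_rdata)[0])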
It was also recognized that, given the size of the Internet even in the 1990s, a flag day like the one done for the NCP->IP transition would not be possible:
> We believe that it is not possible to have a "flag-day" form of transition in which all hosts and routers must change over at once. The size, complexity, and distributed administration of the Internet make such a cutover impossible.
* https://datatracker.ietf.org/doc/html/rfc1726#section-5.5
So you were always going to have to have a 'rolling upgrade' to get a larger address space. You were always going to have translation systems.
It was also recognized that the 'legacy' may never go away:
> Furthermore, we note that, in all probability, there will be IPv4 hosts on the Internet effectively forever. IPng must provide mechanisms to allow these hosts to communicate, even after IPng has become the dominant network layer protocol in the Internet.
* Ibid.
I won't be so brazen as to suggest I could have done any better, but the claim that never actually completing the upgrade was the only way is a bit too hand-wavy, I think.
> […] but the claim that never actually completing the upgrade was the only way is a bit too hand-wavy, I think.
Is it accurate to think of it as an "upgrade" versus "addition"? It's not like HTTP 1.1 went away just because HTTP 3/QUIC came around.
HTTP 3 may have certain useful features, but lots of folks don't need or care about them and so may never activate it (at least on purpose, unless Apache/Nginx have it default-on). That thinking may not add much burden to the rest of the Internet for that protocol.
Whereas not supporting IPv6 can add burdens to others:
> Our [American Indian] tribal network started out IPv6, but soon learned we had to somehow support IPv4 only traffic. It took almost 11 months in order to get a small amount of IPv4 addresses allocated for this use. In fact there were only enough addresses to cover maybe 1% of population. So we were forced to create a very expensive proxy/translation server in order to support this traffic.
> We learned a very expensive lesson. 71% of the IPv4 traffic we were supporting was from ROKU devices. 9% coming from DishNetwork & DirectTV satellite tuners, 11% from HomeSecurity cameras and systems, and remaining 9% we replaced extremely outdated Point of Sale(POS) equipment. So we cut ROKU some slack three years ago by spending a little over $300k just to support their devices.
* https://community.roku.com/t5/Features-settings-updates/It-s...
* Discussion: https://news.ycombinator.com/item?id=35047624
Getting people to follow new standards and follow best practices can be more difficult than herding cats.
I think this describes very well what's happened.
I haven't moved my systems to IPv6, and have no current plans to do so, because it's a pretty major change (meaning a ton of hassle) that brings me no benefits that I care about.
If/when IPv6 becomes mandatory to connect to the internet, I'll go to the trouble of shifting my systems.
Discussion on the original weblog post this article talks about (The IPv6 Transition (potaroo.net), 224 points by todsacerdoti 3 days ago):
* https://news.ycombinator.com/item?id=41893200
(The APNIC article is a repost of the potaroo.net article.)
Previously:
• https://news.ycombinator.com/item?id=41893200
– https://www.potaroo.net/ispcol/2024-10/ipv6-transition.html
– The IPv6 Transition
– (224 points / 416 comments)
I mean, it's an opinion.
The argument is somewhat thin: "CDNs use DNS so it doesn't matter what the IP is."
I mean yes, that's true, but it should have always been true. Dishing out raw IPs is bad anyway; it limits your flexibility (yes, yes, anycast exists, but if you're big enough to set up anycast, I bet you're using IPv6 internally already).
IPv6 is here, and will slowly grow as time goes on. There will be growing pains, but it's plain cheaper to run at any kind of scale (especially now that AWS is charging for public IPv4 addresses).
If you are hosting thousands of servers, and you haven't drunk the Kool-Aid of the batshit K8s networking scheme, then IPv6 becomes really rather practical, especially if you are giving out unique addresses to containers; 10.0.0.0/8 runs out pretty quick.
> if you're big enough to set up anycast
Incidentally, if you want to play with anycast on the public Internet, BuyVM[1] will let you do that on $10.50/mo (3×$3.50/mo very resource-limited VPSes). Catching those VPSes when they’re in stock is something of an ordeal, though.
[1] https://buyvm.net/anycast-vps/
> 10.0.0.0/8 runs out pretty quick
16,777,216 containers, wow.
Except it's not 16 million containers. You need addressing for every link in the network. Want 10 racks of 20 servers, with redundant switching in each and redundant spines in the network rack? That's 60 physical links, each of which needs two addresses, and that's just the fabric underlay network. You haven't created the actual overlay networks yet, of which you need multiple (front end, back end, database, storage...).
Consider that 200 servers is a drop in the bucket at some scales, and you can see why data centres are moving to v6 only.
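The 60-link figure pencils out under one plausible reading of that topology. A back-of-envelope sketch, where the per-rack link counts are illustrative assumptions rather than a universal layout:

    racks, servers_per_rack = 10, 20
    tors_per_rack, spines = 2, 2                  # redundant ToRs and spines

    tor_uplinks = racks * tors_per_rack * spines  # 40 ToR<->spine links
    tor_peer_links = racks * 2                    # 20 links in the ToR pairs
    fabric_links = tor_uplinks + tor_peer_links   # 60 physical fabric links

    # Conventional numbering burns a /31 (2 addresses) per p2p link,
    # before a single overlay (front end, back end, storage...) exists.
    print(fabric_links, "links ->", fabric_links * 2, "underlay addresses")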
>> 10.0.0.0/8 runs out pretty quick
> 16,777,216 containers, wow.
Have you ever been involved in a corporate merger? IP conflicts are a huge pain point. Quite often you have to NAT within the company itself because the acquirer and acquiree are both using 10/8.
This was literally why we went IPv6 at a previous company. We were acquiring other companies like crazy, and the conflicts were a constant pain where we'd either have to re-IP a location (Active Directory does not like that) or do internal-facing NATs and DNS weirdness (AD also not a big fan).
We quickly discovered it was easier to get the new location up and running on IPv6 and mesh it so all inter-office traffic was IPv6, rather than resolving the conflicts. Sure, you couldn't reach the printer in Boise from New York because it was IPv4 only, but for the stuff normal users were doing it worked great.
So, if you have a sane IPv4 network, where everything is DHCP and nothing apart from a few key things is statically assigned, then yeah, _technically_ you can have 16 million addresses all at once.
But subnets need to be located next to each other physically, otherwise performance suffers. Subnets have affinity.
But once you have subnets, you then start losing packing efficiency.
For example, in the batshit world of K8s, you give each node its own /24 to dish out. Not only can that limit the number of containers you can host, it is also really inefficient (a 1000-node cluster eats 256k addresses).
Moreover, it also means that you need to reuse addresses. In a large cluster of, say, 1000 nodes, each hosting 40 containers, starting/stopping anything up to 30 containers a second isn't unreasonable. It's not inconceivable that you'll end up trying to connect to a stale address (either because it hasn't propagated yet, or your brand of service discovery isn't that fast). This can cause hilarious transitory errors.
But if you could assign an IP per container, and have enough space to not reuse that address for at least a few hours, then that goes away. So instead of getting weird fuzzing errors (or misc 404s/401s), you get a connection timed out.
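A sketch of that "never reuse" property, assuming each node hands out addresses from its own IPv6 range (2001:db8::/64 is the documentation prefix): the allocator just marches forward, so a stale address stays dead instead of being inherited by an unrelated workload.

    import ipaddress

    pool = ipaddress.ip_network("2001:db8::/64").hosts()

    def next_container_address():
        # Monotonic allocation: old addresses are never handed out again.
        return next(pool)

    for _ in range(3):
        print(next_container_address())

Even at 30 container starts per second, walking through a single /64 would take on the order of 10^10 years, so "don't reuse for a few hours" comes for free.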
I am going to give them the benefit of the doubt. Maybe they are thinking it would probably run out due to poor segmentation? It would be quite the interesting network segmentation problem to run your whole service provider, with thousands of customers, out of a single 10/8 network. Realistically, a better option would be for each customer to get their own locked-off network at the VLAN level.
Felt like the article went off into la-la land towards the end:
"The last couple of decades have seen us stripping out network-centric functionality and replacing this with an undistinguished commodity packet transport medium. It's fast and cheap, but it's up to applications to overlay this common basic service with its own requirements." The result is networks become "simple dumb pipes!"
Given that, Huston wonders if it's time to revisit the definition of the internet as networks that use a common shared transmission fabric, a common suite of protocols and a common protocol address pool.
Rather, he posits "Is today's network more like 'a disparate collection of services that share common referential mechanisms using a common namespace?'"
Lots of consumer router/gateways out there are old and don't have IPv6 turned on. If you turn off IPv4 for your services, those folks just won't be able to connect and they won't know why, they'll just move on. Any who want to complain won't even know who to complain to, and they won't be able to fix it themselves. That's bad. And yes, you could put a CDN in front of everything, but won't that wipe out some/all of the savings (assuming you're with a cloud provider that's charging for IPv4)? This basically explains why I put the brakes on our IPv6 migrations. From our clients' perspective it would have been an unforced error.
Geoff's post was also discussed here this week: https://news.ycombinator.com/item?id=41893200
Anycast and SNI mean someone like Cloudflare only needs one IPv4 address for their entire public-facing service? Is that the gist? Obviously I exaggerate, but I think that's their point.
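Pretty much: SNI carries the hostname inside the TLS handshake, so one shared (anycast) address can front any number of sites. A small sketch; the two hostnames are placeholders, and sites behind the same CDN will often come back from the same front-end address:

    import socket
    import ssl

    context = ssl.create_default_context()
    for host in ("example.com", "example.org"):
        with socket.create_connection((host, 443), timeout=5) as raw:
            # server_hostname is sent as SNI; the far end uses it to pick
            # the right certificate on the shared IP.
            with context.wrap_socket(raw, server_hostname=host) as tls:
                print(host, "served from", tls.getpeername()[0])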