When reading https://github.com/WICG/direct-sockets/blob/main/docs%2Fexpl..., it's noted this is part of the "isolated web apps" proposal: https://github.com/WICG/isolated-web-apps/blob/main/README.m... , which is important context, because the obvious first reaction to this is that it's a security nightmare.
That doesn't really make it any better, if you ask me.
The entire Isolated Web Apps proposal is a massive breakdown of the well-established boundaries provided by browsers. Every user understands two things about the internet: 1) check the URL before entering any sensitive data, and 2) don't run random stuff you download. The latter is heavily enforced by both Chrome and Windows complaining quite a bit if you're trying to run downloaded executables - especially unsigned ones. If you follow those two basic things, websites cannot hurt your machine.
IWA seems to be turning this upside-down. Chrome is essentially completely bypassing all protections the OS has added, and allowing Magically Flagged Websites to do all sorts of dangerous stuff on your computer. No matter what kind of UX they provide, it is going to be nigh-on impossible to explain to people that websites are now suddenly able to do serious harm to your local network.
Browsers should not be involved in this. They are intended to run untrusted code. No browser should be allowed to randomly start executing third-party code as if it is trustworthy, that's not what browsers are for. It's like the FDA suddenly allowing rat poison into food products - provided you inform consumers by adding it to the ingredients list of course.
> Every user understands two things about the internet: 1) check the URL before entering any sensitive data, and 2) don't run random stuff you download
I think you're severely overestimating the things every user knows.
The last time I used Chrome was about 3 years ago. You have a choice.
Does it help to think of it less as Chrome allowing websites to do XYZ, and more as a PWA API for offering to install full-fat browser-wrapper OS apps (like the Electron kind) — where these apps just so happen to “borrow” the runtime of the browser they were installed with, rather than shipping with (and thus having to update) their own?
Unfortunately this is the future. Handing the world wide web's future to Google was a mistake, and the only remedy is likely to come from an (unlikely) antitrust breakup or divestment.
I doubt websites as we know them will be what we'll be dealing with going forward anyway.
What is a browser if we just digest all the HTML and spit out clean text in the long run?
We handed over something of some value I guess, once upon a time.
Interesting — the Firefox team’s response was very negative, but didn’t (in my reading) address use of the API as being part of an otherwise essentially trusted app (as opposed to being an API available to any website).
In reading their comments, I also felt the API was a bad idea, especially when technologies like Electron or Tauri exist, which can make those TCP or UDP connections. But IWA serves to displace Electron, I guess.
I'm hacking on a Tauri web app that needs a bridge for talking UDP protocols, literally as we speak.
While Tauri seems better than ever for cross-platform native apps, it's still a huge step to take just to give my web app lower-level access: Rust toolchain, Tauri plugins, sidecar processes, code gen, JSON-RPC, all to let my web app talk to my network.
Seems great that Chrome continues to bundle these pieces into the browser engine itself.
Direct sockets plus WASM could eat a lot of software...
With so many multiplatform GUI toolkits today, Tauri and Electron are really bad choices.
The cross-platform desktop GUI toolkits all have some very big downsides and tend to result in bad-looking UIs too.
I've built my app[1] using Qt (C++ and QML), and I think the UI looks decent. There's still a long way for it to feel truly native, but I've got some cool ideas.
[1] https://get-notes.com/
What's your recommendation? I've tried so many multiplatform toolkits (including GTK, Qt, wxWidgets, Iced, egui, and imgui, and I've investigated Slint and Sciter) and nothing has come close to the dev speed and small final app size of something like Tauri+Svelte.
I've also tried Flutter, React Native, Kotlin multiplatform, Wails.
I'm landing on Svelte and Tauri too.
The other alternative I dabble with is using Android Studio or Xcode to write my own WebView wrappers.
> but didn’t (in my reading) address use of the API as being part of an otherwise essentially trusted app
That’s what the Narrower Applicability section is about <https://github.com/mozilla/standards-positions/issues/431#is...>. It exposes new vulnerabilities because of IP address reuse across networks, and DNS rebinding.
I think a lot of people don't realize it's possible to use UDP in browsers today with WebRTC DataChannel. I have a demo of multiplayer Quake III using peer-to-peer UDP here: https://thelongestyard.link/
Direct sockets will have their uses for compatibility with existing applications, but it's possible to do almost any kind of networking you want on the web if you control both sides of the connection.
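For anyone who hasn't touched it, a rough sketch of what that looks like: open an unordered, unreliable DataChannel to a server you control and exchange the SDP over whatever signaling you already have. The /signal endpoint below is a hypothetical stand-in, not part of any standard:

    // Sketch: UDP-like transport via a WebRTC DataChannel to a server you control.
    // POST /signal is a placeholder for your own signaling endpoint.
    const pc = new RTCPeerConnection();
    const channel = pc.createDataChannel("game", {
      ordered: false,       // allow out-of-order delivery
      maxRetransmits: 0     // no retransmissions -> unreliable, UDP-like
    });
    channel.onopen = () => channel.send("hello");
    channel.onmessage = (e) => console.log("got", e.data);

    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    // Wait for ICE gathering so the offer carries candidates (simplest non-trickle setup).
    await new Promise((resolve) => {
      pc.onicegatheringstatechange = () => {
        if (pc.iceGatheringState === "complete") resolve();
      };
    });
    const res = await fetch("/signal", { method: "POST", body: JSON.stringify(pc.localDescription) });
    await pc.setRemoteDescription(await res.json());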
> Direct sockets will have their uses for compatibility with existing applications...
In fact runtimes like Node, Deno, Cloudflare Workers, Fastly Compute, Bun et al run JS on servers, and will benefit from standardization of such features.
https://wintercg.org/
> [WinterCG] aims to provide a space for JavaScript runtimes to collaborate on API interoperability. We focus on documenting and improving interoperability of web platform APIs across runtimes (especially non-browser ones).
This slowly alters the essence of The Internet, since it enables permissionless running of self-organising systems like Bittorrent and Bitcoin. This is NOT in Android; at this stage it's only Isolated Web Apps on desktop[0]. The "direct socket access" creep moves forward again. First, IoT without any security standards. Now Web Apps.
With direct socket access to TCP/UDP you can build anything! You lose the constraints of JS servers, costly WebRTC server hosting, and the lack of listening sockets in WebRTC DataChannel.
<self promotion>NAT puncturing is already solved in our lab, even for mobile 4G/5G. This might bring back the cyberpunk dreams of Peer2Peer... In our lab we bought 40+ SIM cards for the big EU 4G/5G networks and got the carrier-grade NAT puncturing working[1]. Demo blends 4G/5G puncturing, TikTok-style streaming, and Bittorrent content backend. Reading the docs, these "isolated" Web Apps can even do SMTP STARTTLS, IMAP STARTTLS and POP STLS. wow!
[0] https://github.com/WICG/direct-sockets/blob/main/docs/explai... [1] https://repository.tudelft.nl/record/uuid:cf27f6d4-ca0b-4e20...
Can you explain further... how does this improve upon WebSockets and Socket.IO for Node?
Without a middleman you can only use a WebSocket to connect to an HTTP server.
So, for instance, if I want to connect to an MQTT server from a webpage, I have to use a server that exposes a WebSocket endpoint. With direct sockets I could connect to any server using any protocol.
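For a sense of what that would look like, here is a rough sketch against the TCPSocket shape in the WICG direct-sockets explainer (only available to Isolated Web Apps, and the API may still change); the broker host and the packet bytes are placeholders, not a real MQTT encoder:

    // Sketch: raw TCP to an MQTT broker, per the direct-sockets explainer's TCPSocket.
    const socket = new TCPSocket("broker.example.com", 1883);
    const { readable, writable } = await socket.opened;

    const writer = writable.getWriter();
    // A real client would encode a full MQTT CONNECT packet here.
    await writer.write(new Uint8Array([0x10 /* ...rest of CONNECT... */]));

    const reader = readable.getReader();
    const { value } = await reader.read();   // e.g. the broker's CONNACK bytes
    console.log(value);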
You can also use WebTransport, with streams for TCP-like and datagrams for UDP-like traffic: https://developer.mozilla.org/en-US/docs/Web/API/WebTranspor...
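Roughly, the datagram side looks like this (the URL is a placeholder; it needs an HTTP/3 server that speaks WebTransport):

    // Sketch: unreliable, unordered datagrams over HTTP/3 with WebTransport.
    const wt = new WebTransport("https://example.com:4433/wt");
    await wt.ready;

    const writer = wt.datagrams.writable.getWriter();
    await writer.write(new Uint8Array([1, 2, 3]));

    const reader = wt.datagrams.readable.getReader();
    const { value } = await reader.read();
    console.log("received datagram", value);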
WebRTC depends on some message transport (using HTTP) existing first between peers before the data channel can be established. That's far from equivalent capability to direct sockets.
Yes, you do need a connection establishment server, but in most cases traffic can flow directly between peers after connection establishment. The reality of the modern internet is that, even with native sockets, many if not most peers will not be able to establish a direct peer-to-peer connection without a connection establishment server anyway, due to firewalls, NAT, etc. So it's not as big of a downgrade as you might think.
That changed (ahem... will change) with IPv6. I was surprised to see that I can reach residential IPv6 LAN hosts directly from the server. No firewalls, no NAT. This remains true even with abusive ISPs that only give out /64 blocks.
That said, I agree that peer to peer will never be seamless, thanks mostly to said abusive ISPs.
I sure hope not, this will bring in a new era for internet worms.
If some ISPs are not currently firewalling all incoming IPv6 connections, it's a major security risk. I hope some security researcher raises noise about that soon, and the firewalls will default to closed.
My home router seems to have a stateful firewall and so does my cellphone in tethering mode - I don't know whether that one's implemented on the phone (under my control) or the network.
Firewalling goes back into the control of the user in most cases - the other day on IRC we told someone how to unblock port 80 on their home router.
It kind of already has begun.
Has there been a big IPv6 worm? I thought that the defense against worms was that scanning the address space was impractical due to its large size.
> I was surprised to see that I can reach residential ipv6 lan hosts directly from the server. No firewalls, no nat
No NAT, sure, that's great. But no firewalls? That's not great. Lots of misconfigured networks waiting for the right malware to come by...
IPv6 isn't going to happen. Most people's needs are met by NAT for clients and SNI routing for servers. We ran out of IPv4 addresses years ago; if it was actually a problem, it would have happened then. It makes me sad for the p2p internet, but it's true.
> If it was actually a problem
It became a problem precisely the moment AWS started charging for IPv4 addresses.
"IPv4 will cost our company X dollars in 2026, supporting IPv6 by 2026 will cost Y dollars, a Z% saving"
There's now a tangible motivator for various corporate systems to at least support IPv6 everywhere - which was the real IPv6 impediment.
Residential ISPs appear to be very capable of moving to v6: there are lots of examples of that happening in their backends, and they've already demonstrated that they're plenty capable of giving end users boxes that just so happen to do IPv6.
What do you mean not going to happen? It's already happening. It's about 45% of internet packets.
Not only that, but DTLS is mandated for any UDP connections.
Is that a problem? Again, I'm talking about the scenario where you control both sides of the connection, not where you're trying to use UDP to communicate with a third party service.
I think all three comments, including mine, are essentially saying the same thing, just from different viewpoints.
Longest Yard is my favorite Q3 map, but for some reason I cannot use my mouse (?) in your version of the Quake 3 demo.
Interesting, what browser and OS?
I can't use mouse either, macos/Chrome. Otherwise, cool!
Brave browser (Chromium via Flatpak) on the Steam Deck (Arch Linux) in Desktop mode with bluetooth connected mouse/keyboard.
Hmm, I bet the problem is my code expects touch events instead of mouse events when a touchscreen is present. Unfortunately I don't have a computer with both touchscreen and mouse here to test with so I didn't test that case. I did implement both gamepad and touch controls, so you could try them to see if they work.
Same browser on win10. Mouse works after you click in the window and it goes full screen. However, it hangs after a few seconds of game play.
Stopped hanging... then input locks up somehow.
Switched to chrome on win10, same issue: input locks up after a bit.
Yeah that issue I have seen, but unfortunately haven't been able to debug yet as it isn't very reproducible and usually stops happening under a debugger.
Works in Firefox, on the same system.
Yeah we use WebRTC for our games built on a fork of Godot 3.
https://gooberdash.winterpixel.io/
tbh WebRTC gives basically the same network performance as websockets and was way more complicated to implement. Maybe the WebRTC perf is better in other parts of the world or something...
Yeah WebRTC is a bear to implement for sure. Very poorly designed API. It can definitely provide significant performance improvements over web sockets, but only when configured correctly (unordered/unreliable mode) and not in every case (peer-to-peer is an afterthought in the modern internet).
We've got it in unreliable/unordered mode and it still barely moves the needle on network perf over websockets, at least from what we see connecting from North America to a server in North America.
I wouldn't expect a big improvement in average performance but the long tail of high latency cases should be improved by avoiding head-of-line blocking. Also peer-to-peer should be an improvement over client-server-client in some situations. Not for battle royale though I guess.
Edit: Very cool game! I love instant loading web games and yours seems very polished and fun to play. Has the web version been profitable, or is most of your revenue from the app stores? I wish I better understood the reasons web games (reportedly) struggle to monetize.
I would say WebRTC is both a must and only worth it if you need UDP, such as in the case of real-time video.
I mean, the only cases where UDP vs. TCP is going to matter are 1) if you experience packet loss (and maybe you aren't, for whatever reason) and 2) if you are willing to actively shove other protocols around and not have a congestion controller (and WebRTC definitely has a congestion controller, with the default in most implementations being an algorithm about as good as a low-quality TCP stack).
Out-of-order delivery is another case where UDP provides a benefit.
Runs smoother than the Android home screen. :)
I found this issue arguing that it's a bad idea for end-user safety:
https://github.com/mozilla/standards-positions/issues/431
What about WebTransport? I thought that was the HTTP/3 upgrade to WebSockets that supported unreliable and out-of-order messaging.
Great fingerprinting vector. Expect nothing less from Google.
Status of specification: "It is not a W3C Standard nor is it on the W3C Standards Track."
Status in Chrome: shipping in 131
Expect people to claim this is a vital standard that Apple is not implementing because they don't want web apps to compete with the App Store. Also expect sites like https://whatpwacando.today/ to just uncritically include this.
I prefer web apps to native apps any day. However, web apps are limited in what they can do.
But what they can do is not consistent - for example, a web app can take your picture and listen to your microphone if you give it permission, but it can't open a socket. Another example: Chrome came out with a File System Access API [2] in August; it's fantastic (I am using it) and it allows a class of native apps to be replaced by Web Apps. As a user, I don't mind having to jump through hoops and accept giant warning screens to grant that permission - but I want this ability on the Web Platform.
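(For anyone who hasn't tried it, the basic flow is short; this is just a rough sketch, and both picker and write calls need a user gesture plus the permission prompts mentioned above:)

    // Sketch: File System Access API - read a user-picked file, then write it back.
    const [handle] = await window.showOpenFilePicker();
    const file = await handle.getFile();
    const text = await file.text();

    const writable = await handle.createWritable();
    await writable.write(text.toUpperCase());   // toy transformation
    await writable.close();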
For Web Apps to be able to compete with native apps, we need more flexibility, Mozilla. [1]
[1]: https://mozilla.github.io/standards-positions/ [2]: https://developer.chrome.com/docs/capabilities/web-apis/file...
Nah, we need even less. I prefer web apps because of the limitations - much less to worry about.
I’m excited, and anticipate some interesting innovation once browser applications can “talk UDP”. It’s a long time in the making. Gaming isn’t the end of it: being able to communicate with local network services (hardware) without an intervening API is very attractive.
Indeed. I'll finally be able to connect to your router and change your wifi password, all through your browser.
Can a browser run a web server with this?
Since it allows for accepting incoming TCP connections, this should allow for HTTP servers to run within the browser, although running directly on port 80/443 might not be supported everywhere (can't see it mentioned in the spec, but from what I remember on most *nix systems only root can listen on ports below 1024, though I might be mistaken since it's been a while)
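Roughly, per the TCPServerSocket sketched in the explainer (Isolated Web Apps only, and the exact shape may change), a toy one-connection "server" would look something like:

    // Sketch: accept one TCP connection and answer with a canned HTTP response.
    const server = new TCPServerSocket("0.0.0.0", { localPort: 8080 });
    const { readable: connections } = await server.opened;

    const { value: conn } = await connections.getReader().read();  // first client
    const { writable } = await conn.opened;

    const writer = writable.getWriter();
    await writer.write(new TextEncoder().encode(
      "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nhi"));
    await writer.close();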
I assume they would limit it to clients.
This means that we can finally do gRPC directly from the browser.
Anything that moves the web closer to its natural end state, the J(S)VM, is a win in my book. Making web apps a formally separate thing from pages might do some good for the web overall. We could start thinking about taking away features from the page side.
Great, so now a mis-click and your browser will have a field day infecting your printer, coffee machine and all the other crap that was previously shielded by NAT and/or a firewall.
As long as they don't change the spec, this will only be available to special locally installed apps in enterprise ChromeOS environments. I don't think their latest weird app format is going to make it to other browsers, so this will remain one of those weird Chrome only APIs that nobody uses.
Just now, when I have only recently switched permanently to Firefox...
Can't wait to see it working.
nice!