  const p = Promise.withResolvers()
  // hold the lock until we explicitly resolve the promise
  navigator.locks.request(
    "foo",
    () => p.promise,
  )
  // ...later, to release the lock:
  p.resolve()
I guess there’s room for .requestWithResolvers() still, they rarely learn the first lesson. Even the $subj article seems to be unaware of it and uses the silly wrapper way.
Not that it's super important, but Web Locks API is getting some circulation after a question about how you keep multiple pages from trying to use a (single-use only) oauth refresh token at the same time. Which is a pretty good use case for this feature! https://bsky.app/profile/ambarvm.bsky.social/post/3lakznzipt...
> navigator.locks.request("my_resource", async (lock) => {
This would be so much more readable with `using` (https://github.com/tc39/proposal-explicit-resource-managemen...)
It doesn't actually feel more readable to me. I find the idea that the lock declaration sits at the same level as the lock content confusing.
It's readable if you're familiar with the RAII pattern which is used in languages like C++ and Go.
It's also similar to C#'s "using ...;" without a block. Syntax sugar there rather than RAII, but looks the same.
C#'s using can be used without a block. It disposes the resource at the end of a current scope.
I always felt that pattern was more clever than it was good design.
Makes me wonder what part of the pattern you think is "too clever"? I think it is fairly easy to reason about when the lock is restricted to the encompassing block and automatically dropped when you leave the block.
It’s kind of a weird design that some of your variables (which you can define anywhere in the scope, FWIW) just randomly define a critical section. I strongly prefer languages that do a
  with lock {
    // do stuff
  }
design. This could be C++ too, to be honest, because lambdas exist, but RAII is just too common for people to design their locks like this.
Well, Go doesn't quite support RAII.
This syntax looks more like Python or Rust.
It's not RAII, but closer to dynamic-wind.
Yes, it sits at the same level to signify the lifetime of the lock.
Why can't we just `await` the lock call?
Edit: Nevermind, release of the lock is automatic when the callback resolves its promise. I get it now.
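For reference, a minimal sketch of the callback form being discussed — the lock is released automatically when the callback's promise settles (doWork is a hypothetical placeholder):
  // acquire "my_resource"; the lock is held while the async callback runs
  await navigator.locks.request("my_resource", async (lock) => {
    await doWork(lock);   // hypothetical critical-section work
  });
  // by this point the lock has been released (even if doWork threw)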
If you really wanted it to be at the top level you could probably turn it into an explicit `release` call using a wrapper and two `Promise` constructors, though that would probably be a bad idea since it could introduce bugs
Subjectiveness aside on what’s more readable… you're proposing that a new language feature would be more readable than an API design. To me, the MDN proposal is declarative whereas your proposal is imperative. And with my subjectiveness, JavaScript shines in declarative programming.
You can probably wrap it to have that API
Sadly it needs a language feature that doesn't exist yet
What do you mean? You can definitely already wrap that into a Symbol.dispose-keyed object and use it via `using`
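For what it's worth, a rough sketch of that kind of wrapper, assuming the explicit resource management proposal (`using`/`Symbol.asyncDispose`) is available and using `Promise.withResolvers()`; `acquireWebLock` is a made-up name:
  async function acquireWebLock(name) {
    const { promise: held, resolve: release } = Promise.withResolvers();
    const { promise: acquired, resolve: signalAcquired } = Promise.withResolvers();
    // hold the lock until release() is called
    navigator.locks.request(name, () => {
      signalAcquired();
      return held;
    });
    await acquired;
    return { release, async [Symbol.asyncDispose]() { release(); } };
  }
  // usage, inside an async function:
  //   { await using lock = await acquireWebLock("my_resource"); /* critical section */ }
  // the lock is released when the block exits, even on exceptions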
`using` is not a thing in JS
s/yet/thankfully/. We don’t need it cause it solves no real problem and the solution is loaded with implicit complexity.
'using' fixes having to do cleanup in a finally {} handler, which you can (a) forget, and which (b) often messes up the scoping of variables, as you need to move them outside the 'try' block.
It also creates additional finalization semantics for a using-ed value, which has all sorts of implications. lock.do(async (lock) => {work}) is a similar construct - a scope-limited acquisition that requires nothing special.
Catch/finally not sharing a scope with try is a repeated mistake; that I surely agree with!
If it only weren’t for that crippled "using" syntax. On the other hand, though:
I mean, one might write this just as well with the callback interface, but this is much easier to do accidentally.
I wish the compatibility tables on MDN gave an indicator of when a feature became available.
My ideal would be a thing that says "this hit 90% of deployed browsers 4 years ago", but just seeing the date it was added to each of the significant browser families would be amazingly useful.
It does.
Using the Web Locks API page as an example, let's say we want to know when `LockManager` was added to Chrome. Here are the steps:
1. View the page: https://developer.mozilla.org/en-US/docs/Web/API/Web_Locks_A...
2. Scroll down to the Browser compatibility table.
3. Find the cell you're interested in; in this example we're looking at where the `LockManager` row meets the Chrome column.
4. We see a check and a version number (in this case 69).
So at this point we know that it has existed since version 69.
Now, in the case that we don't know offhand how old a given Chrome version is and we need to know when the feature gained support:
5. Click on the cell to view the timeline of that cell.
Now we know that it was released on 2018-09-04, and that it never had an earlier partial release behind browser flags/prefixes/experiment flags/etc.
I honestly never thought to click on those! Thanks very much.
Now I'm digging around in https://github.com/mdn/browser-compat-data/blob/main/api/Loc... and trying to find the browser release dates data...
Ooh, https://bcd.developer.mozilla.org/bcd/api/v0/current/api.Loc... is even better - it expands those browser dates and it's served with access-control-allow-origin: *
I used that to build this tool: https://tools.simonwillison.net/mdn-timelines#Lock
It lets you search for an API and then displays a timeline. Details on how I built it here: https://github.com/simonw/tools/commit/59323c6a30271c856aabb... and https://github.com/simonw/tools/commit/472b46fda02e912c43604...
Blogged about it here: https://simonwillison.net/2024/Nov/11/mdn-browser-support-ti...
thank you for posting transcripts!
See also https://caniuse.com/mdn-api_lock, “Date relative” view is mostly better.
I think this is the idea behind the Baseline <Year> standard you see on a lot of MDN features now; it shows the year when the feature became available in all 3 major browser engines.
You can get this from caniuse, right? Would be a simple enough browser extension to marry the two together.
https://caniuse.com/mdn-api_lock
Why only in secure contexts? You can use storage APIs in insecure contexts, doing this by spinning, but the lock API which seems much more innocuous requires a secure context?
I believe most new capabilities are limited to secure contexts, in part as a way of discouraging bad habits, even when there’s no particular risk. The ideal is that everything should run in a secure context, but it’s a hard sell removing existing functionality from insecure contexts. If it were being added now, I’m pretty sure storage APIs would be secure-only.
Search around and you’ll find various information and explanations about it. https://blog.mozilla.org/security/2018/01/15/secure-contexts... is one, though https://w3ctag.github.io/design-principles/#secure-context has apparently been significantly watered down from what it was originally—see the initial proposal in https://github.com/w3ctag/design-principles/pull/75, and what was then merged in https://github.com/w3ctag/design-principles/pull/89.
I'd presume that a MITM could set, release and read locks, by which it might determine that you have certain sites open.
In general¹, I presume that any API that uses "same origin" as a bounding criterion must be secure, since there's no way to enforce "same origin" in insecure contexts.
--
¹ Aside from the idea that maybe just make all new APIs "secure only", just to discourage insecure contexts.
Or also just lock resources and stop other pages from working.
If certain lock names become well-known, maybe you could DoS browsers by holding random locks and never releasing them?
I guess but since they're origin-bound can you do anything except DoS your page?
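For what it's worth, a page can already enumerate the lock names held or queued for its own origin via LockManager's query() method, which covers the "read locks" part; a small sketch (what the names mean is up to whatever scripts created them):
  // snapshot of this origin's locks: `held` and `pending` are arrays of
  // { name, mode, clientId } records
  const snapshot = await navigator.locks.query();
  console.log(snapshot.held.map((l) => `${l.name} (${l.mode})`));
  console.log(snapshot.pending.map((l) => l.name));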
How does a shared memory space work if you have different versions of scripts for the same domain?
Same way a shared database works across multiple client versions as you roll out a new deployment.
Or the same way two completely different processes can access the same address space with shared memory IPC.
You aren't running in the same memory space, you're just communicating with a shared resource.
Why would that matter? The tabs don't share memory. Code simply doesn't run when it tries to acquire a lock that another piece of code from another tab has already acquired. The two tabs don't even need to run the same app.
Well, it might matter for functionality in the application.
After you fix a lock-related bug for example, how do you deal with an open tab running a different version of your code that is erroneously misusing a lock?
You need to account for that when you release new code, yeah? Rename the lock maybe? Some other logic?
What are you imagining doing with these web locks apis?
The need to test both versions being active.
I’ve been using this for a while now. But one thing I recently worked on required these locks to be extremely efficient. Does anyone have any benchmarks on usage of these locks? Preferably compared to the use of rust’s tokio mutex from a wasm context.
You should write your own benchmarks! I've been using mitata for microbenchmarks which is what the bun and deno people use for their cool benchmark charts. It's fast and tries to call the system GC between runs which helps reduce bias. github: https://github.com/evanwashere/mitata
I find iterating in mitata super fun and a little addictive. It's hard to write a representative micro-benchmark, but optimizing them is still useful as long as you aren't making anything worse, which is often easy to avoid. I recently used mitata-benchmark-guided optimization to rewrite a core data structure at Notion for a 5% latency decrease on a few endpoints at p90/95/99. One of our returning interns used it to assess serialization libraries and she found one 3x faster. a+++ would recommend
If you’ve been using them for a while, don’t you have any benchmarks?
Don’t see why you’d assume that. Not all applications are time critical.
I’d assume that if you have used them for a while, you have a rough understanding of the performance characteristics and/or the knowledge of how to write even a quick, rough benchmark for them.
Weird API to release the lock. What if you want to hold on to it? Then you need to do some silly promise wrapper. Would be better if there was a matching release() function.
The ergonomics here for 99.999% of uses seem great to me. Whatever async function you have can run for as long as it needs. That's using the language, not adding more userland craft. It's a good move.
Probably because there is no RAII semantics in JS and they don’t want to allow forgetting releasing the lock. Although the promise workaround is explicitly opting into this behavior
Javascript in browsers already has a full atomics API:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
I'm not sure why Web Locks is useful TBH. I guess if you don't understand atomics it's a friendlier API?
The Web Locks documentation explains that it works across tabs, i.e. separate processes:
”[…] allows a web app running in multiple tabs or workers to coordinate work and the use of resources”
A locking API is much more natural and less error-prone for this use case than using shared memory and atomics.
You mean Atomics + SharedArrayBuffer, otherwise it won't be shared across agents. I can imagine all the postMessage calls and message handlers swirling around in all agents to approximate something like the Web Locks API for a simple lock, but tbh I'd take the Web Locks API any day.
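To illustrate the contrast: a hand-rolled lock over SharedArrayBuffer + Atomics might look like the sketch below. It only works within one agent cluster (a page and its dedicated workers), not across unrelated tabs, and Atomics.wait() blocks, so this variant is worker-only; `state` is assumed to be an Int32Array over a SharedArrayBuffer that was postMessage'd around beforehand.
  // state[0]: 0 = unlocked, 1 = locked
  function lock(state) {
    while (Atomics.compareExchange(state, 0, 0, 1) !== 0) {
      Atomics.wait(state, 0, 1);    // sleep while the slot still reads "locked"
    }
  }
  function unlock(state) {
    Atomics.store(state, 0, 0);
    Atomics.notify(state, 0, 1);    // wake one waiter
  }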
It's "atomic" in the sense that it does one thing. It lets you do "actions", not "transactions". A transaction would allow you to do multiple things.
If the atomics API gave you ability to do multiple things, you wouldn't need compareExchange, because you could just do compare and then exchange.
You're not treating the acquire phase correctly:
...the release phase still feels off without a Promise, but maybe somebody else can tackle that :D
EDIT: think I fixed it, untested though
ah! nice
I assume this is at the browser "session" level? In another browser, or private-session, would the locks be distinct?
Hi guys, what could be the use case for such an API?
This was the use case[0] that brought it to my attention:
- single page application using access and refresh tokens to interact with an API
- refresh is one time use, as recommended by the OAuth security best practices[1]
- SPA is open in more than one tab
- two tabs try to refresh the token at the same time, second one fails because refresh token is used up
0: https://bsky.app/profile/ambarvm.bsky.social/post/3lakznzipt...
1: https://datatracker.ietf.org/doc/html/draft-ietf-oauth-secur...
Thanks for this. I used localStorage for the same use case a while ago; I'll try to go back and update it.
I guess I don't actually get the point, because if I am locking a resource in one tab so that other tabs can't use that resource... how is that not going to lead to behavior that a user would think was broken.
Two tabs try to refresh the token at the same time, user opened tab 2, tab 2 can't refresh because things locked in tab 1 - user thinks app is broken? Isn't that the way it would happen. I guess you can detect it is locked in another tab though so you could give the user a warning about this?
I guess I am missing something about the scenario..
It's not locking the UI refresh, it is only locking the access token refresh.
Let's play it out.
- User authenticates, gets an access token A1 and a refresh token R1. R1 is one time use only.
- A1 and R1 are stored as cookies (secure and httponly to prevent XSS).
- A1 is used to access multiple APIs, but is only good for 5 minutes.
- The SPA opens up a new tab for whatever reason, so the browser is making requests to the APIs from both tabs (T1 and T2)
- 5 minutes passes, and T1 tries to call an API. The API call fails because A1 is not valid. T1 then makes a request to the OAuth server with R1, which returns a new access token (A2), a new refresh token (R2) and invalidates R1.
- 0.1 seconds later, T2 tries to call an API with A1, and also discovers it needs to get a new access token. It tries to use R1, but R1 is invalid because it has already been used
With this locking API, T1 could lock access to R1 when it was about to use it. T2 would then see the lock request fail, and not try to use R1. After R2 has been stored, T2 could use R2 (or T2 could use A2).
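A rough sketch of that flow with the Web Locks API; tokenIsFresh() and refreshTokens() stand in for however the app checks token freshness and performs the refresh:
  async function ensureFreshToken() {
    await navigator.locks.request("token-refresh", async () => {
      // if another tab refreshed while we were queued, A2/R2 are already stored
      if (await tokenIsFresh()) return;
      await refreshTokens();   // spends R1 exactly once, stores A2 and R2
    });
  }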
thanks, this was the part that clarified for me "After R2 has been stored, T2 could use R2 (or T2 could use A2)."
Maybe it's old-person-yelling-at-cloud vibes, but I'm already annoyed enough at the majority of SPAs for breaking so many useful default browser features, such as stable links, navigation, multi-tab functionality, open in new tab, etc. My gut feeling tells me this is only gonna make them even more of a cancer.
Oh yeah, I agree. I personally like the rails approach of light JS for necessary interaction, but keeping all logic serverside.
The URL is the command line and all that.
But like you, I sometimes feel like an old person yelling at the cloud.
If a crypto miner infects a site that you have several tabs open of, the miners won't all fight for CPU at the same time.
Besides what sibling has written, write access to IndexedDB also often needs to be guarded by a mutex https://gist.github.com/pesterhazy/4de96193af89a6dd5ce682ce2...
I don't get this complaint, because IndexedDB supports transactions. Why are these insufficient?
IndexedDB transactions give guarantees about atomicity but not about isolation. Note that the word "isolation" doesn't appear in the spec https://w3c.github.io/IndexedDB/
share a websocket/sse connection via a worker
Click the link, read the words. There's a list in it.
I've actually used this once! I built a slightly overengineered HTTP client where it would use a Reader Writer lock on the auth token. That way, if a token refresh request was taking place, all new requests would wait for it to finish writing, before being sent
The only reason I know and have used this api is because it helps prevent the tab from going to sleep if you hold on to the lock: https://techcommunity.microsoft.com/discussions/edgeinsidera...
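The trick described there boils down to holding a lock with a promise that never settles; whether that actually keeps a tab from being throttled or put to sleep is browser-specific behavior, not something the spec guarantees:
  // hold "keep-alive" for the lifetime of the page; the promise never resolves,
  // so the lock is only released when the tab/agent goes away
  navigator.locks.request("keep-alive", () => new Promise(() => {}));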
I'm not sure that this API is a good thing. It's already a pain in the ass when web apps only work in a single tab and kind of log you out of other tabs.
Also, when you have a lot of tabs, I can easily see web pages breaking in strange ways.
A lease is usually a better choice than a lock.
A lease has a time limit. A lock does not. Clearing stale locks manually is a PITA. I still assume, being a Web-scale contract, the lock would be automatically cleared if the browser is restarted or something. But honestly a lease makes users do better design from the get-go.
I might agree in other contexts, but not with the use case here and how the API is designed.
It looks like the only potential for a "stale lock" is if somehow the async function passed to the request method hangs forever. But in web contexts I think that would be extremely unlikely for everyday use cases (e.g. most of the time I could imagine the async callback making remote calls using fetch, but normally that fetch has its own timeout). In contexts where it could happen, I'd argue it's better to make the caller explicitly handle that case (e.g. by using `steal`) than potentially leave things in an indeterminate state because a lease timeout expired.
There is no real safe way to use lock hold timeouts. While a waiter can timeout and possibly handle failing to acquire the lock, there's no generic safe way to steal the lock from the holder after a timeout since the holder may still be accessing the protected resources/have left the resources in an inconsistent state. Adding a wait timeout which generates telemetry on a long wait may be useful for helping catch failures in production, but seizing the lock is almost always the wrong way to go about this.
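For the "wait timeout plus telemetry" idea, request() already accepts an AbortSignal option, so a sketch might look like this (the 5-second budget and the logging are placeholders):
  try {
    await navigator.locks.request(
      "my_resource",
      { signal: AbortSignal.timeout(5000) },   // stop *waiting* after 5s; never steals
      async (lock) => {
        // critical section; runs only if the lock was acquired in time
      },
    );
  } catch (err) {
    // a rejection here means we gave up waiting (or the callback threw);
    // record it rather than seizing the lock
    console.warn("lock wait on my_resource exceeded 5s", err);
  }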
Time limits are a recipe for non-determinism. Non-determinism is generally not what you want.
Put another way, combining side effects and timers invariably causes race conditions.
Is there a native browser API for leases?
Where did the steal method come from? Haven’t done much locking, but I haven’t ever seen lock stealing before
"steal" referres to "acquire unsafely"
Never seen it before for locks, but I guess it's to deal with some bugs caused from some other code running from that origin.
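Concretely, these are the two non-default acquisition options: `ifAvailable` skips waiting, while `steal` forcibly takes the lock and breaks the current holder, as discussed above (the lock name here is a placeholder):
  // ifAvailable: don't queue; the callback receives null if someone else holds it
  await navigator.locks.request("res", { ifAvailable: true }, async (lock) => {
    if (!lock) return;   // lock busy elsewhere; skip the work
    // ... do the work while holding the lock
  });
  // steal: take the lock immediately; the previous holder's request() promise rejects
  await navigator.locks.request("res", { steal: true }, async (lock) => {
    // ... recovery work; use sparingly
  });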
Sometimes I wonder why we even bothered with javascript and didn't just use POSIX.
Using global names for this seems like a bad idea.
What's the problem? The whole concept is that it's locking a resource super-globally, not only in the current tab, but across tabs.
Ah I misread. The intro made it sound like this was for locking within a tab, but it's within a origin.
How many layers of namespacing do you want in between? 1? 2? 10? Perhaps we should go the SNMP route and start with 1.3.6.1.4.1?
Using global for bad ideas is kind of par for the course for all things JavaScript.
A lock API in a single-threaded JS VM. Is a true/false variable really too hard these days, such that we need an API?
Expect deadlocks in web applications now? I wouldn't necessarily trust a JS programmer with a lock, sorry. They are hard enough in C++ or other languages that generally require a lot more discipline.
I wish they adopted more of an actor model to emulate concurrency.
Or at least made it easier. I think introducing locks is a mistake for the browser.
> deadlocks only affect the locks themselves and code depending on them; the browser, other tabs, and other script in the page is not affected.
Also, you don't need to use this API to lock up your web app.
I have a hard time picturing how an application can be considered anything other than completely broken once a couple threads/workers have deadlocked, so I don't know what any of that quote means. Yeah, I get that browsers isolate tabs and that the damage is contained.
You seem to be expecting these locks to block a thread, but they do not. A "deadlock" with these locks is simply a chunk of heap space holding a bunch of promises that will never resolve, occupying a few slots in the global event loop's select statement.
Well yes, but some part of the code that was supposed to run never runs because of a deadlock.