Strongly disagree. At the level of io_uring (syscalls/syscall orchestration), it is expected that available tools are prone to mis-use, and that libraries/higher layers will provide abstractions around them to mitigate that risk.
This isn't like the Rust-vs-C argument, where the claim is that you should prefer the option of two equivalently-capable solutions in a space that doesn't allow mis-use.
This is more like assembly language, or the fact that memory inside kernel rings is flat and vulnerable: those are low-level tools to facilitate low-level goals with a high risk of mis-use, and the appropriate mitigation for that risk is to build higher-level tools that intercept/prevent that mis-use.
That is all well and true, and the vulnerabilities are getting fixed, but that is off-topic to the posted article.
The article is more about the Rust io_uring async implementation breaking an assumption that Rust's async model makes, namely that a Future can only get modified when it's poll()-ed.
I'm guessing that assumption came from an expectation that all async runtimes live in userland, and this newfangled kernel-backed runtime does things on its own inside the kernel, thus breaking the original assumption.
I mean, it's only a problem if your design is based on the Future having exclusive ownership of its read buffer, but io_uring assumes a kind of shared ownership. The "obvious" solution is to encode that ownership model in the design, which implies some kind of cancellation mechanism. C and C++ programs have to do that too.
"So I think this is the solution we should all adopt and move forward with: io-uring controls the buffers, the fastest interfaces on io-uring are the buffered interfaces, the unbuffered interfaces make an extra copy. We can stop being mired in trying to force the language to do something impossible. But there are still many many interesting questions ahead."
Resource leaks have nothing to do with safety. That's true both for memory safety and i/o safety. See for yourself with `mem::forget(File::open(...)?)`
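A minimal illustration of that point, assuming only that some readable file exists at the hard-coded path: this is entirely safe Rust, and the file descriptor is simply never closed.

    use std::fs::File;
    use std::mem;

    fn main() -> std::io::Result<()> {
        // Open a file and deliberately leak the handle. No `unsafe` involved:
        // the File's destructor never runs, so the OS file descriptor stays
        // open until the process exits.
        let f = File::open("/etc/hosts")?;
        mem::forget(f);
        Ok(())
    }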
Rust’s standard library almost provides I/O safety, a guarantee that if one part of a program holds a raw handle privately, other parts cannot access it.
According to the article:
I/O Safety: Ensuring that accepted TCP streams are properly closed without leaking connections.
These are not the same definition.
As I've mentioned several times [1], in Rust, the word "safety" is being abused to the point of causing confusion, and it looks like this is another instance of that.
Yeah, "safe Rust" is officially allowed to leak memory and other resources.
- The easiest way to do this is mem::forget, a safe function which exists to leak memory deliberately.
- The most common real way to leak memory is to create a loop using the Rc<T> or Arc<T> reference count types. I've never seen this in any of my company's Rust code, but we don't write our own cyclic data structures, either. This is either a big deal to you, or a complete non-issue, depending on your program architecture.
Basically, "safe Rust" aims to protect you from memory corruption, undefined behavior, and data races between threads. It does not protect you from leaks, other kinds of race conditions, or (obviously) logic bugs.
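For the reference-count case, a minimal sketch of the classic cycle (the Node type is just illustrative): two Rc values point at each other, the counts never reach zero, and the memory is never freed, all without `unsafe`.

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Node {
        // Each node optionally points at another node.
        next: RefCell<Option<Rc<Node>>>,
    }

    fn main() {
        let a = Rc::new(Node { next: RefCell::new(None) });
        let b = Rc::new(Node { next: RefCell::new(None) });
        // Build a cycle: a -> b -> a. Both reference counts are now 2.
        *a.next.borrow_mut() = Some(Rc::clone(&b));
        *b.next.borrow_mut() = Some(Rc::clone(&a));
        // When `a` and `b` go out of scope, each count drops to 1, never 0,
        // so neither Node is dropped: a leak in 100% safe code.
    }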
Where do you conclude "Rust promises I/O safety in the future"? An RFC is not a promise to do anything... it may represent a desire... a possibility, but you taking that leap and implying a promise is a flagrant misrepresentation.
Now let's take "the future" part... you seem to be impugning Async Rust for something it's not even purported to do in the present. What's the point of this?
You found a bug in monoio it seems... I don't see the argument you've presented as supporting the premise that "Async Rust is not safe".
That going to the sleep branch of the select should cancel the accept? Will cancelling the accept terminate any already-accepted connections? Shouldn't it be delayed instead?
Shouldn't newly accepted connections be dropped only if the listener is dropped, rather than when the listener.accept() future is dropped? If listener.accept() is dropped, the queue should be with the listener object, and thus the event should still be available in that queue on the next listener.accept().
This seems more like a bug with the runtime than anything.
The ideal scenario is something like the cancelable I/O provided by monoio; I wrote an example of this in the blog: https://github.com/ethe/io-uring-is-not-cancellation-safe/bl... . However, it has a lot of limitations, and there is no perfect way to do this at the moment.
That's an implementation detail. What's the user-facing behavior? What should happen with mistakenly-accepted connections?
Even the blog admits that cancellation of any kind is racing with the kernel, which might complete the accept request anyway. Even if you call `.cancel()`, the queue might have an accepted connection FD in it. Even if it doesn't, it might do by the time the kernel acknowledges the cancellation.
So you now have a mistakenly-accepted connection. What do you do with it? Drop it? That seems like the wrong answer, whoever writes a loop like the one in the blog will definitely not expect some of the connections mysteriously being dropped.
Okay, looks like withoutboats gave the answer to this in another thread [1], and that seems like the right answer. The accept() future being dropped must not result in any cancellation of any kind, unless the listener itself is also dropped.
This is an implementation issue with monoio that just needs more polishing. And given how hard io_uring is to get right, monoio should be given that time before being chosen to be used in production for anything.
I don't think a connection whose accept completes after cancellation fails is "mistakenly-accepted"; it should be handled in the normal way. But I admit that there are lots of people who don't agree with that.
This aspect of io_uring does affect a lot of surface APIs, as I have experienced at work. At least for me I didn't have to worry much about borrowing though.
Hmm. Does it? Python's futures have an explicit .cancel() operation. And the C io_uring usage I'm looking at knows to cancel events too…
It's really that Rust might've made a poor choice here, as the article points out:
> Async Rust makes a few core assumptions about futures:
> 1. The state of futures only change when they are polled.
> 2. Futures are implicitly cancellable by simply never polling them again.
But at the same time, maybe this is just that Rust's Futures need to be used different here, in conjunction with a separate mechanism to manage the I/O operation that knows things need to be cancelled?
That part of the article is kinda irrelevant in my opinion. Futures do require polling to move forward, but polling can be forced by an external signal (otherwise the whole future model wouldn't work!). Therefore io_uring can be safely implemented by having central worker threads which then signal outstanding futures; that was how I ended up doing it at work as well. So the article actually seems to ask whether such an out-of-band mechanism can be entirely avoided or not.
The sibling comment to yours points out that cancelling Futures is dropping Futures. What's your experience? Do you think that would work to prevent needing the out-of-band mechanism?
That's backwards... Rust's way to do cancellation is simply to drop the future (i.e. let it be deallocated). There is one big caveat here though, namely the lack of async drop as others pointed out.
In current Rust, io_uring-like interfaces can be safely implemented with an additional layer of abstraction. Some io_uring operations can be ongoing when it looks fine to borrow the buffer, sure. Your API just has to ensure that it is not possible to borrow until all operations are finished! Maybe it can error, or you can require something like `buf.borrow().await`. This explicit borrowing is not an alien concept in Rust (cf. RefCell etc.) and is probably the best design at the moment, but it does need dynamic bookkeeping, which some may want to eliminate.
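A rough sketch of the API shape being described, with invented names (UringBuffer, try_borrow) rather than any real crate's interface: the runtime flips a flag while the kernel owns the bytes, and user code cannot borrow until the flag clears.

    use std::cell::Cell;

    // Hypothetical owned buffer whose bytes may be shared with the kernel.
    pub struct UringBuffer {
        bytes: Vec<u8>,
        in_flight: Cell<bool>, // set while an io_uring op references `bytes`
    }

    impl UringBuffer {
        pub fn new(len: usize) -> Self {
            Self { bytes: vec![0; len], in_flight: Cell::new(false) }
        }

        // Called by the runtime when it submits an SQE pointing at this buffer.
        pub fn mark_submitted(&self) {
            self.in_flight.set(true);
        }

        // Called by the runtime when the matching CQE has been reaped.
        pub fn mark_completed(&self) {
            self.in_flight.set(false);
        }

        // User-facing borrow: refuse while the kernel may still write to the bytes.
        // An async variant could wait for completion instead of returning None.
        pub fn try_borrow(&self) -> Option<&[u8]> {
            if self.in_flight.get() { None } else { Some(&self.bytes) }
        }
    }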
> First of all, we are fortunate that the I/O safety problem can be addressed now, which safe Rust aims to ensure this in the future. Rust provides the Drop trait to define custom behavior when a value is cleaned up. Thus, we can do something like this...
> We just need to encourage async runtimes to implement this fix.
This likely needs async drop if you need to perform a follow-up call to cancel the outstanding tasks or close the open sockets. Async Drop is currently experimental.
Ah, thanks, that makes sense, but then I don't understand how this isn't just a bug in these Rust runtimes. As in: the drop codepath on the future needs to not only submit the cancellation SQE into io_uring, it also needs to still process CQEs from the original request that pop up before the CQE for the cancellation…
NB: I have only done a little bit of Rust, but am hoping to move there in the future — but I am working on C code interfacing io_uring… I will say doing this correctly does in fact require a bit of brainpower when writing that code.
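In sketch form, with invented bookkeeping rather than any particular runtime's internals, that drop path might look like this: the future's table entry is marked rather than freed, so whichever CQE shows up later (the original or the cancellation) still has somewhere to land.

    use std::cell::RefCell;
    use std::collections::HashMap;
    use std::rc::Rc;

    // One entry per submitted SQE, keyed by the SQE's user_data.
    enum OpState {
        Waiting,   // a live future is waiting on this operation
        Cancelled, // the future was dropped; the CQE's resources must be cleaned up
    }

    type OpTable = Rc<RefCell<HashMap<u64, OpState>>>;

    struct AcceptFuture {
        user_data: u64,
        table: OpTable,
    }

    impl Drop for AcceptFuture {
        fn drop(&mut self) {
            // Do NOT remove the entry: the kernel may still deliver a CQE for it.
            // Mark it so the reactor closes the accepted fd instead of waking a
            // task that no longer exists. An ASYNC_CANCEL SQE can also be
            // submitted here, but only as a best-effort optimization.
            self.table
                .borrow_mut()
                .insert(self.user_data, OpState::Cancelled);
        }
    }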
I am not well versed in the async things as of late, but one complication is that the drop implementation is a blocking one. This could easily lead to deadlocks. Or the drop implementation could spawn an async task to clean up after itself later.
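One common workaround, sketched here on the assumption that Tokio is the runtime in use: Drop stays synchronous and just hands the async part of the cleanup to the executor, if one is running.

    struct Connection {
        // Hypothetical handle whose teardown involves async work.
        id: u64,
    }

    impl Drop for Connection {
        fn drop(&mut self) {
            let id = self.id;
            // Drop must not block, so defer the async teardown to the runtime.
            if let Ok(handle) = tokio::runtime::Handle::try_current() {
                handle.spawn(async move {
                    // e.g. send a goodbye frame, flush buffers, etc.
                    println!("cleaning up connection {id} in the background");
                });
            }
            // If no runtime is available, fall back to best-effort sync cleanup.
        }
    }

    #[tokio::main]
    async fn main() {
        {
            let _c = Connection { id: 7 };
        } // dropped here; the cleanup task is spawned onto the runtime
        tokio::time::sleep(std::time::Duration::from_millis(10)).await;
    }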
I have tried to learn Rust, and the borrow checker is no problem, but I can't get lifetimes; and then Rc, Box, Arc, and pinning, along with async Rust, are a whole other story.
Having programmed in raw C, I know Rust is more like TypeScript: once you try it after writing JavaScript, you can't go back to plain JavaScript for anything serious. You would want to have some guard rails rather than no guard rails.
Try embedded Rust: get an RP2040 board and fool around with that. It'll make a lot more sense to you if the parts you don't understand are encapsulation types like Rc, Box, and Arc, because those aren't really used in embedded Rust!
Since io_uring has similar semantics to just about every hardware device ever (e.g. NVMe submission and completion queues), are there any implications of this for Rust in the kernel? Or in SPDK and other user-level I/O frameworks?
Note that I don't know a lot about Rust, and I'm not familiar with the rules for Rust in the kernel, so it's possible that it's either not a problem or the problematic usages violate the kernel coding rules. (although in the latter case it doesn't help with non-kernel frameworks like SPDK)
I think async Rust is far from entering the kernel.
Edit: I realize my comment might come off as a bit snarky or uninformative to someone who isn't familiar with Rust. That was not the intention. "Async Rust" is a particular framework for abstracting over various non-blocking IO operations (and more). It allows terse code to be written using a few convenient keywords that cause a certain state machine (consisting of ordinary Rust code adhering to certain rules) to be generated, which in turn can be coupled with an "async runtime" (of which there are many) to perform the IO actions described by the code. The rules that govern the code generated by these convenient keywords, i.e. the code that the async runtimes execute, are apparently not a great fit for io_uring and the like.
However, I don't think anyone is proposing writing such code inside the kernel, nor that any of the async runtimes actually make sense in a kernel setting. The issues in this article don't exist when there is no async Rust code. Asynchronous operations can, of course, still be performed, but one has to manage that without the convenient scaffolding afforded by async Rust and the runtimes.
While it's true that the "state" of a future is only mutated in the poll() implementation, it's up to the author of the future implementation to clone/store the Waker provided in the context argument and call it to signal to the executor that poll() should be called again, which I believe is how one should handle this case.
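A minimal hand-written future along those lines (a toy flag future, nothing io_uring-specific): poll() stashes the Waker, and whoever completes the work calls wake() so the executor knows to poll again.

    use std::future::Future;
    use std::pin::Pin;
    use std::sync::{Arc, Mutex};
    use std::task::{Context, Poll, Waker};

    #[derive(Default)]
    struct Shared {
        done: bool,
        waker: Option<Waker>,
    }

    struct FlagFuture {
        shared: Arc<Mutex<Shared>>,
    }

    impl Future for FlagFuture {
        type Output = ();

        fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
            let mut shared = self.shared.lock().unwrap();
            if shared.done {
                Poll::Ready(())
            } else {
                // Remember how to wake this task; whoever finishes the work
                // (another thread, a reactor, ...) calls waker.wake().
                shared.waker = Some(cx.waker().clone());
                Poll::Pending
            }
        }
    }

    // Called from "somewhere else" when the event actually happens.
    fn complete(shared: &Arc<Mutex<Shared>>) {
        let mut shared = shared.lock().unwrap();
        shared.done = true;
        if let Some(waker) = shared.waker.take() {
            waker.wake();
        }
    }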
We so need a way to express cancellation safety other than documentation. This is not just an io_uring problem; a lot of futures in tokio are not cancel safe. Is there some RFC on the subject?
There are async libraries like glommio, which I’m using for a new project, that avoid this I think, but they require you to factor things a little differently from tokio.
Maybe cancellation itself is problematic. There’s a reason it was dropped from threading APIs and AFAIK there is no way to externally cancel a goroutine. Goroutines are like async tasks with all the details hidden from you as it’s a higher level language.
I don't think that cancellation is inherently problematic, but it needs to be cooperative. One-sided cancellation of threads (and probably goroutines) can never work.
Cooperative cancellation can be implemented in languages that mark their suspension points explicitly in their coroutines, like Rust, Python and C++.
I think Python's asyncio models cancellation fairly well with asyncio.CancelledError being raised from the suspension points, although you need to have some discipline to use async context managers or try/finally, and to wait for cancelled tasks at appropriate places. But you can write your coroutines with the expectation that they eventually make forward progress or exit normally (via return or exception).
It looks like Rust's cancellation model is far more blunt, if you are just allowed to drop the coroutine.
> It looks like Rust's cancellation model is far more blunt, if you are just allowed to drop the coroutine.
You can only drop it if you own it (and nobody has borrowed it), which means you can only drop it at an `await` point.
This effectively means you need to use RAII guard objects like in C++ in async code if you want to guarantee cleanup of external resources. But it's otherwise completely well behaved with epoll-based systems.
I find that a bigger issue in my async Rust code is using Tokio-style async "streams", where a cancelled sender looks exactly like a clean "end of stream". In this case, I use something like:
    enum StreamValue<T> {
        Value(T),
        End,
    }
If I don't see StreamValue::End before the stream closes, then I assume the sender failed somehow and treat it as a broken stream (sort of like a Unix EPIPE error).
This can obviously be wrapped. But any wrapper still requires the sender to explicitly close the stream when done, and not via an implicit Drop.
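Roughly what both ends look like under that convention, sketched with a tokio mpsc channel (the channel type itself is incidental): the sender must send End explicitly, and a channel that closes without one is treated as broken.

    use tokio::sync::mpsc;

    // Same shape as the enum above.
    enum StreamValue<T> {
        Value(T),
        End,
    }

    async fn producer(tx: mpsc::Sender<StreamValue<u32>>) {
        for i in 0..3 {
            if tx.send(StreamValue::Value(i)).await.is_err() {
                return; // receiver went away
            }
        }
        // Explicit clean shutdown; just dropping `tx` would look like a failure.
        let _ = tx.send(StreamValue::End).await;
    }

    async fn consumer(mut rx: mpsc::Receiver<StreamValue<u32>>) -> Result<Vec<u32>, &'static str> {
        let mut out = Vec::new();
        while let Some(msg) = rx.recv().await {
            match msg {
                StreamValue::Value(v) => out.push(v),
                StreamValue::End => return Ok(out), // clean end of stream
            }
        }
        // Channel closed without an explicit End: treat it like EPIPE.
        Err("broken stream")
    }

    #[tokio::main]
    async fn main() {
        let (tx, rx) = mpsc::channel(8);
        tokio::spawn(producer(tx));
        println!("{:?}", consumer(rx).await);
    }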
> This effectively means you need to use RAII guard objects like in C++ in async code if you want to guarantee cleanup of external resources. But it's otherwise completely well behaved with epoll-based systems.
Which limits cleanup after cancellation to be synchronous, doesn't it? I often use asynchronous cleanup logic in Python (which is the whole premise of `async with`).
Correct. Well, you can dump it into a fast sync buffer and let a background cleanup process do any async cleanup.
Sync Rust is lovely, especially with a bit of practice, and doubly so if you already care about how things are stored in memory. (And caring how things are stored is how you get speed.)
Async Rust is manageable. There's more learning curve, and you're more likely to hit an odd corner case where you need to pair for 30 minutes with the team's Rust expert.
The majority of recent Rust networking libraries are async, which is usually OK. Especially if you tend to keep your code simple anyway. But there are edge cases where it really helps to have access to Rust experience—we hit one yesterday working on some HTTP retry code, where we needed to be careful how we passed values into an async retriable block.
I don't think it's possible to get away with fundamentally no cancellation support, there are enough edge cases that need it even if most applications don't have such edge cases.
There are certain counterintuitive things that you have to learn if you want to be a "systems engineer", in a general sense, and this whole async thing has been one of the clearest lessons to me over the years of how seemingly identical things sometimes can not be abstracted over.
Here by "async" I don't so much mean async/await versus threads, but these kernel-level event interfaces regardless of which abstraction a programming language lays on top of them.
At the 30,000 foot view, all the async abstractions are basically the same, right? You just tell the kernel "I want to know about these things, wake me up when they happen." Surely the exact way in which they happen is not something so fundamental that you couldn't wrap an abstraction around all of them, right?
And to some extent you can, but the result is generally so lowest-common-denominator as to appeal to nobody.
Instead, every major change in how we handle async has essentially obsoleted the entire programming stack based on the previous ones. Changing from select to epoll was not just a matter of switching out the fundamental primitive, it tended to cascade up almost the entire stack. Huge swathes of code had to be rewritten to accommodate it, not just the core where you could do a bit of work and "just" swap out epoll for select.
Now we're doing it again with io_uring. You can't "just" swap out your epoll for io_uring and go zoomier. It cascades quite a ways up the stack. It turns out the guarantees that these async handlers provide are very different and very difficult to abstract. I've seen people discuss how to bring io_uring to Go and the answer seems to basically be "it breaks so much that it is questionable if it is practically possible". An ongoing discussion on an Erlang forum seems to imply it's not easy there (https://erlangforums.com/t/erlang-io-uring-support/765); I'd bet it reaches up "less far" into the stack but it's still a huge change to BEAM, not "just" swapping out the way async events come in. I'm sure many other similar discussions are happening everywhere with regards to how to bring io_uring into existing code, both runtimes and user-level code.
This does not mean the problem is unsolvable by any means. This is not a complaint, or a pronouncement of doom, or an exhortation to panic, or anything like that. We did indeed collectively switch from select to epoll. We will collectively switch to io_uring eventually. Rust will certainly be made to work with it. I am less certain about the ability of shared libraries to be efficiently and easily written that work in both environments, though; if you lowest-common-denominator enough to work in both you're probably taking on the very disadvantages of epoll in the first place. But programmers are clever and have a lot of motivation here. I'm sure interesting solutions will emerge.
I'm just highlighting that as you grow in your programming skill and your software architecture abilities and general system engineering, this provides a very interesting window into how abstractions can not just leak a little, but leak a lot, a long ways up the stack, much farther than your intuition may suggest. Even as I am typing this, my own intuition is still telling me "Oh, how hard can this really be?" And the answer my eyes and my experience give my intuition is, "Very! Even if I can't tell you every last reason why in exhaustive detail, the evidence is clear!" If it were "just" a matter of switching, as easy as it feels like it ought to be, we'd all already be switched. But we're not, because it isn't.
I appreciate the insight in this comment! I see your problem, and I offer an answer (I daren't call it a solution): there is no surefire way to make an interface/abstraction withstand the test of time. It just doesn't happen, even across just a few paradigm shifts, at least not without arbitrary costs to performance, observability/debuggability, ease of use, and so on. The microkernel (in the spirit of Liedtke)/exokernel philosophy tells us to focus on providing minimal, orthogonal mechanisms that just barely allow implementing the "other stuff". But unless a monolithic system is being built for one purpose, "the other stuff" isn't meaningfully different from a microkernel; it has different "hardware" but must itself impose minimally on what is to be built above it. We build layers of components with rich interactions of meaningful abstractions, building a web of dependencies and capabilities. There is no accidental complexity here in this ideal model; to switch paradigms, one must discard exactly the set of components and layers that are incompatible with the new paradigm.
Consider Linux async mechanisms. They are provided by a monolithic kernel that dictates massive swathes of what worldview a program is developed in. When select was found lacking, it took time for epoll to arrive. Then io_uring took its sweet time. When the kernel is lacking, the kernel must change, and that is painful. Now consider a hypothetical microkernel/exokernel where a program just gets bare asynchronous notifications about hardware and from other programs. Async abstractions must be built on top, in services and libraries, to make programming feasible. Say the analogous epoll library is found lacking. Someone must uproot it and perhaps lower layers and build an io_uring library instead. I will not say this is always less pain that before, although it is decidedly not the same as changing a kernel. But perhaps it is less painful in most cases. I do not think it is ever more painful. This is the essential pain brought about by stability and change.
My hot take is that the root of this issue is that the destructor side of RAII in general is a bad idea. That is, registering custom code in destructors and running them invisibly, implicitly, maybe sometimes but only if you're polite, is not and never was a good pattern.
This pattern causes issues all over the place: in C++ with headaches around destruction failure and exceptions; in C++ with confusing semantics re: destruction of incompletely-initialized things; in Rust with "async drop"; in Rust (and all equivalent APIs) in situations like the one in this article, wherein failure to remember to clean up resources on IO multiplexer cancellation causes trouble; in Java and other GC-ful languages where custom destructors create confusion and bugs around when (if ever) and in the presence of what future program state destruction code actually runs.
Ironically, two of my least favorite programming languages are examples of ways to mitigate this issue: Golang and JavaScript runtimes:
Golang provides "defer", which, when promoted widely enough as an idiom, makes destructor semantics explicit and provides simple and consistent error semantics. "defer" doesn't actually solve the problem of leaks/partial state being left around, but gives people an obvious option to solve it themselves by hand.
JavaScript runtimes go to a similar extreme: no custom destructors, and a stdlib/runtime so restrictive and thick (vis-a-vis IO primitives like sockets and weird in-memory states) that it's hard for users to even get into sticky situations related to auto-destruction.
Zig also does a decent job here, but only with memory allocations/allocators (which are ironically one of the few resource types that can be handled automatically in most cases).
I feel like Rust could have been the definitive solution to RAII-destruction-related issues, but chose instead to double down on the C++ approach to everyone's detriment. Specifically, because Rust has so much compile-time metadata attached to values in the program (mutability-or-not, unsafety-or-not, movability/copyability/etc.), I often imagine a path-not-taken in which automatic destruction (and custom automatic destructor code) was only allowed for types and destructors that provably interacted only with in-user-memory state. Things referencing other state could be detected at compile time and required to deal with that state in explicit, non-automatic destructor code (think Python context-managers or drop handles requiring an explicit ".execute()" call).
I don't think that world would honestly be too different from the one we live in. The rust runtime wouldn't have to get much thicker--we'd have to tag data returned from syscalls that don't imply the existence of cleanup-required state (e.g. select(2), and allocator calls--since we could still automatically run destructors that only interact with cleanup-safe user-memory-only values), and untagged data (whether from e.g. fopen(2) or an unsafe/opaque FFI call or asm! block) would require explicit manual destruction.
This wouldn't solve all problems. Memory leaks would still be possible. Automatic memory-only destructors would still risk lockups due to e.g. pagefaults/CoW dirtying or infinite loops, and could still crash. But it would "head off at the pass" tons of issues--not just the one in the article:
Side-effectful functions would become much more explicit (and not as easily concealable with if-error-panic-internally); library authors would be encouraged to separate out external-state-containing structs from user-memory-state-containing ones; destructor errors would become synonymous with specific programmer errors related to in-memory twiddling (e.g. out of bounds accesses) rather than failures to account for every possible state of an external resource, and as a result automatic destructor errors unconditionally aborting the program would become less contentious; the surface area for challenges like "async drop" would be massively reduced or sidestepped entirely by removing the need for asynchronous destructors; destructor-related crash information would be easier to obtain even in non-unwinding environments...
Maybe I'm wrong and this would require way too much manual work on the part of users coding to APIs requiring explicit destructor calls.
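For what it's worth, the explicit-destructor shape gestured at above can be approximated today as an API convention (all names invented): the real teardown is a method you must call, and Drop only checks that you did.

    struct RemoteHandle {
        closed: bool,
        // imagine: a socket, a file descriptor, a remote session id, ...
    }

    impl RemoteHandle {
        fn new() -> Self {
            Self { closed: false }
        }

        // The "real" destructor: must be called explicitly, and may be async.
        async fn close(mut self) {
            // ... release the external resource here ...
            self.closed = true;
        }
    }

    impl Drop for RemoteHandle {
        fn drop(&mut self) {
            // Automatic destruction only handles in-memory state; anything
            // touching external state should have gone through close().
            if !self.closed {
                eprintln!("RemoteHandle dropped without close(); resource may leak");
            }
        }
    }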
I think Austral and Vale's linear typing is a good start, although it would probably have to be opt-in in practice. This goes along with explicit, manual destructors and alleviates issues like async drop. Even with automatic destructors, they can have more visibility and customizability. Exceptions are a can of worms and need to be redesigned (but not removed). I think automatic destruction doesn't have to mean oh-wait-what-do-you-mean-it-unwound-and-ran-a-destructor-uh-oh-double-exception-abort and similar very weird cases. The concept should have its own scope and purpose, same with exceptions.
The timeout is only a stand-in for the generic need to be able to cancel an acceptance loop. You could just as well want to cancel accept() when SIGINT/SIGTERM/etc is received, or when recreating the server socket, e.g. in response to a configuration change. Most server processes have a need like this.
Yet another example of async Rust being a source of unexpected ways to shoot yourself in the foot... Async advocates can argue as long as they want about "you're holding it wrong", but to me it sounds like people arguing that you can safely use C/C++ just by being "careful".
Async has its uses, but there should also be a way to ensure that a Rust stack does not use async at all, like there is for unsafe. Most codebases could do without the added complexity. There will be better ways to do concurrency in the future (hehe)
Agree. If people want to delude themselves that async is useful, that's fine. But don't inflict it on the rest of us by viral propagation throughout the dependency ecosystem.
I claim that async/await is far more basic (/fundamental/simple, not necessarily easy) than most acknowledge, but it should indeed be more composable with sync code. It is a means of interacting with asynchronous phenomena, which underlie the OS-hardware connection. The composability is necessary because no one is going to write a massive state machine for their entire program.
This is nothing to do with async Rust; monoio (and possibly other io-uring libraries) are just exposing a flawed API. My ringbahn library written in 2019 correctly handled this case by having a dropped accept future register a cancellation callback to be executed when the accept completes.
https://github.com/ringbahn/ringbahn
Doesn't this close the incoming connection, rather than allowing another pending accept to receive it?
You're right. Looking at my actual code, instead I stored the accept to be yielded next time you call accept and only cancel an accept call if you drop the entire listener object mid-accept.
The solution proposed in this post doesn't work, though: if the accept completes before the SQE for the cancellation is submitted, the FD will still be leaked. io-uring's async cancellation mechanism is just an optimization opportunity and doesn't synchronize anything, so it can't be relied on for correctness here. My library could have submitted a cancellation when the future drops as such an optimization, but couldn't have relied on it to ensure the accept does not complete.
> if the accept completes before the SQE for the cancellation is submitted, the FD will still be leaked.
If the accept completes before the cancel SQE is submitted, the cancel operation will fail and the runtime will have a chance to poll the CQE in place and close the fd.
Hmm, because the cancel CQE will have a reference to the CQE it was supposed to cancel? Yes, that could work.
> You're right. Looking at my actual code, instead I stored the accept to be yielded next time you call accept and only cancel an accept call if you drop the entire listener object mid-accept.
This is still a suboptimal solution as you've accepted a connection, informing the client side of this, and then killed it rather than never accepting it in the first place. (Worth noting that linux (presumably as an optimisation) accepts connections before you call accept anyway so maybe this entire point is moot and we just have to live with this weird behaviour.)
Now it's true that "never accepting it in the first place" might not be possible with io_uring in some cases, but rather than hiding that under drop, the code should be up front about it and prevent dropping (not currently possible in Rust) in a situation where there might be uncompleted in-flight requests, before you've explicitly made a decision between "oh okay then, let's handle this one last request" and "I don't care, just hang up".
If you want the language to encode a liveness guarantee that you do something meaningful in response to an accept rather than just accept and close you do need linear types. I don't know any mainstream language that encodes that guarantee in its type system, whatever IO mechanism it uses.
This all feels like the abstraction level is wrong. If I think of a server as doing various tasks, one of which is to periodically pull an accepted connection off the listening socket, and I cancel that task, then, sure, the results are awkward at best and possibly wrong.
But I’ve written TCP servers and little frameworks, asynchronously, and this whole model seems wrong. There’s a listening socket, a piece of code that accepts connections, and a backpressure mechanism, and that entire thing operates as a unit. There is no cancellable entity that accepts sockets but doesn’t also own the listening socket.
Or one can look at this another way: after all the abstractions and libraries are peeled back, the example in the OP is setting a timeout and canceling an accept when the timeout fires. That's rather bizarre — surely the actual desired behavior is to keep listening (and accepting when appropriate) and do the other timed work concurrently.
It just so happens that, at the syscall level, a nonblocking (polled, selected, epolled, or even just called at intervals) accept that hasn’t completed is a no-op, so canceling it doesn’t do anything, and the example code works. But it would fail in a threaded, blocking model, it would fail in an inetd-like design, and it fails with io_uring. And I really have trouble seeing linear types as the solution — the whole structure is IMO wrong.
(Okay, maybe a more correct structure would have you “await connection_available()” and then “pop a connection”, and “pop a connection” would not be async. And maybe a linear type system would prevent one from being daft, successfully popping a connection, and then dropping it by accident.)
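A sketch of that structure with invented names (none of this is a real tokio or monoio API): the awaited part only signals readiness and has no side effects to lose, while claiming a connection is a separate synchronous step.

    use std::collections::VecDeque;
    use std::net::TcpStream;

    // Hypothetical listener wrapper: completed accepts are queued on the
    // listener itself, so cancelling a waiter never discards a connection.
    struct Acceptor {
        ready: VecDeque<TcpStream>,
    }

    impl Acceptor {
        // Cancellable: being woken up here has no side effects.
        async fn connection_available(&self) {
            // ... await a readiness notification (epoll) or a queued CQE (io_uring) ...
        }

        // Not async: either you take the connection or you leave it queued.
        fn pop_connection(&mut self) -> Option<TcpStream> {
            self.ready.pop_front()
        }
    }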
> maybe a more correct structure would have you “await connection_available()” and then “pop a connection”
This is the age-old distinction between a proactor and reactor async design. You can normally implement one abstraction on top of the other, but the conversion is sometimes leaky. It happens that the underlying OS "accept" facility is reactive and it doesn't map well to a pure async accept.
I'm not sure I agree. accept() pops from a queue. You can wait-and-pop or you can pop-or-fail. I guess the former fits in a proactor model and the latter fits in a reactor model, but I think that distinction misses the point a bit. Accepting sockets works fine in either model.
It breaks down in a context where you do an accept that can be canceled and you don't handle it intelligently. In a system where cancellation is synchronous enough that values won't just disappear into oblivion, one could arrange for a canceled accept that succeeded to put the accepted socket on a queue associated with the listening socket, fine. But, in general, the operation "wait for a new connection and irreversibly claim it as mine" IMO just shouldn't be done in a cancellable context, regardless of whether it's a "reactor" or a "proactor". The whole "select and, as one option, irrevocably claim a new connection" code path in the OP seems suspect to me, and the fact that it seems to work under epoll doesn't really redeem it in my book.
> Worth noting that linux (presumably as an optimisation) accepts connections before you call accept anyway so maybe this entire point is moot and we just have to live with this weird behaviour.
listen(2) takes a backlog parameter that is the number of queued (which I think means ack'd) but not yet popped (i.e. accept'ed) connections.
I look forward to your blog post running the code provided by the article and rebutting it
And I mean that honestly, no sarcasm at all!
You're replying to the author of ringbahn, who also happens to write some of the best technical blog posts ever to grace this website on these topics.
It is very difficult not to conclude you're either very ignorant, or trolling.
What makes you think I don't know who I'm talking to? All I was trying to do was encourage another blog post. I literally went out of my way to try to sound nice and preemptively dismiss the chance that I was interpreted sarcastically.
https://news.ycombinator.com/newsguidelines.html
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
Alternative explanation: They're a fan!
The rest of the blog discusses how to continue processing operations after cancellation fails, which is blocked by the Rust abstraction. Yes, not everyone (probably very few) defines this as a safety issue; I wrote about this at the end of the blog.
I don't consider Yosh Wuyts's concept of "halt safety" coherent, meaningful or worth engaging with. It's true that linear types would enable the encoding of additional liveness guarantees that Rust's type system as it exists cannot encode, but this doesn't have anything to do with broken io-uring libraries leaking resources.
Continuing to process after cancellation failure is a challenge I face in my actual work, and I agree that "halt-safety" lacks definition and context. I have also learned a lot from and been inspired by your blogs; I appreciate it.
Agree. When I hear “I wish Rust was Haskell” I assume the speaker is engaged in fantasy, not in engineering. The kernel is written in C and seems to be able to manage just fine. Problem is not Rust. Problem is wishing Rust was Haskell.
Well, it's "about" async Rust and io-uring inasmuch as they represent incompatible paradigms.
Rust assumes as part of its model that "state only changes when polled". Which is to say, it's not really "async" at all (none of these libraries are), it's just a framework for suspending in-progress work until it's ready. But "it's ready" is still a synchronous operation.
But io-uring is actually async. Your process memory state is being changed by the kernel at moments that have nothing to do with the instruction being executed by the Rust code.
You are completely incorrect. You're responding to a comment in which I link to a library which handles this correctly, how could you persist in asserting that they are incompatible paradigms? This is the kind of hacker news comment that really frustrates me, it's like you don't care if you are right or wrong.
Rust does not assume that state changes only when polled. Consider a channel primitive. When a message is put into a channel at the send end, the state of that channel changes; the task waiting to receive on that channel is awoken and finds the state already changed when it is polled. io-uring is really no different here.
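A concrete version of that, assuming tokio's oneshot channel for the example: a plain OS thread mutates the channel whenever it likes, and the receiving future merely observes the already-changed state on its next poll.

    use std::thread;
    use std::time::Duration;
    use tokio::sync::oneshot;

    #[tokio::main]
    async fn main() {
        let (tx, rx) = oneshot::channel::<&'static str>();

        // A plain OS thread: it writes into the channel and wakes the waiting
        // task at a moment that has nothing to do with when poll() runs.
        thread::spawn(move || {
            thread::sleep(Duration::from_millis(50));
            let _ = tx.send("hello");
        });

        // By the time this future is polled after being woken, the channel's
        // state has already changed; poll() just observes the result.
        let msg = rx.await.unwrap();
        println!("{msg}");
    }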
> Rust does not assume that state changes only when polled.
I will replace this with a more exact description, thanks.
What you're describing is a synchronous process, though! ("When a message is put..."). That's the disconnect in the linked article. Two different concepts of asynchrony: one has to do with multiple contexts changing state without warning, the other (what you describe) is about suspending thread contexts "until" something happens.
Again you are wrong. A forum full of people who just like to hear themselves talk. I guess it makes you feel good in some way?
With io-uring the kernel writes CQEs into a ring buffer in shared memory and the user program reads them: it's literally just a bounded channel, the same atomic synchronizations, the same algorithm. There is no difference whatsoever.
The io-uring library is responsible for reading CQEs from that ring buffer and then dispatching them to the task that submitted the SQE they correspond to. If that task has cancelled its interest in this syscall, they should instead clean up the resources owned by that CQE. According to this blog post, monoio fails to do so. That's all that's happening here.
> Again you are wrong. A forum full of people who just like to hear themselves talk. I guess it makes you feel good in some way?
I think you're being unduly harsh here. There are a variety of voices here, of various levels of expertise. If someone says something you think is incorrect but it seems that they are speaking in good faith then the best way to handle the situation is to politely provide a correct explanation.
If you really think they are in bad faith then calmly call them out on it and leave the conversation.
The post only talks about "future state"; maybe I didn't point this out clearly. With epoll, the accept syscall and the future's state change happen in the same poll, which is not the case with io_uring. With io_uring, once the accept syscall is complete the future has conceptually already advanced to completion, but in the real Rust program it has not at that moment.
It's true, there's a necessary layer of abstraction with io-uring that doesn't exist with epoll.
With epoll, the reactor just maps FDs to Wakers, and then wakes whatever Waker is waiting on that FD. Then that task does the syscall.
With io-uring, instead the reactor is reading completion events from a queue. It processes those events, sets some state, and then wakes those tasks. Those tasks find the result of the syscall in that state that the reactor set.
This is the difference between readiness (epoll) and completion (io-uring): with readiness the task wakes when the syscall is ready to be performed without blocking, with completion the task wakes when the syscall is already complete.
When a task loses interest in an event in epoll, all that happens is it gets "spuriously awoken," so it sees there's nothing for it to do and goes back to sleep. With io-uring, the reactor needs to do more: when a task has lost interest in an incomplete event, that task needs to set the reactor into a state where instead of waking it, it will clean up the resources owned by the completion event. In the case of accept, this means closing that FD. According to your post, monoio fails to do this, and just spuriously wakes up the task, leaking the resource.
The only way this relates to Rust's async model is that all futures in Rust are cancellable, so the reactor needs to handle the possibility that interest in a syscall is cancelled, or the reactor is incorrect. But it's completely possible to implement an io-uring reactor correctly under Rust's async model; this is just a requirement for doing so.
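In sketch form (invented types, not monoio's or anyone else's actual internals), that dispatch step looks something like this: a completion either wakes the waiting task or, if interest was cancelled, releases whatever the CQE owns, which for accept is the newly accepted fd.

    use std::collections::HashMap;
    use std::os::unix::io::FromRawFd;
    use std::task::Waker;

    // Minimal stand-in for a completion queue entry.
    struct Cqe {
        user_data: u64, // identifies the submitted operation
        result: i32,    // e.g. the accepted fd, or a negative errno
    }

    enum Op {
        // A task is waiting: store its waker and, on completion, the result.
        Waiting { waker: Waker, result: Option<i32> },
        // The future was dropped before completion.
        Cancelled,
    }

    fn dispatch(ops: &mut HashMap<u64, Op>, cqe: Cqe) {
        let was_cancelled = match ops.get_mut(&cqe.user_data) {
            Some(Op::Waiting { waker, result }) => {
                // Normal path: park the result and wake the task; its next
                // poll() will find the completed state.
                *result = Some(cqe.result);
                waker.wake_by_ref();
                false
            }
            Some(Op::Cancelled) => true,
            None => false, // unknown user_data: ignore or log
        };

        if was_cancelled {
            // Nobody is waiting anymore, but the CQE may own a resource.
            // For a successful accept, that resource is the new socket fd.
            if cqe.result >= 0 {
                // Reuse std's close-on-drop to close the raw fd.
                drop(unsafe { std::fs::File::from_raw_fd(cqe.result) });
            }
            ops.remove(&cqe.user_data);
        }
    }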
> The title of this blog might sound a bit dramatic, but everyone has different definitions and understandings of "safety."
Still, in the Rust community "safety" is used in a very specific sense, so I don't think it is correct to use whatever definition you like when speaking about Rust. Or at least, the article should start with your specific definition of safety/unsafety.
I don't want to reject the premise of the article, that this kind of safety is very important, but for Rust, unsafety without using "unsafe" is much more important than an OS dying from leaked connections. I read through the article looking for Rust's kind of unsafety and found that I was tricked. It is very frustrating; it looks to me like a lie with some lame excuses afterwards.
It's definitely clickbaity. The author knew exactly what they were doing. Time to flag, hide and move on.
I apologize for not using a good title, but I think the issues I listed (especially my argument that we should break the issues down into runtime bugs and the limitations of Rust's abstractions) are worth discussing, even if many people argue otherwise.
Agreed, though a better title would probably not use the term "safe" unqualified, e.g. "Async Rust with io_uring leaks resources (and risks corrupting program state)".
So in this case it is still a form of safety that’s well-defined in rust: cancel safety. The io-uring library doesn’t have the same cancel safety guarantees that everyone is used to in epoll libraries. In Tokio, the cancel safety of `accept` is well documented even though it works the way you’d expect, but in monoio, it’s literally just documented as `Accept` with no mention of the cancel safety issues when using that function.
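For reference, this is the kind of loop being discussed, sketched here with tokio, whose `TcpListener::accept` documents its cancel safety; the one-second timeout and the connection handling are placeholders:

```rust
use tokio::net::TcpListener;
use tokio::time::{sleep, Duration};

async fn accept_loop(listener: TcpListener) -> std::io::Result<()> {
    loop {
        tokio::select! {
            res = listener.accept() => {
                let (stream, addr) = res?;
                // Handle the connection; this sketch just drops it.
                println!("accepted {addr}");
                drop(stream);
            }
            _ = sleep(Duration::from_secs(1)) => {
                // Periodic work. Because accept() is cancel-safe in tokio,
                // the accept future dropped by this branch has not consumed
                // (and therefore cannot leak) a connection.
            }
        }
    }
}
```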
Not leaking resources is a part of Rust's safety model, isn't it?
No, leaking is explicitly safe in rust: https://doc.rust-lang.org/nomicon/leaking.html
Interesting. I would have thought that being leak-free was part of the premise, since you can very well write C or C++ with a guarantee of no use-after-free at least, assuming you don't care about memory leaks.
Notably, io_uring syscall has been a significant source of vulnerabilities. Last year, Google security team decided to disable it in their products (ChromeOS, Android, GKE) and production servers [1].
Containerd maintainers soon followed Google recommendations and updated seccomp profile to disallow io_uring calls [2].
io_uring was called out specifically for exposing increased attack surface by kernel security team as well long before G report was released [3].
Seems like less of a rust issue and more of a bug(s) in io_uring? I suppose user space apps can provide bandaid fix but ultimately needs to be handled at kernel.
[1] https://security.googleblog.com/2023/06/learnings-from-kctf-...
[2] https://github.com/containerd/containerd/pull/9320
[3] https://lwn.net/Articles/902466/
> Seems like less of a rust issue and more of a bug(s) in io_uring?
I'm working with io_uring currently and have to disagree hard on that one; io_uring definitely has issues, but the one here is that it's being used incorrectly, not something wrong with io_uring itself.
The io_uring issues overall fall into roughly two categories:
- lack of visibility into io_uring operations since they are no longer syscalls. This is an issue of adding e.g. seccomp and ptrace equivalents into io_uring. It's not something I'd even call a vulnerability, more of a missing feature.
- implementation correctness and concurrency issues due to its asynchronicity. It's just hard to do this correctly and bugs are being found and fixed. Some are security vulnerabilities. I'd call this a question of time for it to get stable and ready but I have no reason to believe this won't happen.
> but the one here is that it's being used incorrectly
Being ALLOWED to be used badly is the major cause of unsafety.
And consider that all the reports you reply to are by serious teams. NOT EVEN THEY succeeded.
That is the #1 definition of
> something wrong with io_uring itself
Strongly disagree. At the level of io_uring (syscalls/syscall orchestration), it is expected that available tools are prone to mis-use, and that libraries/higher layers will provide abstractions around them to mitigate that risk.
This isn't like the Rust-vs-C argument, where the claim is that, given two equivalently-capable solutions, you should prefer the one that doesn't allow mis-use.
This is more like assembly language, or the fact that memory inside kernel rings is flat and vulnerable: those are low-level tools to facilitate low-level goals with a high risk of mis-use, and the appropriate mitigation for that risk is to build higher-level tools that intercept/prevent that mis-use.
Those are separate issues. The blog post is about using it properly in userspace, google's concerns are about kernel security bugs.
That is all well and true, and the vulnerabilities are getting fixed, but that is off-topic to the posted article.
The article is more about the Rust io_uring async implementation breaking an assumption that Rust's async model makes: that a Future can only be modified when it's poll()-ed.
I'm guessing that assumption came from an expectation that all async runtimes live in userland, and this newfangled kernel-backed runtime does things on its own inside the kernel, thus breaking the original assumption.
The problem has been known from the beginning, because async I/O on Windows has the same issues as io_uring.
Rust went with poll-based API and synchronous cancellation design anyway, because that fits ownership and borrowing.
Making async cancellation work safely even in presence of memory leaks (destructors don't always run) and panics remains an unsolved design problem.
I mean, it’s only a problem if your design is based on the Future having exclusive ownership of its read buffer, but io_uring assumes a kind of shared ownership. The “obvious” solution is to encode that ownership model in the design, which implies some kind of cancellation mechanism. C and C++ programs have to do that too.
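A minimal sketch of what "encoding that ownership model" can look like; `RingFile` and its `read` signature are hypothetical stand-ins, not a real library API. The buffer is moved into the operation and handed back with the result, so no borrow can be alive while the kernel may still write into it:

```rust
use std::io;

/// Hypothetical handle to a file registered with an io_uring-style ring.
struct RingFile { /* ring handle, fd, ... */ }

impl RingFile {
    /// The buffer is owned by the operation while the kernel may touch it,
    /// and is returned to the caller on completion, success or failure.
    async fn read(&self, buf: Vec<u8>) -> (io::Result<usize>, Vec<u8>) {
        // Submission and completion plumbing elided in this sketch.
        (Ok(0), buf)
    }
}
```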
This reference at the bottom of article was very interesting to me: https://without.boats/blog/io-uring/
"So I think this is the solution we should all adopt and move forward with: io-uring controls the buffers, the fastest interfaces on io-uring are the buffered interfaces, the unbuffered interfaces make an extra copy. We can stop being mired in trying to force the language to do something impossible. But there are still many many interesting questions ahead."
It's not about memory safety, as you might assume from the title. There's no soundness bug involved.
Rust not only provides memory safety, it also promises at least I/O safety in the future: https://rust-lang.github.io/rfcs/3128-io-safety.html
Resource leaks have nothing to do with safety. That's true both for memory safety and i/o safety. See for yourself with `mem::forget(File::open(...)?)`
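Expanding that one-liner into a complete (if pointless) program, assuming any readable path; it compiles and runs in entirely safe Rust, and the descriptor is simply never closed:

```rust
use std::fs::File;
use std::mem;

fn main() -> std::io::Result<()> {
    let f = File::open("/etc/hostname")?; // any readable file will do
    // Safe Rust: the File's destructor never runs, so the underlying
    // file descriptor is leaked until the process exits. No `unsafe` needed.
    mem::forget(f);
    Ok(())
}
```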
Yeah the article is giving what looks like a bad definition:
According to: https://rust-lang.github.io/rfcs/3128-io-safety.html
> Rust’s standard library almost provides I/O safety, a guarantee that if one part of a program holds a raw handle privately, other parts cannot access it.
According to the article:
> I/O Safety: Ensuring that accepted TCP streams are properly closed without leaking connections.
These are not the same definition.
As I've mentioned several times [1], in Rust, the word "safety" is being abused to the point of causing confusion, and it looks like this is another instance of that.
[1] https://news.ycombinator.com/item?id=31726723
Thank you! I will update the post and fix it.
Yeah, "safe Rust" is officially allowed to leak memory and other resources.
- The easiest way to do this is mem::forget, a safe function which exists to leak memory deliberately.
- The most common real way to leak memory is to create a loop using the Rc<T> or Arc<T> reference count types (sketched just below). I've never seen this in any of my company's Rust code, but we don't write our own cyclic data structures, either. This is either a big deal to you, or a complete non-issue, depending on your program architecture.
Basically, "safe Rust" aims to protect you from memory corruption, undefined behavior, and data races between threads. It does not protect you from leaks, other kinds of race conditions, or (obviously) logic bugs.
Leaking file descriptors is safe, at least as far as the IO-safety RFC defines it. It's just a regular bug.
Where do you conclude "Rust promises I/O safety in the future"? An RFC is not a promise to do anything... it may represent a desire... a possibility, but you taking that leap and implying a promise is a flagrant misrepresentation.
Now let's take "the future" part... you seem to be impugning Async Rust for something it's not even purported to do in the present. What's the point of this?
You found a bug in monoio it seems... I don't see the argument you've presented as supporting the premise that "Async Rust is not safe".
I don't get it. What's the ideal scenario here?
That going to the sleep branch of the select should cancel the accept? Will cancelling the accept terminate any already-accepted connections? Shouldn't it be delayed instead?
Shouldn't newly accepted connections be dropped only if the listener is dropped, rather than when the listener.accept() future is dropped? If listener.accept() is dropped, the queue should be with the listener object, and thus the event should still be available in that queue on the next listener.accept().
This seems more like a bug with the runtime than anything.
The ideal scenario is something like the cancellable I/O provided by monoio; I wrote an example of this in the blog: https://github.com/ethe/io-uring-is-not-cancellation-safe/bl... . However, it has a lot of limitations, and there is no perfect way to do this at the moment.
That's an implementation detail. What's the user-facing behavior? What should happen with mistakenly-accepted connections?
Even the blog admits that cancellation of any kind is racing with the kernel, which might complete the accept request anyway. Even if you call `.cancel()`, the queue might have an accepted connection FD in it. Even if it doesn't, it might do by the time the kernel acknowledges the cancellation.
So you now have a mistakenly-accepted connection. What do you do with it? Drop it? That seems like the wrong answer, whoever writes a loop like the one in the blog will definitely not expect some of the connections mysteriously being dropped.
Okay, looks like withoutboats gave the answer to this in another thread [1], and that seems like the right answer. The accept() future being dropped must not result in any cancellation of any kind, unless the listener itself is also dropped.
This is an implementation issue with monoio that just needs more polishing. And given how hard io_uring is to get right, monoio should be given that time before being chosen to be used in production for anything.
[1] https://news.ycombinator.com/item?id=41994308
I don't think the operation that completes after the cancellation failed is "mistakenly accepted"; it should be handled in the normal way. But I admit that there are lots of people who don't agree with that.
This aspect of io_uring does affect a lot of surface APIs, as I have experienced at work. At least for me I didn't have to worry much about borrowing though.
Hmm. Does it? Python's futures have an explicit .cancel() operation. And the C io_uring usage I'm looking at knows to cancel events too…
It's really that Rust might've made a poor choice here, as the article points out:
> Async Rust makes a few core assumptions about futures:
> 1. The state of futures only change when they are polled.
> 2. Futures are implicitly cancellable by simply never polling them again.
But at the same time, maybe this is just that Rust's Futures need to be used differently here, in conjunction with a separate mechanism to manage the I/O operation that knows things need to be cancelled?
That part of the article is kinda irrelevant in my opinion. Futures do require polling to move forward, but polling can be forced by an external signal (otherwise the whole future model wouldn't work!). Therefore io_uring can be safely implemented by having central worker threads which then signal outstanding futures; that was how I ended up doing it at work as well. So the article actually seems to ask whether such an out-of-band mechanism can be entirely prevented or not.
The sibling comment to yours points out that cancelling Futures is dropping Futures. What's your experience? Do you think that would work to prevent needing the out-of-band mechanism?
That's backwards... Rust's way to do cancellation is simply to drop the future (i.e. let it be deallocated). There is one big caveat here though, namely the lack of async drop, as others pointed out.
In current Rust, io_uring-like designs can be safely implemented with an additional layer of abstraction. Some io_uring operations can still be ongoing when it looks fine to borrow the buffer, sure. Your API just has to ensure that it is not possible to borrow until all operations are finished! Maybe it can return an error, or you can require something like `buf.borrow().await`. This explicit borrowing is not an alien concept in Rust (cf. RefCell etc.) and is probably the best design at the moment, but it does need dynamic bookkeeping, which some may want to eliminate.
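A signature-level sketch of that idea (names like `RingBuffer` and `borrow_mut` are made up, and the bookkeeping body is elided): borrowing the bytes is itself an async operation that only resolves once no submitted operation still references the buffer.

```rust
/// Hypothetical buffer type whose bytes are shared with the kernel ring.
struct RingBuffer {
    bytes: Vec<u8>,
    // plus bookkeeping of how many submitted operations still reference it
}

impl RingBuffer {
    /// Resolves only once every in-flight operation on this buffer has
    /// completed; until then the caller simply cannot touch the bytes.
    async fn borrow_mut(&mut self) -> &mut [u8] {
        // Wait for the in-flight count to reach zero (elided), then:
        self.bytes.as_mut_slice()
    }
}
```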
> First of all, we are fortunate that the I/O safety problem can be addressed now, which safe Rust aims to ensure this in the future. Rust provides the Drop trait to define custom behavior when a value is cleaned up. Thus, we can do something like this...
> We just need to encourage async runtimes to implement this fix.
This likely needs async drop if you need to perform a follow-up call to cancel the outstanding tasks or close the open sockets. Async Drop is currently experimental:
https://github.com/rust-lang/compiler-team/issues/727
The assumption made by rust is that a future is cancelled when it is dropped.
Ah, Thanks, that makes sense, but then I don't understand how this isn't just a bug in these Rust runtimes. As in: the drop codepath on the future needs to not only submit the cancellation SQE into io_uring, it also needs to still process CQEs from the original request that pop up before the CQE for the cancellation…
NB: I have only done a little bit of Rust, but am hoping to move there in the future — but I am working on C code interfacing io_uring… I will say doing this correctly does in fact require a bit of brainpower when writing that code.
I am not well versed in the async things as of late, but one complication is that the drop implementation is a blocking one. This could easily lead to deadlocks. Or the drop implementation could spawn an async task to clean up after itself later.
I have tried to learn Rust; the borrow checker is no problem, but I can't get my head around lifetimes, and then Rc, Box, Arc, and pinning, along with async Rust, are a whole other story.
Having programmed in raw C, I'd say Rust is like TypeScript: once you try it after writing JavaScript, you can't go back to plain JavaScript for anything serious. You want some guard rails rather than none.
Try embedded Rust: get an RP2040 board and fool around with that. It'll make a lot more sense to you if the parts you don't understand are encapsulation types like Rc, Box, and Arc, because those aren't really used in embedded Rust!
Lifetimes are still used but not nearly as much.
Since io_uring has similar semantics to just about every hardware device ever (e.g. NVMe submission and completion queues), are there any implications of this for Rust in the kernel? Or in SPDK and other user-level I/O frameworks?
Note that I don't know a lot about Rust, and I'm not familiar with the rules for Rust in the kernel, so it's possible that it's either not a problem or the problematic usages violate the kernel coding rules. (although in the latter case it doesn't help with non-kernel frameworks like SPDK)
I think async Rust is far from entering the kernel.
Edit: I realize my comment might come off as a bit snarky or uninformative to someone who isn't familiar with Rust. That was not the intention. "Async Rust" is a particular framework for abstracting over various non-blocking IO operations (and more). It allows terse code to be written using a few convenient keywords, that causes a certain state machine (consisting of ordinary Rust code adhering to certain rules) to be generated, which in turn can be coupled with an "async runtime" (of which there are many) to perform the IO actions described by the code. The rules that govern the code generated by these convenient keywords, i.e. the code that the async runtimes execute, are apparently not a great fit for io_uring and the like.
However, I don't think anyone is proposing writing such code inside the kernel, nor that any of the async runtimes actually make sense in a kernel setting. The issues in this article don't exist when there is no async Rust code. Asynchronous operations can, of course, still be performed, but one has to manage that without the convenient scaffolding afforded by async Rust and the runtimes.
While it's true that the "state" of a future is only mutated in the poll() implementation, it's up to the author of the future implementation to clone/send/call the Waker provided in the context argument to signal to the executor that poll() should be called again, which I believe is how one should handle this case.
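A generic sketch of that contract, not tied to any particular runtime (`Shared`, `OpFuture`, and `complete` are illustrative names): the future parks its Waker in shared state, and whoever observes the completion (a reactor thread, a CQE dispatcher, ...) sets the result and wakes the task so the executor polls again.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};

// State shared between the future and whoever completes the operation.
struct Shared {
    result: Option<i32>,
    waker: Option<Waker>,
}

struct OpFuture {
    shared: Arc<Mutex<Shared>>,
}

impl Future for OpFuture {
    type Output = i32;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<i32> {
        let mut s = self.shared.lock().unwrap();
        if let Some(v) = s.result.take() {
            Poll::Ready(v)
        } else {
            // Store (or refresh) the waker so the completer can wake us.
            s.waker = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}

// Called by the reactor when the completion arrives: record the result,
// then wake the task so the executor polls the future again.
fn complete(shared: &Arc<Mutex<Shared>>, value: i32) {
    let waker = {
        let mut s = shared.lock().unwrap();
        s.result = Some(value);
        s.waker.take()
    };
    if let Some(w) = waker {
        w.wake();
    }
}
```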
We really need a way to express cancellation safety other than documentation. This is not just an io_uring problem; there are a lot of futures in tokio that are not cancel safe. Is there an RFC on the subject?
> // we loose the chance to handle the previous one.
lose?
Thanks, I will fix it.
There are async libraries like glommio, which I’m using for a new project, that avoid this I think, but they require you to factor things a little differently from tokio.
Maybe cancellation itself is problematic. There’s a reason it was dropped from threading APIs and AFAIK there is no way to externally cancel a goroutine. Goroutines are like async tasks with all the details hidden from you as it’s a higher level language.
I don't think that cancellation is inherently problematic, but it needs to be cooperative. One-sided cancellation of threads (and probably goroutines) can never work.
Cooperative cancellation can be implemented in languages that mark their suspension points explicitly in their coroutines, like Rust, Python and C++.
I think Python's asyncio models cancellation fairly well with asyncio.CancelledError being raised from the suspension points, although you need to have some discipline to use async context managers or try/finally, and to wait for cancelled tasks at appropriate places. But you can write your coroutines with the expectation that they eventually make forward progress or exit normally (via return or exception).
It looks like Rust's cancellation model is far more blunt, if you are just allowed to drop the coroutine.
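To make "cooperative" concrete in Rust terms, here is a sketch assuming tokio and tokio_util's CancellationToken (`do_one_unit_of_work` is a placeholder): the task only observes cancellation at its own suspension points, much like the asyncio model described above.

```rust
use tokio_util::sync::CancellationToken;

async fn worker(token: CancellationToken) {
    loop {
        tokio::select! {
            _ = token.cancelled() => {
                // Run any (async) cleanup here before exiting; the task only
                // stops at this explicit suspension point, not arbitrarily.
                break;
            }
            _ = do_one_unit_of_work() => {}
        }
    }
}

async fn do_one_unit_of_work() { /* placeholder */ }
```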
> It looks like Rust's cancellation model is far more blunt, if you are just allowed to drop the coroutine.
You can only drop it if you own it (and nobody has borrowed it), which means you can only drop it at an `await` point.
This effectively means you need to use RAII guard objects like in C++ in async code if you want to guarantee cleanup of external resources. But it's otherwise completely well behaved with epoll-based systems.
I find that a bigger issue in my async Rust code is using Tokio-style async "streams", where a cancelled sender looks exactly like a clean "end of stream". In this case, I use something like:
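```rust
// Sketch reconstructed from the description below: an explicit End marker so
// the receiver can distinguish a clean shutdown from a dropped/cancelled
// sender. (The Item variant is assumed; only StreamValue::End is named.)
enum StreamValue<T> {
    Item(T),
    End,
}
```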
If I don't see StreamValue::End before the stream closes, then I assume the sender failed somehow and treat it as a broken stream (sort of like a Unix EPIPE error). This can obviously be wrapped, but any wrapper still requires the sender to explicitly close the stream when done, and not via an implicit Drop.
> This effectively means you need to use RAII guard objects like in C++ in async code if you want to guarantee cleanup of external resources. But it's otherwise completely well behaved with epoll-based systems.
Which limits cleanup after cancellation to be synchronous, doesn't it? I often use asynchronous cleanup logic in Python (which is the whole premise of `async with`).
Correct. Well, you can dump it into a fast sync buffer and let a background cleanup process do any async cleanup.
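A sketch of that hand-off, assuming a tokio-style runtime and made-up names (`PendingOp`, `cleanup_tx`, `cleanup_worker`): Drop stays synchronous and only pushes the work onto a channel that a background task drains asynchronously.

```rust
use tokio::sync::mpsc;

/// Something that needs asynchronous cleanup (cancel an SQE, drain a CQE,
/// close a connection gracefully, ...) but may be dropped synchronously.
struct PendingOp {
    id: u64,
    cleanup_tx: mpsc::UnboundedSender<u64>,
}

impl Drop for PendingOp {
    fn drop(&mut self) {
        // Drop must not block or await: just queue the id for later cleanup.
        let _ = self.cleanup_tx.send(self.id);
    }
}

/// Background task that performs the actual asynchronous cleanup.
async fn cleanup_worker(mut rx: mpsc::UnboundedReceiver<u64>) {
    while let Some(id) = rx.recv().await {
        // e.g. submit a cancellation for `id` and await its completion
        let _ = id;
    }
}
```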
Sync Rust is lovely, especially with a bit of practice, and doubly so if you already care about how things are stored in memory. (And caring how things are stored is how you get speed.)
Async Rust is manageable. There's more learning curve, and you're more likely to hit an odd corner case where you need to pair for 30 minutes with the team's Rust expert.
The majority of recent Rust networking libraries are async, which is usually OK. Especially if you tend to keep your code simple anyway. But there are edge cases where it really helps to have access to Rust experience—we hit one yesterday working on some HTTP retry code, where we needed to be careful how we passed values into an async retriable block.
I don't think it's possible to get away with fundamentally no cancellation support, there are enough edge cases that need it even if most applications don't have such edge cases.
FWIW, this was also painful to do across threads in our C event loop, but there was no way around the fact that we needed it (cf. https://github.com/FRRouting/frr/blob/56d994aecab08b9462f2c8... )
Who is Barbara?
https://rust-lang.github.io/wg-async/vision/characters.html#...
User B
There are certain counterintuitive things that you have to learn if you want to be a "systems engineer", in a general sense, and this whole async thing has been one of the clearest lessons to me over the years of how seemingly identical things sometimes can not be abstracted over.
Here by "async" I don't so much mean async/await versus threads, but these kernel-level event interfaces regardless of which abstraction a programming language lays on top of them.
At the 30,000 foot view, all the async abstractions are basically the same, right? You just tell the kernel "I want to know about these things, wake me up when they happen." Surely the exact way in which they happen is not something so fundamental that you couldn't wrap an abstraction around all of them, right?
And to some extent you can, but the result is generally so lowest-common-denominator as to appeal to nobody.
Instead, every major change in how we handle async has essentially obsoleted the entire programming stack based on the previous ones. Changing from select to epoll was not just a matter of switching out the fundamental primitive, it tended to cascade up almost the entire stack. Huge swathes of code had to be rewritten to accommodate it, not just the core where you could do a bit of work and "just" swap out epoll for select.
Now we're doing it again with io_uring. You can't "just" swap out your epoll for io_uring and go zoomier. It cascades quite a ways up the stack. It turns out the guarantees that these async handlers provide are very different and very difficult to abstract. I've seen people discuss how to bring io_uring to Go and the answer seems to basically be "it breaks so much that it is questionable if it is practically possible". An ongoing discussion on an Erlang forum seems to imply it's not easy there (https://erlangforums.com/t/erlang-io-uring-support/765); I'd bet it reaches up "less far" into the stack but it's still a huge change to BEAM, not "just" swapping out the way async events come in. I'm sure many other similar discussions are happening everywhere with regards to how to bring io_uring into existing code, both runtimes and user-level code.
This does not mean the problem is unsolvable by any means. This is not a complaint, or a pronunciation of doom, or an exhortation to panic, or anything like that. We did indeed collectively switch from select to epoll. We will collectively switch to io_uring eventually. Rust will certainly be made to work with it. I am less certain about the ability of shared libraries to be efficiently and easily written that work in both environments, though; if you lowest-common-denominator enough to work in both you're probably taking on the very disadvantages of epoll in the first place. But programmers are clever and have a lot of motivation here. I'm sure interesting solutions will emerge.
I'm just highlighting that as you grow in your programming skill and your software architecture abilities and general system engineering, this provides a very interesting window into how abstractions can not just leak a little, but leak a lot, a long ways up the stack, much farther than your intuition may suggest. Even as I am typing this, my own intuition is still telling me "Oh, how hard can this really be?" And the answer my eyes and my experience give my intuition is, "Very! Even if I can't tell you every last reason why in exhaustive detail, the evidence is clear!" If it were "just" a matter of switching, as easy as it feels like it ought to be, we'd all already be switched. But we're not, because it isn't.
I appreciate the insight in this comment! I see your problem, and I offer an answer (I daren't call it a solution): there is no surefire way to make an interface/abstraction withstand the test of time. It just doesn't happen, even across just a few paradigm shifts, at least not without arbitrary costs to performance, observability/debuggability, ease of use, and so on. The microkernel (in the spirit of Liedtke)/exokernel philosophy tells us to focus on providing minimal, orthogonal mechanisms that just barely allow implementing the "other stuff". But unless a monolithic system is being built for one purpose, "the other stuff" isn't meaningfully different from a microkernel; it has different "hardware" but must itself impose minimally on what is to be built above it. We build layers of components with rich interactions of meaningful abstractions, building a web of dependencies and capabilities. There is no accidental complexity here in this ideal model; to switch paradigms, one must discard exactly the set of components and layers that are incompatible with the new paradigm.
Consider Linux async mechanisms. They are provided by a monolithic kernel that dictates massive swathes of what worldview a program is developed in. When select was found lacking, it took time for epoll to arrive. Then io_uring took its sweet time. When the kernel is lacking, the kernel must change, and that is painful. Now consider a hypothetical microkernel/exokernel where a program just gets bare asynchronous notifications about hardware and from other programs. Async abstractions must be built on top, in services and libraries, to make programming feasible. Say the analogous epoll library is found lacking. Someone must uproot it and perhaps lower layers and build an io_uring library instead. I will not say this is always less pain that before, although it is decidedly not the same as changing a kernel. But perhaps it is less painful in most cases. I do not think it is ever more painful. This is the essential pain brought about by stability and change.
My hot take is that the root of this issue is that the destructor side of RAII in general is a bad idea. That is, registering custom code in destructors and running them invisibly, implicitly, maybe sometimes but only if you're polite, is not and never was a good pattern.
This pattern causes issues all over the place: in C++ with headaches around destruction failure and exceptions; in C++ with confusing semantics re: destruction of incompletely-initialized things; in Rust with "async drop"; in Rust (and all equivalent APIs) in situations like the one in this article, wherein failure to remember to clean up resources on IO multiplexer cancellation causes trouble; in Java and other GC-ful languages where custom destructors create confusion and bugs around when (if ever) and in the presence of what future program state destruction code actually runs.
Ironically, two of my least favorite programming languages are examples of ways to mitigate this issue: Golang and JavaScript runtimes:
Golang provides "defer", which, when promoted widely enough as an idiom, makes destructor semantics explicit and provides simple and consistent error semantics. "defer" doesn't actually solve the problem of leaks/partial state being left around, but gives people an obvious option to solve it themselves by hand.
JavaScript runtimes go to a similar extreme: no custom destructors, and a stdlib/runtime so restrictive and thick (vis-a-vis IO primitives like sockets and weird in-memory states) that it's hard for users to even get into sticky situations related to auto-destruction.
Zig also does a decent job here, but only with memory allocations/allocators (which are ironically one of the few resource types that can be handled automatically in most cases).
I feel like Rust could have been the definitive solution to RAII-destruction-related issues, but chose instead to double down on the C++ approach to everyone's detriment. Specifically, because Rust has so much compile-time metadata attached to values in the program (mutability-or-not, unsafety-or-not, movability/copyability/etc.), I often imagine a path-not-taken in which automatic destruction (and custom automatic destructor code) was only allowed for types and destructors that provably interacted only with in-user-memory state. Things referencing other state could be detected at compile time and required to deal with that state in explicit, non-automatic destructor code (think Python context-managers or drop handles requiring an explicit ".execute()" call).
I don't think that world would honestly be too different from the one we live in. The rust runtime wouldn't have to get much thicker--we'd have to tag data returned from syscalls that don't imply the existence of cleanup-required state (e.g. select(2), and allocator calls--since we could still automatically run destructors that only interact with cleanup-safe user-memory-only values), and untagged data (whether from e.g. fopen(2) or an unsafe/opaque FFI call or asm! block) would require explicit manual destruction.
This wouldn't solve all problems. Memory leaks would still be possible. Automatic memory-only destructors would still risk lockups due to e.g. pagefaults/CoW dirtying or infinite loops, and could still crash. But it would "head off at the pass" tons of issues--not just the one in the article:
Side-effectful functions would become much more explicit (and not as easily concealable with if-error-panic-internally); library authors would be encouraged to separate out external-state-containing structs from user-memory-state-containing ones; destructor errors would become synonymous with specific programmer errors related to in-memory twiddling (e.g. out of bounds accesses) rather than failures to account for every possible state of an external resource, and as a result automatic destructor errors unconditionally aborting the program would become less contentious; the surface area for challenges like "async drop" would be massively reduced or sidestepped entirely by removing the need for asynchronous destructors; destructor-related crash information would be easier to obtain even in non-unwinding environments...
Maybe I'm wrong and this would require way too much manual work on the part of users coding to APIs requiring explicit destructor calls.
But heck, I can dream, can't I?
I think Austral and Vale's linear typing is a good start, although it would probably have to be opt-in in practice. This goes along with explicit, manual destructors and alleviates issues like async drop. Even with automatic destructors, they can have more visibility and customizability. Exceptions are a can of worms and need to be redesigned (but not removed). I think automatic destruction doesn't have to mean oh-wait-what-do-you-mean-it-unwound-and-ran-a-destructor-uh-oh-double-exception-abort and similar very weird cases. The concept should have its own scope and purpose, same with exceptions.
How common of a pattern is it to accept in a loop but also on a timeout so that you can pre-empt and go do some other work?
The timeout is only a stand-in for the generic need to be able to cancel an acceptance loop. You could just as well want to cancel accept() when SIGINT/SIGTERM/etc is received, or when recreating the server socket, e.g. in response to a configuration change. Most server processes have a need like this.
<3
Yet another example of async Rust being a source of unexpected ways to shoot yourself in the foot... Async advocates can argue as long as they want about "you're holding it wrong", but to me it sounds like people arguing that you can safely use C/C++ just by being "careful".
Async has its uses, but there should also be a way to ensure that a Rust stack does not use async at all, like there is for unsafe. Most codebases could do without the added complexity. There will be better ways to do concurrency in the future (hehe)
Agree. If people want to delude themselves that async is useful, that's fine. But don't inflict it on the rest of us by viral propagation throughout the dependency ecosystem.
I claim that async/await is far more basic (/fundamental/simple, not necessarily easy) than most acknowledge, but it should indeed be more composable with sync code. It is a means of interacting with asynchronous phenomena, which underlie the OS-hardware connection. The composability is necessary because no one is going to write a massive state machine for their entire program.
dude, machine code generated with gcc/clang is not safe in the first place. This is only the tip of the iceberg.