Reminds me of FoundationDB and Antithesis[0].
[0]: https://www.warpstream.com/blog/deterministic-simulation-tes...
The disadvantages don't list the performance hit of proxying every operation through another indirection layer. Can this sort of interface be implemented with zero overhead in Rust?
Most approaches, I assume, leverage conditional compilation: when running (deterministic simulation) tests, use the deterministic async runtime; otherwise, use the default runtime. That means there's no runtime overhead, at the cost of increased complexity.
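A minimal sketch of that swap, assuming a hypothetical `sim-runtime` crate and a `dst` feature flag (neither is from the article); the rest of the code base imports only the re-exported names:

```rust
// Hypothetical shim module the rest of the code imports from.
// `sim_runtime` and the `dst` feature are assumed names, not a real API.
pub mod rt {
    #[cfg(feature = "dst")]
    pub use sim_runtime::{sleep, spawn, Instant};

    // Production build: plain re-exports of the real tokio items.
    #[cfg(not(feature = "dst"))]
    pub use tokio::{task::spawn, time::{sleep, Instant}};
}
```

Since these are plain re-exports, the production build resolves calls straight into tokio; the indirection exists only in the source, not at runtime.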
I'm using DST in a personal project. My biggest issue is that significant parts of the ecosystem either require or prefer your runtime to be tokio. To deal with that, I re-implemented most of tokio's API on top of my DST runtime. Running my DST tests involves patching dependencies, which can get messy.
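A sketch of what such a facade can look like, assuming a hypothetical `dst` crate that provides the simulated clock and executor; only the outer signatures mirror tokio's real API, and the swap is typically done by pointing the `tokio` dependency at the facade crate via Cargo's `[patch]` table:

```rust
// Sketch of a tokio-compatible facade (the package would be named `tokio`
// so Cargo's `[patch]` table can substitute it). Everything under `dst`
// is an assumed, hypothetical API; only the signatures mirror tokio's.
pub mod time {
    pub use std::time::Duration;

    /// Mirrors `tokio::time::sleep`, but waits on the simulated clock.
    pub async fn sleep(duration: Duration) {
        dst::clock::sleep(duration).await
    }
}

pub mod task {
    /// Mirrors `tokio::task::spawn`, but schedules onto the deterministic
    /// executor instead of a multi-threaded work-stealing one.
    pub fn spawn<F>(future: F) -> dst::JoinHandle<F::Output>
    where
        F: std::future::Future + Send + 'static,
        F::Output: Send + 'static,
    {
        dst::executor::spawn(future)
    }
}
```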
See Tokio's Loom as an example: https://github.com/tokio-rs/loom
In testing, you import Loom's mutex; in production, you import a regular mutex. This of course has zero overhead in the production build, but the simulation testing itself is usually quite slow: only one thread can execute at a time, and many iterations are required to cover the interleavings.
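Concretely, the swap is a custom cfg flag (Loom's docs suggest building tests with `RUSTFLAGS="--cfg loom"`), and the test body runs under `loom::model`, which re-runs the closure across possible interleavings. A rough sketch:

```rust
// Production code sees only the std types; the loom build swaps in
// loom's checked versions at compile time.
#[cfg(loom)]
use loom::sync::{Arc, Mutex};
#[cfg(not(loom))]
use std::sync::{Arc, Mutex};

// Only compiled for the loom run.
#[cfg(loom)]
#[test]
fn counter_is_consistent() {
    // loom::model re-runs the closure once per explored thread interleaving.
    loom::model(|| {
        let count = Arc::new(Mutex::new(0));
        let c2 = Arc::clone(&count);

        let handle = loom::thread::spawn(move || {
            *c2.lock().unwrap() += 1;
        });
        *count.lock().unwrap() += 1;
        handle.join().unwrap();

        // Holds in every interleaving loom explores.
        assert_eq!(*count.lock().unwrap(), 2);
    });
}
```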
I would expect it to be possible, depending on how you do it. I would think you'd just instantiate a different set of interfaces for DST, while production code runs against the real implementations. It's a little trickier if you also want DST coverage of the executor itself.
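For the interface approach, the usual shape is a small trait over the nondeterministic inputs (time, randomness, network), with the simulation implementation used only in tests. A minimal sketch, with all names illustrative rather than from the article:

```rust
use std::cell::Cell;
use std::time::{Duration, Instant};

// Abstract only the nondeterministic input (here: time).
trait Clock {
    fn now(&self) -> Instant;
}

// Production implementation: the real system clock.
struct SystemClock;
impl Clock for SystemClock {
    fn now(&self) -> Instant {
        Instant::now()
    }
}

// Simulation implementation: time advances only when the test says so.
struct SimClock {
    start: Instant,
    elapsed: Cell<Duration>,
}

impl SimClock {
    fn advance(&self, by: Duration) {
        self.elapsed.set(self.elapsed.get() + by);
    }
}

impl Clock for SimClock {
    fn now(&self) -> Instant {
        self.start + self.elapsed.get()
    }
}

// Production code is generic over the trait, so the release build
// monomorphizes to direct calls on SystemClock: no dynamic dispatch,
// which is one answer to the zero-overhead question above.
struct Server<C: Clock> {
    clock: C,
}
```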
With antithesis that’s all guaranteed of course since you’re running on a VM and the abstraction is a lot lower level.