bacon is a superset of this, with multiple types of custom jobs and job supervision supported. Although it's set up for Rust out of the box, it's easy to adapt it for anything else and to change the key mappings.
https://github.com/Canop/bacon
I'd been using different file-watching methods for personal projects over the years, and they all fell short in some way. So I built a really simplistic one for my use cases.
https://git.sr.ht/~mh/remake
Nice. I built something similar, in the spirit of the Air tool, but instead of being only for Golang it covers all the artifacts in a project. Very unopinionated, fast, and lightweight:
https://github.com/panyam/devloop
I have nothing useful to say except I dig the name choice. Very fun.
If only life was so simple.
So file A gets saved... a rebuild starts... and now file B gets saved a few seconds later.
What do you do? Do you kill the build and start a new one? Do you wait for it to finish?
What about race conditions: what if half the build process sees the old contents of some files and the other half sees the new contents? Do you contaminate the output/cache? Do you even detect it to tell the user?
Debounce delay, restart, and serialization. All watcher-runner programs of any worth do these.
All of those issues can be solved by doing an import of the changed file into the build system's content addressed store, and creating a new version of the entire input tree. You also don't need to choose between cancelling, waiting, or dropping. You can do 2 builds simultaneously, and anything consuming results can show the user the first one until a more recent one is available. If the builds are at all similar, then the similar components can be deduplicated at runtime.
These techniques are used in a build system that I work on[0]. Although it does not do automatic rebuilds like Poltergeist.
[0] https://github.com/wantbuild/want
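The core idea can be sketched in a few lines. This is a hedged illustration, not Want's actual API: files are keyed by the hash of their contents, and an input-tree "version" is just an immutable map from paths to hashes. Saving a file mid-build produces a new tree version, while the in-flight build keeps reading the old version's blobs, so it never observes a half-updated tree, and unchanged files deduplicate across versions.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

type Hash [32]byte

// Store is a toy content-addressed store: identical content maps to the
// same hash and is stored once.
type Store struct {
	blobs map[Hash][]byte
}

func NewStore() *Store { return &Store{blobs: map[Hash][]byte{}} }

// Put imports content and returns its hash, deduplicating on the way in.
func (s *Store) Put(data []byte) Hash {
	h := sha256.Sum256(data)
	if _, ok := s.blobs[h]; !ok {
		s.blobs[h] = append([]byte(nil), data...)
	}
	return h
}

// Tree is an immutable snapshot of the input tree: path -> content hash.
type Tree map[string]Hash

// WithFile returns a new tree version with one entry replaced; the old
// version is untouched, so a build already using it is unaffected.
func (t Tree) WithFile(path string, h Hash) Tree {
	nt := make(Tree, len(t)+1)
	for p, hh := range t {
		nt[p] = hh
	}
	nt[path] = h
	return nt
}

func main() {
	s := NewStore()
	v1 := Tree{}.
		WithFile("a.c", s.Put([]byte("int a;"))).
		WithFile("b.c", s.Put([]byte("int b;")))

	// b.c is saved while a build of v1 is running: snapshot a new version.
	v2 := v1.WithFile("b.c", s.Put([]byte("int b, c;")))

	fmt.Println("v1 and v2 share a.c blob:", v1["a.c"] == v2["a.c"])
	fmt.Println("store holds", len(s.blobs), "unique blobs")
}
```

Two concurrent builds of v1 and v2 then share every blob that didn't change, which is where the runtime deduplication comes from.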
> All of those issues can be solved
I've yet to see any build system solve these.
> by doing an import of the changed file into the build system's content addressed store, and creating a new version of the entire input tree.
That's going to be unusably slow and heavyweight for automatic rebuilds on a large repo. Maybe if you optimize it for a specific COW filesystem implementation that overlays things cleverly, it'd be able to scale. Or if your build tree avoids large directories and all your build tools handle symlinks fine, then you could symlink most things that don't change quickly. But I absolutely do not see this working on a large repo with the everyday filesystems people use. Not for a generic build system that allows arbitrary commands, anyway.
> You also don't need to choose between cancelling, waiting, or dropping. You can do 2 builds simultaneously
Do you have infinite CPU and RAM and money and time or something? Or are you just compiling Hello World programs? In my universe with limited resources this would not work... at all.
> These techniques are used in a build system that I work on[0].
And how exactly do you scale it the way you're describing with automatic rebuilds?
> Although it does not do automatic rebuilds like Poltergeist.
...ah.