While it looks like at least some of the team are ex-googlers, this isn't the srcfs we know from piper (Google internal tools).
Looks like it's similar in some ways. But they also don't tell too much, and even the self-hosted variant is "Talk to us" pricing :/
Google or Meta needs to open source their magic VFSes. Maybe Meta is closest with EdenFS.
Doesn't perforce have a VFS that works on Windows?
I think it was made by Microsoft; https://github.com/microsoft/p4vfs
I have thought about this, but also wondered if it would be as magic without the highly paid team of fantastic SRE and maintainers, and the ridiculous amount of disk and compute available to them.
I imagine it would be as magic as blaze vs bazel out in the wild. That is, you still need someone(s) to do a ton of hard work to make it work right, but when it does, you do get the magic.
You’re absolutely right - SrcFS and EdenFS were inspirations for SourceFS.
The challenge with those systems is that they’re tightly coupled with the tools, infrastructure, and even developer distros used internally at Google and Meta, which makes them hard to generalize. SourceFS aims to bring that “Piper-like” experience to teams outside Google - but in a way that works with plain Git, Repo, and standard Linux environments.
Also, if I’m not mistaken, neither SrcFS nor EdenFS directly accelerate builds - most of that speed comes from the build systems themselves (Blaze/Buck). SourceFS goes a step further by neatly and simply integrating with the build system and caching/replaying pretty much any build step.
The Android example we’ve shown is just one application - it’s a domain we know well and one where the pain is obvious - but we built SourceFS in a way that lets us easily integrate with new build systems and speed up other big codebases.
Also, you’re spot on that this problem mostly affects big organizations with complex codebases. Without the infrastructure and SRE support, the magic does not work (e.g. think the Redis CVE 10.0 of last week, or the AWS downtime of this week) - hence the “talk to us”.
We plan to gradually share more interesting details about how SourceFS works. If there’s something specific you’d like us to cover - let us know - and help us crowdsource our blogpost pipeline :-).
It's a shame that AI is ruining certain phrases; the "You’re absolutely right" was appropriate here, but I've read so many AI responses that I've been trained to roll my eyes at it.
That “something specific” would be to invent some magic so you can get the advantages of the system without having an entire team to back it up!
Thank you for the thoughtful response!
WDYM? This seems very familiar: at commit deadbeef I don't need to materialize the full tree to build some subcomponent of the monorepo. Did I miss something?
And as for pricing... are there really that many people working on O(billion) lines of code that can't afford $TalkToUs? I'd reckon that Linux is the biggest source of hobbyist commits and that checks out on my laptop OK (though I'll admit I don't really do much beyond ./configure && make there...)
Oh yea, this is "srcfs the idea" but not "srcfs the project".
I.e. this isn't something battle-tested for hundreds of thousands of developers 24/7 over the past several years, but rather a simple commercial product sold by people who liked what they used.
Well, since Android is their flagship example: anyone who wants to build custom Android releases for some reason. With the way things are, you don't need billions of lines of your own code to maybe benefit from tools that handle billions of lines of code.
The headline times are a bit ridiculous. Are they trying to turn https://github.com/facebook/sapling/blob/main/eden/fs/docs/O... or some git fuse thing into a product?
Well, they also claim to be able to cache build steps in a somehow build-system-independent way.
> As the build runs, any step that exactly matches a prior record is skipped and the results are automatically reused
> SourceFS delivers the performance gains of modern build systems like Bazel or Buck2 – while also accelerating checkouts – all without requiring any migration.
Which sounds way too good to be true.
Yeah, I agree. This part is hand-waved away without any technical description of how they manage to pull it off, since knowing what even counts as a build step, and what its dependencies and outputs are, is only possible at the process level (to disambiguate multi-threaded builds). And then there are build steps with side effects, which come up a lot with CMake+Ninja.
A fuse filesystem can get information about the thread performing the file access: https://man.openbsd.org/fuse_get_context.3
So they could in principle get a full list of dependencies of each build step. Though I'm not sure how they would skip those steps without having an interposer in the build system to shortcut it.
Yeah that’s what I meant. I bet you the build must be invoked through a wrapper script that interposes all executables launched within the product tree. Complicated, but I think it could work. Skipping steps correctly is the hard part, but maybe you do that by somehow knowing ahead of time which files will be accessed by that process, then skipping the launch and materializing the output (they also mention they have to run it once in a sandbox to detect the dependencies). But still, side effects in build systems seem difficult to account for correctly; I bet you that’s why it’s a “contact us” kind of product - there’s work needed to make sure it actually works on your project.
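The PATH-shadowing flavor of this interposition could look something like the toy sketch below. Everything here is an assumption about how such a wrapper might work: the `interpose` helper and the log format are invented, and a real system would interpose every executable in the tree, not a single tool.

```python
import os
import shlex
import shutil
import stat
import subprocess
import tempfile

def interpose(tool_name, shim_dir, log_path):
    """Drop a shim ahead of `tool_name` on PATH: it logs each
    invocation to `log_path`, then hands off to the real tool."""
    real = shutil.which(tool_name)
    if real is None:
        raise FileNotFoundError(tool_name)
    shim_path = os.path.join(shim_dir, tool_name)
    with open(shim_path, "w") as f:
        f.write("#!/bin/sh\n")
        # Record the full command line before delegating.
        f.write(f'echo "$@" >> {shlex.quote(log_path)}\n')
        f.write(f'exec {shlex.quote(real)} "$@"\n')
    os.chmod(shim_path, os.stat(shim_path).st_mode | stat.S_IXUSR)

# Mock "build": invoke the interposed tool through PATH lookup.
shim_dir = tempfile.mkdtemp()
log = os.path.join(shim_dir, "invocations.log")
interpose("echo", shim_dir, log)
env = dict(os.environ, PATH=shim_dir + os.pathsep + os.environ.get("PATH", ""))
result = subprocess.run(["echo", "hello", "world"], env=env,
                        capture_output=True, text=True)
```

After the run, `invocations.log` holds the command line the "build" executed, which is the raw material a caching layer would key on.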
Didn't tup do something like that? https://gittup.org/tup/index.html Haven't looked at it in a while, no idea if it got adoption.
But initially the article sounded like it was describing a mix of tup and Microsoft's git vfs (https://github.com/microsoft/VFSForGit) mushed together. But doing that by itself is probably a pile of work already.
Yes, you are correct - SourceFS also caches and replays build steps in a generic way. It works surprisingly well, to the point where it’s hard to believe until you actually see it in action (here is a short demo video, but it probably isn't the best way to showcase it: https://youtu.be/NwBGY9ZhuWc?t=76 ).
We intentionally kept the blog post light on implementation details - partly to make it accessible to a broader audience, and partly because we’ll be gradually posting more details. Sounds like build caching/replay is high on the desired blogpost list - ack :-).
The build-system integration used here was a one-line change in the Android build tree. That said, you’re right - deeper integration with the build system could push the numbers even further, and that’s something we’re actively exploring.
You could manage this with a deterministic VM, cf. Antithesis.
Seems viable if you can wrap each build step with a start/stop signal.
At the start snapshot the filesystem. Record all files read & written during the step.
Then when this step runs again with the same inputs you can apply the diff from last time.
Some magic to hook into processes and do all of this automatically seems possible.
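A crude version of the before/after snapshot idea sketched above (hypothetical helper names; hashing the whole tree per step would be far too slow in practice, where a VFS sees the accesses for free):

```python
import hashlib
import os
import subprocess

def tree_state(root):
    """Content hash of every file under `root` -- the 'snapshot'."""
    state = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

def run_and_diff(cmd, root):
    """Run one build step and return the set of files it created or
    modified under `root` -- the diff a later replay would re-apply."""
    before = tree_state(root)
    subprocess.run(cmd, check=True, cwd=root)
    after = tree_state(root)
    return {path for path in after if before.get(path) != after[path]}
```

Diffing snapshots only reveals writes, not reads, which is why the read set would still need per-process tracing of some kind.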
I think I got the magic part. You can store all build system binaries in the VFS itself. When any binary gets executed, the VFS can return a small sham binary instead that checks the command-line arguments; if they match a prior run, it checks the inputs, and if those match too, it applies the previous output. If there is any mismatch, it executes the original binary as usual and produces the new output. Easy, and no process hacking necessary.
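The sham-binary dispatch could be mocked up along these lines. This is a toy under stated assumptions: the record format and the `sham` entry point are invented, and a real VFS would serve this logic as an actual executable rather than a Python function.

```python
import hashlib
import subprocess

def input_digest(paths):
    """Combined hash of the input files a prior run was seen to read."""
    h = hashlib.sha256()
    for p in sorted(paths):
        h.update(p.encode())
        with open(p, "rb") as f:
            h.update(f.read())
    return h.hexdigest()

def sham(real_binary, records, argv):
    """What the served stand-in would do: if the arguments match a
    recorded run and the inputs are unchanged, materialize the cached
    outputs; otherwise fall through to the real binary."""
    rec = records.get(tuple(argv))
    if rec is not None and input_digest(rec["inputs"]) == rec["digest"]:
        for path, data in rec["outputs"].items():
            with open(path, "wb") as f:
                f.write(data)
        return "replayed"
    subprocess.run([real_binary, *argv], check=True)
    return "executed"
```

The fall-through branch is what keeps correctness cheap: any mismatch, however caused, just runs the real tool.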
It seems like that plus some build output caching?
Hey everyone. I’m Serban, co-founder of Source.dev. Thanks for the upvotes and thoughtful discussion. I’ll reply to as many comments as I can. Nothing means more to an early-stage team than seeing we’re building something people truly value - thanks from all of us at Source.dev!
Once builds are "fast enough," there's no business case for the painful work of making the codebase comprehensible.
We're going to 1 billion LoC codebases and there's nothing stopping us!
Their performance claims are quite a bit ahead of the distributed android build systems that I've used, I'm curious what the secret sauce is.
Is it going to be anything more than just a fancier ccache?
It’s definitely not ccache, as they cover that under "compiler wrapper". This works for Android because a good chunk of the tree is probably dead code for any single build (device drivers and whatnot). It’s unclear how they benchmark - they probably include checkout time of the codebase, which artificially inflates the cost of the build (you only check out once). It’s a virtual filesystem like what Facebook has open-sourced, although they claim to also do build caching without needing a dedicated build system that is aware of it, and that part feels very novel.
Re: including checkout, it’s extremely unlikely. Source: I worked on Android for 7 years; a 2 hr build time tracks with the build time after checkout on a 128-core AMD machine. Checkout was O(hour), which would leave only an hour for the build if that were the case.
Obviously this is the best-case, hyper-optimized scenario and we were careful not to inflate the numbers.
The machine running SourceFS was a c4d-standard-16, and if I remember correctly, the results were very similar on an equivalent 8-vCPU setup.
As mentioned in the blog post, the results were 51 seconds for a full Android 16 checkout (repo init + repo sync) and ~15 minutes for a clean build (make) of the same codebase. Note that this run was mostly replay - over 99% of the build steps were served from cache.
I want this but self hosted/integrated into our CI (Gitlab in our case).
Please fill in this form: https://www.source.dev/demo . We’re prioritizing cloud deployments but are keen to hear about your use case and see what we can do.
Meh, content marketing for a commercial biz. There are no interesting technical details here.
I was a build engineer in a previous life. Not for Android apps, but some of the low-effort, high-value tricks I used involved:
* Do your building in a tmpfs if you have the spare RAM and your build (or parts of it) can fit there.
* Don't copy around large files if you can use symlinks, hardlinks, or reflinks instead.
* If you don't care about crash resiliency during the build phase (and you normally should not, each build should be done in a brand-new pristine reproducible environment that can be thrown away), save useless I/O via libeatmydata and similar tools.
* Cross-compilers are much faster than emulation for a native compiler, but there is a greater chance of missing some crucial piece of configuration and silently ending up with a broken artifact. Choose wisely.
The high-value high-effort parts are ruthlessly optimizing your build system and caching intermediate build artifacts that rarely change.
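The "link instead of copy" bullet from that list can be as small as the sketch below (`place_artifact` is a made-up name; reflink support would need a filesystem-specific call and is left out here):

```python
import os
import shutil

def place_artifact(src, dst):
    """Put a build artifact at `dst` without copying bytes when
    possible: hardlink on the same filesystem, real copy otherwise."""
    if os.path.exists(dst):
        os.remove(dst)
    try:
        os.link(src, dst)      # O(1): both names share the same inode
    except OSError:            # cross-device link, or FS without hardlinks
        shutil.copy2(src, dst)
```

The try/except keeps the fast path opportunistic: staging directories on the same disk get free "copies", and everything else still works.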
That’s all basic stuff, and none of it solves what this product claims to.
We hear you on the “we want more technical blogs” part - they’ll be coming once we get a breather. We kept this first post high-level to reach a broader audience. Thanks for reading!
From the page, it sounds like it’s specific to the Android source code. Why? Could this work with any codebase?
If my understanding is correct, this only makes sense for codebases that don’t fit in the memory of the largest build box an organisation can run.
I think the page itself answers your question pretty well.
I posted a longer answer to a similar question above, if you're interested. Thanks!
Why tf does an electric vehicle need 500m+ lines of code
Some people actually write tests.
We actually picked a fairly conservative number - there are even larger automotive codebases today.
For example, Mercedes’ MB.OS “is powered by more than 650 million lines of code” - see: https://www.linkedin.com/pulse/behind-scenes-mbos-developmen...
The world desperately needs a good open source VFS that supports Windows, macOS, and Linux. Waaaaay too many companies have independently reinvented this wheel. Someone just needs to do it once, open source it, and then we can all move on.
This. Such a product also solves some AI problems by letting you version very large amounts of training data in a VCS like git, which can then be farmed out for distributed unit testing.
HuggingFace bought XetHub which is really cool. It’s built for massive blobs of weight data. So it’s not a general purpose VCS VFS. The world still needs the latter.
I’d be pretty happy if Git died and was replaced with a full Sapling implementation. Git is awful, so that’d be great. Sigh.
Could you just do the build in /dev/shm?
No. `/dev/shm` would just be a build in `tmpfs`.
Though from what I gather from the story, part of the speedup comes from how Android composes its build stages.
I.e. speeding up by not downloading everything only helps if you don't need everything you download. And it adds up when you download multiple times.
I'm not sure they can actually provide a speedup in a tight developer cycle with a local git checkout and a good build system.
TL;DR: your build system is so f'd that you have gigs of unused source and hundreds of repeated executions of the same build step. They can fix that. Or, you could, I dunno, fix your build?
You could just have a monorepo with a large number of assets that aren't always relevant to pull.
Incremental builds and diff-only pulls are not enough in a modern workflow. You either need to keep a fleet of warm builders or you need to store and sync the previous build state to fresh machines.
Games, and I'm sure many other types of apps, fall into this category of long builds, large assets, and lots of intermediate build files. You don't even need multiple apps in a repo to hit this problem. There's no simple off-the-shelf solution.