Over a decade ago, the maintainer of SQLite gave a talk at OSCON about their testing practices. One concept that stood out to me was the power of checklists, the same tool pilots rely on before every flight.
He also mentioned Doctors Without Borders, who weren't seeing the outcomes they expected when it came to saving lives. One surprising reason? The medical teams often didn't speak the same language or even know each other's names.
The solution was simple: a pre-surgery checklist. Before any procedure, team members would state their name and role. This small ritual dramatically improved their success rates, not through better technique, but through better communication.
https://sqlite.org/src/ext/checklist/3070700/index
I've always found an enormous number of good practices (not just engineering ones) in aircraft operations and engineering that would be applicable to software engineering.
I've always daydreamed of an IT organization that combined those with the decision-making procedures and leadership of modern armies, such as the US Army's.
I've re-read FM22-100 multiple times and find it strikingly modern and inspiring:
https://armyoe.com/wp-content/uploads/2018/03/1990-fm-22-100...
While I do understand that business leadership cannot compare to the high standards demanded where the stakes are far greater, I think there are many lessons to learn there too.
In some areas, I absolutely agree... I think when it comes to vehicles, medical devices and heavy equipment, it would be better to see much more rigorous practices in terms of software craftsmanship. I think it should be very similar in financial operations (it isn't) and most govt work in general (it isn't).
In the end, for most scenarios, break fast, fix fast is likely a more practical and cost effective approach.
That is really insightful regarding the ritual improving outcomes through better communication - something I see reflected in many meetings I turn up to now that involve an introduction round between participants, which anecdotally improves participation in the meeting.
It would be amazing if someone had a link to a page with the MSF story, as that is a great reference to have! My google-fu hasn’t helped me in this case.
For posterity, the original report on the pilot program for the checklist, which included having team members introduce themselves by name (doi: 10.1056/NEJMsa0810119).
Possibly popularised by Atul Gawande's “The Checklist Manifesto”.
Meta-comment: LLMs continue to impress me with their ability to unearth information from imprecise inputs/queries.
A lot of these seem like they could be automated - why aren’t they? Anyone know?
The stupid answer is that not everything that can be automated should be.
The real answer is of a more philosophical nature: if you manually have to check A, B, C... Z, then you will have a better understanding of the state of the system you work with. If something goes wrong, at least the bits you checked can be disregarded, freeing you to check other factors. And what if your system correctly reports a fault, yet your automated checklist doesn't catch it?
Also, this manual checklist checks the operator.
You should be automating everything you can, but much care should be put into figuring out if you can actually automate a particular thing.
Automate away the process to deploy a new version of HN; what's the worst that can happen?
But don't automate the pre-flight checklist: if something goes wrong while the plane is in the air, people are going to die.
I think a less verbose version of the above is that a human can detect a fault in a sensor, while a sensor can't detect that it is itself faulty.
I'm not a pilot, but my brother is, and I've watched him go through these a bunch of times before takeoff and landing. I think it's about more than automation: these days the aircraft computer "walks" the pilots through the checklists, but it's still their responsibility to verify each item. It's an interesting approach to automation, keeping humans in the loop and actually promoting responsibility and accountability, as in "who checked off on these?"
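For the curious, here's a minimal sketch of what such a human-in-the-loop checklist runner could look like, in Python; the items, prompts, and log format are all invented for illustration:

    import datetime

    # Hypothetical checklist items; a real pre-flight list would come from
    # the aircraft's operating handbook, not from code.
    CHECKLIST = [
        "Flight controls free and correct",
        "Fuel quantity checked",
        "Altimeter set",
    ]

    def run_checklist(operator: str) -> None:
        # The automation sequences and records the checks, but a human still
        # performs and vouches for each one: the checklist checks the
        # operator, and the operator checks the sensors.
        for item in CHECKLIST:
            answer = input(f"{item} -- confirm? [y/N] ").strip().lower()
            if answer != "y":
                raise SystemExit(f"ABORT: {item!r} not confirmed by {operator}")
            # Record who checked off on this item, and when.
            print(f"{datetime.datetime.now().isoformat()} {operator}: {item} OK")

    if __name__ == "__main__":
        run_checklist(operator="pilot-in-command")

The point isn't the code; it's that every item carries a name and a timestamp, so "who checked off on these?" always has an answer.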
Related. Others?
How SQLite Is Tested - https://news.ycombinator.com/item?id=38963383 - Jan 2024 (1 comment)
How SQLite Is Tested - https://news.ycombinator.com/item?id=29460240 - Dec 2021 (47 comments)
How SQLite Is Tested - https://news.ycombinator.com/item?id=11936435 - June 2016 (57 comments)
How SQLite Is Tested - https://news.ycombinator.com/item?id=9095836 - Feb 2015 (17 comments)
How SQLite is tested - https://news.ycombinator.com/item?id=6815321 - Nov 2013 (37 comments)
How SQLite is tested - https://news.ycombinator.com/item?id=4799878 - Nov 2012 (6 comments)
How SQLite is tested - https://news.ycombinator.com/item?id=4616548 - Oct 2012 (40 comments)
How SQLite Is Tested - https://news.ycombinator.com/item?id=633151 - May 2009 (28 comments)
(Reposts are fine after a year or so; links to past threads are just to satisfy extra-curious readers)
Always makes me a bit envious as well as awestruck. What a joy it must be in a lot of ways to be able to grind and perfect a piece of software like this. Truly a work of craftsmanship.
You can literally just do this. I’ve never gotten fired from a software engineering job for moving slower and building things that work well, work predictably, and are built to last.
Over a career of working at it, you get dramatically better at reaching higher levels of quality even in earlier passes, so the same added effort provides increasing reward as you gain experience.
Nobody ever complains about the person who’s leaving everything they touch a little cleaner than how they found it.
> I’ve never gotten fired from a software engineering job for moving slower and building things that work well, work predictably, and are built to last.
In most companies, that’s not how it plays out. Once something works, you’re immediately moved on to the next task. If you’ve had the time and space to refine, polish, and carefully craft your code, you’ve been fortunate.
The person who signals that some task works and is finished is you. You have way more control over this than you are giving yourself credit for.
If you spend your career acquiescing to every request to “just ship it” then, yes, slowing down a second to do a quality pass will seem impossible. But you really can just do it.
> The person who signals that some task works and is finished is you.
That's not how it works in most big companies. You can't take arbitrarily long to finish a project. Before the project is greenlit you have to give an estimate for how long it will take. If your estimate is too big or seems unreasonable, the project dies then and there (or is given to someone else). Once the project starts you're held to the estimate, and if you're taking noticeably longer than your estimate you'd better have a good explanation.
Nobody is saying take three years to complete a week-long task. If you could do a task in one hour, estimate and take two. If you could do it in two days, estimate and take a third day to complete it. Or better yet, estimate three days and take two and a half.
I have never seen a software development shop where estimates were treated as anything other than loose, best guesses. Very infrequently are there actually ever genuinely immutable, hard deadlines. If you are working somewhere where that's repeatedly the case—and those deadlines are regularly unrealistically tight—failure is virtually inevitable no matter what you do. So sure, fine, if you're on a death march my suggestions won't work. But in that kind of environment nothing will.
> Nobody ever complains about the person who’s leaving everything they touch a little cleaner than how they found it.
This should be true, but it's not in my experience. Even small, clear improvements are rejected as off-mission or shunted to a backlog to be forgotten. Like, "cool, but let's hold off on merging this until we can be certain it's safe", or in other words "this is more work for me and I'd really rather not".
Do it as part of ticket work you're already doing. There is always a way to leave things better than how you found them.
I have worked across a wide gamut of roles (full-stack eng, infosec, deploy infra, devops, infra eng, sysadmin), companies (big and small, startups and huge multibillion-dollar players), and industries (finance, datacenters, security products, gaming, logistics, manufacturing, AI) over a thirty year career and I have never felt the level of helplessness that people seem to be claiming. Some places have been easier, some have been harder, but I have never once found it to be as difficult or impossible as everyone laments is the case.
That's really great and I'm happy for you but your experience is not universal.
My point isn’t that my experience is universal, but that I find it statistically unlikely that this is nearly as hard as people are making it out to be at the majority of SWE roles.
If you find yourself repeatedly working at places where the only option is to crank out garbage at breakneck pace, I don’t know what to tell you. If you stipulate as an axiom that it's impossible to write quality software at ${JOB}, then you're right, by definition there's nothing to be done. I just don't find that a particularly helpful mindset.
You can definitely go overboard for work. If you want to do it as a hobby, go nuts, but there isn't a point in overengineering far beyond what is needed (recall the Juicero)
Overengineering is building a bridge that will stand 1000 years when 100 will do; it's excess rigor for marginal benefit. Juicero wasn't overengineering: it was building a crappy bridge to nowhere, with a bunch of gaudy bells and whistles to try to hide its uselessness and poor design, that collapsed with the first people to walk over it.
The pendulum has swung so far in the direction opposite of going overboard it’s almost laughable. Everyone retells the same twenty year old horror stories of architecture astronauts, but over a nearly thirty-year career I have seen precisely zero projects that failed due to engineers over-engineering, over-architecting, and over-refactoring.
I have however seen dozens of projects where productivity grinds to a halt due to the ever-increasing effort of even minor changes due to a culture of repeatedly shipping the first thing that vaguely seems to work.
The entire zeitgeist of software development these days is “move fast and break things”.
Same. But it took learning to ignore everything every manager was telling me: Go faster, ship before I'm ready to ship, proceed without agreed-on and documented requirements, listen to product instead of customers, follow code conventions set by others...
I love SQLite; it's a great piece of software. The website is full of useful information, rather than the slick marketing we are used to, even on open-source projects.
With that said, I find it strange how the official website seems to be making its way through the HN front page piecemeal.
This one is probably popping today because of the simonw post yesterday about using an LLM to basically one-shot port a lib across languages with the help of an extremely robust test suite
If you wait here long enough, it happens again, and again, and again, and again...to the point you start wanting to skewer it. :)
EDIT: Haskell was early 2010s Zig, and Zig is in the traditional ~quarter-long downcycle, after the last buzzkill review post re: all the basic stuff it's missing, ex. a working language server protocol implementation. I predict it'll be back in February. I need to make a list of this sort of link, just for fun.
I was pleasantly surprised recently when planning to "upgrade" a light web app to be portable between SQLite and DuckDB, and the LLM I was working with really made the case that SQLite is better if concurrent operations were to occur.
No less impressive than the SQLite project itself; especially 100% branch coverage! That's really hard to pull off and especially to maintain as the development continues.
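For anyone unfamiliar with why branch coverage is so much harder than plain line coverage, here's a toy Python illustration (names invented): a single test can execute every line of a function while still leaving one branch untested.

    def clamp_positive(x: int) -> int:
        # A single call with x = -5 executes every *line* here...
        if x < 0:
            x = 0
        return x

    # ...but branch coverage also demands the case where the `if` is not taken:
    assert clamp_positive(-5) == 0  # the taken branch
    assert clamp_positive(7) == 7   # the fall-through branch

Multiply that obligation across every condition in a large C codebase and you get a sense of the discipline involved.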
They need to do better testing to stop the whole database file getting corrupted, which happened a ton to me with SQLite.
I am surprised to see that there isn't a lot of information about performance regression testing.
Correctness testing is important, but given the way SQLite is used, potential performance drops in specific code paths or specific types of queries could be really bad for apps that use it in critical paths.
While I’ve worked in HFT and understand the sentiment, I can’t recall any open-source project I’ve used coming out with performance guarantees. Most use license language disclaiming any guarantee or warranty. Are there notable projects that do include this consideration as part of their core mission?
I believe every sensible open-source developer strives to keep their software performant. To me, a performance regression is a bug like any other, and I go and fix it. Sure, there's no warranty guaranteed in the license, yet no one who takes their project even a little seriously treats it as "I can break this any way I want".
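A sketch of what a guard against that kind of bug can look like (the workload, baseline file, and tolerance below are all invented for illustration):

    import json
    import time
    from pathlib import Path

    BASELINE_FILE = Path("perf_baseline.json")  # hypothetical baseline store
    TOLERANCE = 1.25  # fail if more than 25% slower than the baseline

    def work_under_test() -> None:
        # Stand-in for the hot path you care about (e.g. one query pattern).
        sum(i * i for i in range(100_000))

    def measure(repeats: int = 5) -> float:
        # Best-of-N timing to damp scheduler noise.
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            work_under_test()
            best = min(best, time.perf_counter() - start)
        return best

    def test_no_perf_regression() -> None:
        elapsed = measure()
        if not BASELINE_FILE.exists():
            # First run: record a baseline instead of failing.
            BASELINE_FILE.write_text(json.dumps({"seconds": elapsed}))
            return
        baseline = json.loads(BASELINE_FILE.read_text())["seconds"]
        assert elapsed <= baseline * TOLERANCE, (
            f"perf regression: {elapsed:.4f}s vs baseline {baseline:.4f}s"
        )

    if __name__ == "__main__":
        test_no_perf_regression()
        print("perf check passed")

Wall-clock thresholds are noisy on shared CI hardware, which is part of why this gets done less often than correctness testing.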
This looks so very cool, and all the more thought-provoking that the tests themselves are closed-source, unlike the rest of the codebase. In this evolving world of rapidly improving LLM coding-agent productivity, the idea that the tests are more important than the implementation starts to ring true.
I was thinking about sqlite's test landscape as described here, in relation to simonw's recent writing about porting/recreating the justHTML engine from python to js via codex, nearly "automatically" with just a prompt and light steering.
It makes a lot of sense in general if you think about the business models around open-source products. An extensive test suite gives you the ability to engineer changes effectively and efficiently, meaning that you can also add value on top of released versions better than everyone else.
SQLite's HowSQLiteIsTested reads like a bible of testing. I've known few projects that score even merely "highly" by comparison.
Based on the stability track record, I was more curious about how SQLite has done the anomaly testing. Sadly, the article has just a few words about it.
Truly one of the best software products! It is used on every single device, and it is just pure rock-solid.
Considering that the support tier where you get access to the testing suite is 150K/year, I don't think they will be spilling any beans soon.
Interesting, TH3 is proprietary.
What is the story with Fossil? Is it used outside of Sqlite?
The story of Fossil:
Something better than CVS was needed. (I'm not being critical of CVS. I had to use the VCSes that came before, and CVS was amazing compared to them.) Monotone gave me the idea of doing a distributed VCS and storing content in SQLite, but Monotone didn't support sync over HTTP, which I definitely wanted. Git had just appeared, and was really bad back in those early years. (It still isn't great, IMO, though people who have never used anything other than Git are quick to dispute that claim.) Mercurial was... Mercurial. So I decided to write my own DVCS.
This turned out to be a good thing, though not in the way I expected. Since Fossil is built on top of SQLite, Fossil became a test platform for SQLite. Furthermore, when I work on Fossil, I see SQLite from the point of view of an application developer using SQLite, rather than in my usual role of a developer of SQLite. That change in perspective has helped me make SQLite better. Being the primary developer of the DVCS for SQLite, in addition to SQLite itself, also gives me the freedom to adapt the DVCS to the specific needs of the SQLite project, which I have done on many occasions. People make fun of me for writing my own DVCS for SQLite, but on balance it was a good move.
Note that Fossil is like Git in that it stores check-ins in a directed acyclic graph (DAG), though the details of each node are different. The key difference is that Fossil stores the DAG in a relational database (SQLite) whereas Git uses a custom "packfile" key/value store. Since the content is in a relational database, it is really easy to add features like tickets, and wiki, and a forum, and chat - you've got an RDBMS sitting there, so why not use it? Even without those bonus features, you also have the benefit of being able to query the DAG using SQL to get useful information that is difficult to obtain from Git. "Detached heads" are not possible in Fossil, for example. Tags are not limited by filesystem filename restrictions. You can tag multiple check-ins with the same tag (ex: all releases are tagged "release"). If you reference an older check-in in the check-in comment of a newer check-in, then go back and look at the older check-in (perhaps you bisected there), it will give a forward reference to the newer one. And so forth.
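To make "query the DAG using SQL" concrete, here is a small sketch using Python's bundled sqlite3 module; the table name plink is borrowed loosely from Fossil, but the columns and data here are invented:

    import sqlite3

    # Toy stand-in for a check-in DAG, stored as an edge list.
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE plink(parent TEXT, child TEXT);
        INSERT INTO plink VALUES
            ('c1','c2'), ('c2','c3'), ('c2','c4'),  -- branch at c2
            ('c3','c5'), ('c4','c5');               -- merge at c5
    """)

    # All ancestors of check-in c5, via a recursive CTE.
    ancestors = con.execute("""
        WITH RECURSIVE anc(id) AS (
            SELECT parent FROM plink WHERE child = 'c5'
            UNION
            SELECT plink.parent FROM plink JOIN anc ON plink.child = anc.id
        )
        SELECT id FROM anc ORDER BY id
    """).fetchall()

    print([row[0] for row in ancestors])  # ['c1', 'c2', 'c3', 'c4']

The same ancestry walk in Git means traversing packfile objects one by one; here it's a single declarative query.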
This is wild, thank you for answering!
About Fossil, I really liked how everything is integrated into the VCS.
My friends also make fun of me for having some tools that only I use. Somehow, understanding a tool down to the last little detail is satisfying in itself. We live in an era of software bloat that does not make much sense.
Anyways, thanks for SQLite; I use it for teaching SQL to students and for my custom small-scale monitoring system.
SQLite & Fossil* were created by the same person (once a member of the Tcl Core Team), Fossil a few years after SQLite (which was on CVS before). A rationale is given in: https://sqlite.org/whynotgit.html. The one other big project using it is Tcl/Tk. (One could say Tcl, SQLite, and Fossil form a trinity of sorts, each using the others.)
*The homepage is available in: https://fossil-scm.org/home/doc/trunk/www/index.wiki.
> Is it used outside of Sqlite?
Not really. It's one of the early _distributed_ version control systems [0], released a little after git but before git gained widespread acceptance.
It has a built-in (optional) web UI, which is cool, and uses SQLite to store its state/history.
[0] https://en.wikipedia.org/wiki/Fossil_(software)
I can't answer that but it's a great thing that an entire Fossil repo lives in a single sqlite file.
Perhaps someone in the know can answer this: How reliable is SQLite at retaining data integrity and avoiding data corruption, compared to say, flat text files?
... very thoroughly is the answer
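One concrete thing you can do from the application side: SQLite ships a built-in self-check. A minimal example with Python's bundled sqlite3 module (the file name is hypothetical):

    import sqlite3

    # PRAGMA integrity_check scans the database file for corruption
    # (malformed pages, broken indexes, etc.) and returns 'ok' when clean.
    con = sqlite3.connect("app.db")
    result = con.execute("PRAGMA integrity_check").fetchone()[0]
    print("database is healthy" if result == "ok" else f"problem: {result}")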
What a superb piece of software SQLite is.
Install and forget.