Why Is SQLite Coded In C

(sqlite.org)

294 points | by plainOldText 20 hours ago

306 comments

  • jasonthorsness 17 hours ago

    “None of the safe programming languages existed for the first 10 years of SQLite's existence. SQLite could be recoded in Go or Rust, but doing so would probably introduce far more bugs than would be fixed, and it may also result in slower code.”

    Modern languages might do more than C to prevent programmers from writing buggy code, but if you already have bug-free code due to massive time, attention, and testing, and the rate of change is low (or zero), it doesn’t really matter what the language is. SQLite could be assembly language for all it would matter.

    • 1vuio0pswjnm7 27 minutes ago

      This begs the question of why Rust evangelists keep targeting existing projects instead of focusing on writing new, better software. In theory these languages should allow software developers to write programs that they would not, or could not, attempt using languages without automatic memory management.

      Instead what I see _mostly_ is re-writes and proposed re-writes of existing software, often software that has no networking functions, and/or relatively small, easily audited software that IMHO poses little risk of memory-related bugs

      This is concerning to me as an end-user who builds their software from source because the effects on compilation, e.g., increased resource requirements, increased interdependencies, increased program size, decreased compilation speed, are significant

      • MangoToupe 7 minutes ago

        > This begs the question of why Rust evangelists keep targeting existing projects instead of focusing on writing new, better software.

        Designing new software is orders of magnitude more difficult than iterating on existing software

    • oconnor663 17 hours ago

      > and the rate of change is low (or zero)

      This jibes with a point that the Google Security Blog made last year: "The [memory safety] problem is overwhelmingly with new code...Code matures and gets safer with time."

      https://security.googleblog.com/2024/09/eliminating-memory-s...

      • miohtama 11 hours ago

        You can find historical SQLite CVEs here

        https://www.sqlite.org/cves.html

        Note that although code matures, the chance of human-error bugs in C will never go to zero. We have bad incidents like Heartbleed to show this.

        • hnlmorg 10 hours ago

          Heartbleed was a great demonstration of critical systems that were underappreciated.

          Too few maintainers, too few security researchers and too little funding.

          When writing systems as complicated and as sensitive as the leading encryption suite used globally, no language choice will save you from under-resourcing.

        • ziotom78 11 hours ago

          Right, but I believe nobody can claim that human-error bugs go to zero for Rust code.

          • john_the_writer 9 hours ago

            Agreed. I rather dislike the idea of "safe" coding languages. I've been fighting a memory leak in an Elixir app for the past week. I never viewed C or C++ as unsafe. Writing code is hard, always has been, always will be. It is never safe.

            • simonask 9 hours ago

              This is a bit of a misunderstanding.

              Safe code is just code that cannot have Undefined Behavior. C and C++ have the concept of "soundness" just like Rust; they just have no way to statically guard against it.

              • gcr 5 hours ago

                Modern compilers like clang and GCC both have static analysis for some of this. Check out the undefined behavior sanitizer.

                • simonask 2 hours ago

                  As the other person pointed out, these are two different things. Sanitizers add runtime checks (with zero consideration for performance - don’t use these in production). Static analysis runs at compile time, and while both GCC and Clang are doing an amazing job of it, it’s still very easy to run into trouble. They mostly catch the low-hanging fruit.

                  The technical reason is that Rust-the-language gives the compiler much more information to work with, and it doesn’t look like it is possible to add this information to the C or C++ languages.

                • humanrebar 4 hours ago

                  Sanitizers are technically dynamic analysis. They instrument built programs and analyze them as they run.

            • jen20 4 hours ago

              A memory leak is not a memory safety issue.

    • weinzierl 9 hours ago

      "SQLite could be recoded in Go or Rust, but doing so would probably introduce far more bugs than would be fixed, and it may also result in slower code."

      We will see. On the Rust side there is Turso which is pretty active.

      https://turso.tech/

      • Sammi 7 hours ago

        The SQLite team is stuck in the classic dilemma. They are stuck with their existing thing because it is so big that you can't just stop the world for your existing users and redo it. Meanwhile some small innovator comes along and builds the next thing because why not, they don't have anything holding them back. This is classic Innovator's Dilemma and Creative Destruction. This has of course not happened yet and we have to wait and see if Turso can actually deliver, but the Turso team is extremely talented and their git repo history is on fire, so it is definitely a possible scenario.

        • saalweachter 3 hours ago

          You say that like SQLite is a for-profit company competing for market share, or one of several rival projects in a corporation trying not to be canceled.

          It's an open-source project! It's public domain!

          If you make an open source project that is heavily used and widely lauded for a quarter century before being supplanted by a newer solution that's better, do you know what that is?

          A success! You did it! You made a thing that was useful and loved!

          Nothing lasts forever; there's nothing wrong with a project filling a niche and then gracefully fading away when that niche goes away.

          • Sammi 3 hours ago

            Funny you should say so, because I actually made an effort _not_ to use corporate business terms. Open source projects definitely do compete with each other, both for developer and user attention and for general prestige, which in turn may be leveraged to get access to funding and other development resources in various ways. And in an even funnier turn of events, SQLite development is actually funded by a for-profit company that sells SQLite support contracts: https://www.sqlite.org/consortium.html

    • ChrisRR 7 hours ago

      This is the argument against rewriting the Linux base utils in Rust. When they've had such widespread use for decades, a hell of a lot of bugs have been ironed out.

      • bombcar 6 hours ago

        Especially since memory bugs are only a subset of all bugs, and (perhaps) not even the most important subset.

        Memory bugs are often implicated in security issues, but other bugs are more likely to cause data loss, corruption, etc.

    • devjab 7 hours ago

      I quite like that Zig works as a drop-in for C in a few use cases. It's been very nice to utilize it along with our Python and regular C binaries. We attempted to move into Go because we really like the philosophy and opinions it forces upon its developers, but similar to interpreted languages it can be rather hard to optimize it. I'm sure people more talented than us would have an easy time with it, but they don't work for us. So it was easier to just go with Python and a few parts of it handled by C (and in even fewer cases Zig).

      I guess we could use Rust and I might be wrong on this, but it seemed like it would be a lot of work to utilize it compared to just continuing with C and gradually incorporating Zig, and we certainly don't write bug-free C.

      • Hendrikto 6 hours ago

        > We attempted to move into Go […], but similar to interpreted languages it can be rather hard to optimize it. […] So it was easier to just go with Python

        I don’t get that. You had trouble optimizing Go, so you went with Python?

    • nabhasablue 11 hours ago

      There is already an SQLite port in Go :) https://gitlab.com/cznic/sqlite

      • ncruces 8 hours ago

        That's not a port. That's an extremely impressive machine translation of C to Go.

        The output is a non-portable half-a-million LoC Go file for each platform.

        • cratermoon 2 hours ago

          also unmaintainable and full of unsafe

    • ahoka 7 hours ago

      Sure they did exist, almost no one cared though.

    • pizza234 8 hours ago

      > you already have bug-free code due to massive time, attention, and testing, and the rate of change is low (or zero), it doesn’t really matter what the language is. SQLite could be assembly language for all it would matter.

      This is the C/C++ delusion - "if one puts in enough effort, a [complex] memory-unsafe program can be made memory safe"; the year after this page was published, the Magellan series of RCEs was released.

      Keeping SQLite in C is certainly a valid design choice, but it's important to be aware of the practical implications of the language.

    • pjmlp 7 hours ago

      The author cleverly leaves out all the safer alternatives that have existed outside UNIX, and what was happening with computers outside Bell Labs during the 1970's.

      Not only was Apple able to launch the Mac Classic with zero lines of C code; their Pascal dialect lives on in Delphi and Free Pascal.

      As one example.

      • wat10000 5 hours ago

        Isn’t Pascal just as problematic as C in this respect? And the original Mac was mostly (all?) assembly, not Pascal. They couldn’t have used a higher level language anyway. The system was just too small for that. SQLite wouldn’t fit on it.

        • pjmlp 5 hours ago

          Not at all, because Pascal is more strongly typed and has type-safety features that C has yet to acquire.

          A non-exhaustive list:

          - proper strings with bounds checking

          - proper arrays with bounds checking

          - no pointer decay; you need to be explicit about getting pointers to arrays

          - fewer implicit conversions; more explicit typecasts required

          - reference parameters reduce the need for pointers

          - variant records directly support tags

          - enumerations are stronger typed without implicit conversions

          - modules with better control over what gets exposed

          - range types

          - set types

          - arenas

          There was plenty of Pascal code on Mac OS, including a Smalltalk-like OOP framework, until C++ took over Object Pascal's role at Apple, which, again, isn't C.

          I love the usual "but it used Assembly!" rebuttal, as if OSes written in C weren't full of Assembly, or inline Assembly and language extensions that certainly aren't C (ISO C proper) either.

          If you prefer, Zig is a modern take on what Modula-2 in 1978 and Object Pascal in the 1980s already offered, with the addition of nullable types and comptime as the key differentiators in 40 years, packaged in a more appealing syntax for current generations.

          • wat10000 4 hours ago

            My point is that the original Mac used little to no Pascal. The assembly isn’t a “rebuttal,” it’s just what was actually used.

            • pjmlp 4 hours ago

              Sure if the only thing that matters is what happened in 1990, and nothing else that came afterwards.

              Also if we ignore the historical tellings from Apple employees at the time, in places like the Folklore book and CHM interviews.

              • wat10000 3 hours ago

                What is with people on HN using a specific example and then getting annoyed when I respond to it? You specifically said Apple launched the original Mac without C. Which is true, but the implication that it used Pascal is not. I'm not addressing what happened years later.

                Can you elaborate on these historical tellings? From what I found on folklore.org, Lisa OS had a bunch of Pascal, and the Mac system borrowed a bunch of Lisa code, but it was all hand-translated to assembly in the process.

  • saalweachter 17 hours ago

    I think beyond the historical reasons why C was the best choice when SQLite was being developed, or the advantages it has today, there's also just no reason to rewrite SQLite in another language.

    We don't have to have one implementation of a lightweight SQL database. You can go out right now and start your own implementation in Rust or C++ or Go or Lisp or whatever you like! You can even make compatible APIs for it so that it can be a drop-in replacement for SQLite! No one can stop you! You don't need permission!

    But why would we want to throw away the perfectly good C implementation, and why would we expect the C experts who have been carefully maintaining SQLite for a quarter century to be the ones to learn a new language and start over?

    • jacquesm 11 hours ago

      > But why would we want to throw away the perfectly good C implementation, and why would we expect the C experts who have been carefully maintaining SQLite for a quarter century to be the ones to learn a new language and start over?

      Because a lot of language advocacy has degraded to telling others what you want them to do instead of showing by example what to do. The idea behind this is that language adoption is some kind of zero sum game. If you're developing project 'x' in language 'y' then you are by definition not developing it in language 'z'. This reduces the stature of language 'z' and the continued existence of project 'x' in spite of not being written in language 'z' makes people wonder if language 'z' is actually as much of a necessity as its proponents claim. And never mind the fact that if the decision in what language 'x' would be written were to be revisited by the authors of 'x' that not only language 'z' would be on the menu, but also languages 'd', 'l', 'j' and 'g'.

      • waterTanuki 10 hours ago

        Given that the common retort to "why not try project X in new language Y" is "it's barely used in other things; let's wait and see it get industry adoption before trying it out," it's hard to see it as anything OTHER than a zero-sum game. As much as I like Rust, I recognize some things like SQLite are better off in C. But the reason you see so much push for some new languages is because if they don't get and maintain regular adoption, they will die off.

        • jacquesm 10 hours ago

          Plenty of programming languages gained mass adoption without such tactics.

        • john_the_writer 9 hours ago

          Yeah.. I always remind myself of the Netscape browser. A lesson in "if it's working, don't mess with it." My question is always the reverse: why try it in new language Y? Is there some feature that Y provides that was missing in X? How often do those features come up?

          Company I worked for decided to build out a new microservice in language Y. The whole company was writing in W and X, but they decided to write the new service in Y. When something goes wrong, or a bug needs fixing, 3 people in the company of over 100 devs know Y. Guess what management is doing.. Re-writing it in X.

    • biohazard2 7 hours ago

      > there's also just no reason to rewrite SQLite in another language. […] But why would we want to throw away the perfectly good C implementation, and why would we expect the C experts who have been carefully maintaining SQLite for a quarter century to be the ones to learn a new language and start over?

      The SQLite developers are actually open to the idea of rewriting SQLite in Rust, so they must see an advantage to it:

      > All that said, it is possible that SQLite might one day be recoded in Rust. Recoding SQLite in Go is unlikely since Go hates assert(). But Rust is a possibility. Some preconditions that must occur before SQLite is recoded in Rust include: […] If you are a "rustacean" and feel that Rust already meets the preconditions listed above, and that SQLite should be recoded in Rust, then you are welcomed and encouraged to contact the SQLite developers privately and argue your case.

      • yomismoaqui 5 hours ago

        My theory is they wrote this just to get the ‘rewrite everything in Rust’ crowd off their backs.

        • rirze 5 hours ago

          I think it’s the opposite. They want to at least explore rewriting in Rust but are afraid of backlash, which is why they’re open to private discussion. I can imagine they are split internally.

    • glandium 11 hours ago

      And, in fact, these implementations exist. At least in Rust, there's rqlite and turso.

      • otoolep 4 hours ago

        rqlite[1] author here. Just to be clear, rqlite is not SQLite rewritten in Go. rqlite uses the vanilla C code, and calls it from Go[2]. I consider that an important advantage over other approaches -- rqlite gets all the benefits of rock-solid[3] SQLite. As a result, there are no questions about the quality of the database engine.

        [1] https://rqlite.io

        [2] https://rqlite.io/docs/design/

        [3] https://www.sqlite.org/testing.html

    • etruong42 an hour ago

      To build excitement in a project and potentially release new versions with new features, all more safely than adding C code.

    • AdamJacobMuller 15 hours ago

      One good reason is that people have written golang adapters, so that you can use sqlite databases without cgo.

      I agree with what I think you're saying, which is that "sqlite" has, to some degree, become so ubiquitous that it's evolved beyond a single implementation.

      We, of course, have sqlite the C library but there is also sqlite the database file format and there is no reason we can't have an sqlite implementation in golang (we already do) and one in pure rust too.

      I imagine that in the future that will happen (pure rust implementation) and that perhaps at some point much further in the future, that may even become the dominant implementation.

      • zimpenfish 7 hours ago

        > One good reason is that people have written golang adapters, so that you can use sqlite databases without cgo.

        There's also the Go-wrapped WASM build of the C sqlite[0] which is handy.

        [0] https://github.com/ncruces/go-sqlite3

    • turtletontine 16 hours ago

      Thanks for this, I fully agree. One frustration I have with the modern moment is the tendency to view anything more than five years old with disdain, as utterly irrelevant and obsolete. Maybe I’m just getting old, but I like my technology dependable and boring, especially software. Glad to see someone express respect for the decades of expertise that have gone into things we take for granted.

    • eusto 8 hours ago

      I think that if SQLite suddenly had to add a bunch of new features, the discussion about rewriting it would be very relevant.

      I think we like to fool ourselves that decisions like these are based on performance considerations or maintainability or whatever, but in reality they would be based on time to market and skill availability in the areas where the team is being built.

      At the end of the day, SQLite is not being rewritten because the cost of doing so is not justifiable

      • RhysU 35 minutes ago

        These guys are, after all, running a business. If they thought the best thing for their business was a rewrite, they'd do it.

  • bfkwlfkjf 17 hours ago

    > Safe languages insert additional machine branches to do things like verify that array accesses are in-bounds. In correct code, those branches are never taken. That means that the machine code cannot be 100% branch tested, which is an important component of SQLite's quality strategy.

    Huh it's not everyday that I hear a genuinely new argument. Thanks for sharing.

    • Aurornis 16 hours ago

      I guess I don’t find that argument very compelling. If you’re convinced the code branch can’t ever be taken, you also should be confident that it doesn’t need to be tested.

      This feels like chasing arbitrary 100% test coverage at the expense of safety. The code quality isn’t actually improved by omitting the checks even though it makes testing coverage go up.

      • nimih 14 hours ago

        > If you’re convinced the code branch can’t ever be taken, you also should be confident that it doesn’t need to be tested.

        I don't think I would (personally) ever be comfortable asserting, with 100% confidence, that a code branch in the machine instructions emitted by a compiler can't ever be taken, no matter what, in most realistic application or library development situations. Doing so would require a type system powerful enough to express such an invariant, and in that case, surely the compiler would not emit the branch code in the first place.

        One exception might be the presence of some external formal verification scheme which certifies that the branch code can't ever be executed, which is presumably what the article authors are gesturing towards in item D on their list of preconditions.

        • timv 12 hours ago

          The argument here is that they're confident that the bounds check isn't needed, and would prefer the compiler not insert one.

          The choices therefore are:

          1. No bound check

          2. Bounds check inserted, but that branch isn't covered by tests

          3. Bounds check inserted, and that branch is covered by tests

          I'm skeptical of the claim that if (3) is infeasible then the next best option is (1)

          Because if it is indeed an impossible scenario, then the lack of coverage shouldn't matter. If it's not an impossible scenario then you have an untested case with option (1) - you've overrun the bounds of an array, which may not be a branch in the code but is definitely a different behaviour than the one you tested.

          • skywhopper 6 hours ago

            I think you’re misreading their statement. They aren’t saying they don’t want the compiler to insert the additional code. They’re saying they want to test all code the compiler generates.

      • estebank 16 hours ago

        In safety-critical spaces you need to be able to trace any piece of a binary back to code, and from code back to requirements. If a piece of running code is only implicit in the source, it makes that traceability back to requirements harder. But I'd be surprised if things like bounds checks are really a problem for that kind of analysis.

        • Aurornis 12 hours ago

          I don’t see the issue. The operations which produce a bounds check are traceable back to the code which indexes into something.

        • 0xWTF 11 hours ago

          What tools do you use for this? PlantUML?

        • refulgentis 16 hours ago

          Yeah sounds too clever by half, memory safe languages are less safe because they have bounds checks...maybe I could see it on a space shuttle? Well, only in the most CYA scenarios, I'd imagine.

          • evil-olive 14 hours ago

            > maybe I could see it on a space shuttle?

            "Airbus confirms that SQLite is being used in the flight software for the A350 XWB family of aircraft."

            https://www.sqlite.org/famous.html

          • sgarland 16 hours ago

            Bear in mind that SQLite is used in embedded systems, and I absolutely wouldn’t be surprised to learn it’s in space.

          • manwe150 16 hours ago

            Critical applications like that used to use Ada to get much more sophisticated checking than just bounds. No certified engineer would (should) ever design a safety-critical system without multiple “unreachable” fail-safe mechanisms.

            Next they’ll have to tell me about how they had to turn off inlining because it creates copies of code which adds some dead branches. Bounds checks are just normal inlined code. Any bounds checked language worth its salt has that coverage for all that stuff already.

          • skywhopper 6 hours ago

            SQLite is used in a lot of hypercritical application areas. I’d almost be surprised if it’s not part of some if not all modern spaceflight stacks.

      • throw0101d 6 hours ago

        > If you’re convinced the code branch can’t ever be taken, you also should be confident that it doesn’t need to be tested.

        “What gets us into trouble is not what we don't know. It's what we know for sure that just ain't so.” — Mark Twain, https://www.goodreads.com/quotes/738123

      • jacquesm 11 hours ago

        I'm confused about the claim though. These branches are not at the source level, and test coverage usually is measured at the source level.

      • Deanoumean 15 hours ago

        You didn't understand the argument. The testing is what instills the confidence.

      • Ekaros 11 hours ago

        If a code branch can't ever be taken, doesn't that mean you don't need it? Basically it must be code that will not get executed, so leaving it out does not matter.

        If you can then come up with a scenario where you need it, well, in fully tested code you do need to test it.

      • dimitrios1 11 hours ago

        There is a whole 'nother level of safety validation that goes beyond your everyday OWASP, or heck even what we consider "highly regulated" industry requirements that 95-99% of us devs care about. SQLite is used in some highly specialized, highly sensitive environments, where they are concerned about bit flips, and corrupted memory. I had the luxury of sitting through Richard Hipp's talk about it one time, but I am certainly butchering it.

    • jonahx 17 hours ago

      So is the argument that safe langs produce stuff like:

          // pseudocode
          if (i >= array_length) panic("index out of bounds")
      
      that are never actually run if the code is correct? But (if I understand correctly) these are checks implicitly added by the compiler. So the objection amounts to questioning the correctness of this auto-generated code, and is predicated upon mistrusting the correctness of the compiler? But presumably the Rust compiler itself would have thorough tests that these kinds of checks work?

      Someone please correct me if I'm misunderstanding the argument.

      • btilly 16 hours ago

        One of the things that SQLite is explicitly designed to do is have predictable behavior in a lot of conditions that shouldn't happen. One of those predictable behaviors is that it does its best to stay up and running, continuing to do the best it can. Conditions where it should succeed in doing this include OOM, the possibility of corrupted data files, and (if possible) misbehaving CPUs.

        Automatic array bounds checks can get hit by corrupted data. Thereby leading to a crash of exactly the kind that SQLite tries to avoid. With complete branch testing, they can guarantee that the test suite includes every kind of corruption that might hit an array bounds check, and guarantee that none of them panic. But if the compiler is inserting branches that are supposed to be inaccessible, you can't do complete branch testing. So now how do you know that you have tested every code branch that might be reached from corrupted data?

        Furthermore those unused branches are there as footguns which are reachable with a cosmic ray bit flip, or a dodgy CPU. Which again undermines the principle of keeping running if at all possible.

        • vlovich123 14 hours ago

          In Rust, at least, you are free to access an array via .get, which returns an Option and avoids the “compiler inserted branch” (which isn’t compiler-inserted, by the way - [] access just implicitly calls unwrap on .get, and sometimes the compiler isn’t able to elide it).

          Also you rarely need to actually access by index - you could just access using functional methods on .iter() which avoids the bounds check problem in the first place.
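
          Roughly, both options look like this (a minimal sketch using only the standard slice APIs; the error type is made up for illustration):

              // Illustrative sketch only: std slice APIs, hypothetical error type.
              #[derive(Debug)]
              struct CorruptPage;

              fn read_byte(page: &[u8], offset: usize) -> Result<u8, CorruptPage> {
                  // .get() returns Option<&u8>: no panic branch, and the caller decides
                  // what a bad offset means (here: report it as corruption).
                  page.get(offset).copied().ok_or(CorruptPage)
              }

              fn checksum(page: &[u8]) -> u64 {
                  // Iterator-based access never indexes, so there is no per-element
                  // bounds check to elide in the first place.
                  page.iter().map(|&b| b as u64).sum()
              }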

          • OptionOfT 13 hours ago

            For slices the access is handled inside of the compiler: https://github.com/rust-lang/rust/blob/235a4c083eb2a2bfe8779...

            I'm checking to see how array access is implemented, whether through deref to slice, or otherwise.

            • vlovich123 12 hours ago

              I had Vec in mind, but regardless nothing forces you to use the bounds-checked variant vs one that returns Option<T>. And if you really are sure the bounds hold you can always use the assume crate or just unwrap_unchecked explicitly.

        • jemmyw 14 hours ago

          Keeping running if possible doesn't sound like the best strategy for stability. If data was corrupted in memory in a way that would cause a bounds check to fail, then carrying on is likely to corrupt more data. Panic, dump a log, let a supervisor program deal with the next step, or a human, but don't keep going and potentially persist corrupted data.

          • btilly 11 hours ago

            What the best strategy is depends on your use case.

            The use case that SQLite has chosen to optimize for is critical embedded software. As described in https://www.sqlite.org/qmplan.html, the standard that they base their efforts on is a certification for use in aircraft. If mission critical software on a plane is allowed to crash, this can render the controls inoperable. Which is likely to lead to a very literal crash some time later.

            The result is software that has been optimized to do the right thing if at all possible, and to degrade gracefully if that is not possible.

            Note that the open source version of SQLite is not certified for use in aviation. But there are versions out there that have been certified. (The difference is a ton of extra documentation.) And in fact SQLite is in use by Airbus. Though the details of what exactly for are not, as far as I know, public.

            If this documented behavior is not what you want for your use case, then you should consider using another database. Though, honestly, no other database comes remotely close when it comes to software quality. And therefore I doubt that "degrade as documented rather than crash" is a good reason to avoid SQLite. (There are lots of other potential reasons for choosing another database.)

            • Groxx 10 hours ago

              outside political definitions, I'm not sure "crash and restart with a supervisor" and "don't crash" are meaningfully different? they're both error-handling tactics, likely perfectly translatable to each other, and Erlang stands as an existence proof that crashing is a reasonable strategy in extremely reliable software.

              I fully recognize that political definitions drive purchases, so it's meaningful to a project either way. but that doesn't make it a valid technical argument.

              • btilly 2 hours ago

                Yes, Erlang demonstrates that "crash and restart with a supervisor" is a potentially viable strategy to reliability.

                But the choice is not just political. There are very meaningful technical differences for code that potentially winds up embedded in other software, and could be inside of literal embedded software.

                The first is memory. It takes memory to run whatever is responsible for detecting the crash, relaunching, and starting up a supervisor. This memory is not free. Which is one of the reasons why Erlang requires at a minimum 10 MB or so of memory. By contrast the overhead of SQLite is something like half a MB. This difference is very significant for people putting software into medical devices, automotive controllers, and so on. All of which are places where SQLite is found, but Erlang isn't.

                The second is concurrency. Erlang's concurrency model leaks - you can't embed it in software without having to find a way to fit Erlang concurrency in. This isn't a problem if Erlang already is in your software stack. But that's an architectural constraint that would be a problem in many of the contexts that SQLite is actually used in.

                Remember, SQLite is not optimized for your use case. It is optimized for embedded software that needs to try to keep running when things go wrong. It just happens to be so good that it is useful for you.

              • Izkata 2 hours ago

                If the cause of the crash is in any way related to the persisted data, there's a good chance you're now stuck in a crashloop.

                If it can avoid crashing, other functionality may continue to work fine.

          • hoppp 14 hours ago

            It still needs to detect that there is corrupted data and dump the log, and the supervisor would not be the best option if it were external, since in some runtimes it could be missing. So they just build it in, and we come full circle.

      • NobodyNada 16 hours ago

        > But (if I understand correctly) these are checks implicitly added by the compiler.

        This is a dubious statement. In Rust, the array indexing operator arr[i] is syntactic sugar for calling the function arr.index(i), and the implementation of this function on the standard library's array types is documented to perform a bounds-check assertion and access the element.

        So the checks aren't really implicitly added -- you explicitly called a function that performs a bounds check. If you want different behavior, you can call a different, slightly-less-ergonomic indexing function, such as `get` (which returns an Option, making your code responsible for handling the failure case) or `get_unchecked` (which requires an unsafe block and exhibits UB if the index is out of bounds, like C).
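
        For concreteness, the three flavors in a slice-based sketch (the up-front assert is mine, just so the SAFETY claim in the unsafe block actually holds):

            fn pick(arr: &[u32], i: usize) -> u32 {
                assert!(i < arr.len()); // up-front check so all three accesses below are in bounds

                // 1. Panicking form: sugar for Index::index, which bounds-checks.
                let a = arr[i];

                // 2. Fallible form: the caller decides what out-of-bounds means.
                let b = *arr.get(i).expect("unreachable: checked above");

                // 3. Unchecked form: no branch at all, but requires `unsafe`.
                // SAFETY: the assert above guarantees i < arr.len().
                let c = unsafe { *arr.get_unchecked(i) };

                a + b + c
            }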

        • nubbler 14 hours ago

          Another commenter in this thread used the phrase "complex abomination" which seems more and more apt the more I learn about Rust.

          • J_Shelby_J an hour ago

            Nothing in this world is perfect, but this behavior is less of an abomination than whatever a junior dev on a timeline might write to handle this condition.

      • binary132 16 hours ago

        I think it’s less like doubting that the given panic works and more like an extremely thorough proof that all possible branches of the control flow have acceptable behavior. If you haven’t tested a given control flow, the issue is that it’s possible that the end result is some indeterminate or invalid state for the whole program, not that the given bounds check doesn’t panic the way it’s supposed to. On embedded for example (which is an important usecase for SQLite) this could result in orphaned or broken resources.

        • jonahx 16 hours ago

          > I think it’s less like doubting that the given panic works and more like an extremely thorough proof that all possible branches of the control flow have acceptable behavior.

          The way I was thinking about it was: if you somehow magically knew that nothing added by the compiler could ever cause a problem, it would be redundant to test those branches. Then wondering why a really well tested compiler wouldn't be equivalent to that. It sounds like the answer is, for the level of soundness sqlite is aspiring to, you can't make those assumptions.

        • thayne 16 hours ago

          But does it matter if that control flow is unreachable?

          If the check never fails, it is logically equivalent to not having the check. If the code isn't "correct" and the panic is reached, then the equivalent c code would have undefined behavior, which can be much worse than a panic.

          • nubbler 14 hours ago

            In the first case, if it is actually unreachable, I would never want that code ending up in my binary at all. It must be optimised out.

            Your second case implies that it is reachable.

            • thayne an hour ago

              In the first case, it often is optimized out. But the optimizer isn't perfect, and can't detect every case where it is unreachable.

              If you have the second case, I would much rather have a panic than undefined behavior. As mentioned in another comment, in C indexing an array is semantically equivalent to:

                  if (i < len(arr)) arr[i] else UB()
              
              In fact a C compiler could put in a check and abort if the index is out of bounds, like Rust does, and still be in spec. But the undefined behavior could also cause memory corruption, or cause some other subtle bug.
      • sixthDot 4 hours ago

        Bounds checks are usually conditionally compiled. They're more a kind of "contract" you verify during testing. In the end the software actually used will not check anything.

            #ifdef CONTRACTS
            if (i >= array_length) panic("index out of bounds")
            #endif
      • oconnor663 17 hours ago

        > questioning the correctness of this auto-generated code

        I wouldn't put it that way. Usually when we say the compiler is "incorrect", we mean that it's generating code that breaks the observable behavior of some program. In that sense, adding extra checks that can't actually fail isn't a correctness issue; it's just an efficiency issue. I'd usually say the compiler is being "conservative" or "defensive". However, the "100% branch testing" strategy that we're talking about makes this more complicated, because this branch-that's-never-taken actually is observable, not to the program itself but to its test suite.

      • dathinab 15 hours ago

        No, it's an (accidental) red herring argument.

        Sure, safety checks are added, but

        it ignores that many such checks get reliably optimized away.

        Worse, it's a bit like saying "in case of a broken invariant I prefer arbitrary, potentially highly problematic behavior over clean aborts (or errors) because my test tooling is inadequate"

        instead of saying "we haven't found adequate test tooling for our use case".

        Why inadequate? Because technically test setups can use

        1. fault injection to test such branches even if normally you would never hit them

        2. for many of such tests (especially array bound checks) you can pretty reliably identify them and then remove them from your test coverage statistic

        I don't know what the Rust tooling for this looks like in 2025, but around Rust 1.0 you mainly had C tooling applied to Rust, so you had problems like that back then.

      • lionkor 17 hours ago

        It's not like that, the compiler explicitly doesn't do compile-time checks here and offloads those to the runtime.

        Rust does not stop you from writing code that accesses out of bounds, at all. It just makes sure that there's an if that checks.

        • selcuka 16 hours ago

          Ok, but you can still test all the branches in your source code and have 100% coverage. Those additional `if` branches are added by the compiler. You are responsible for testing the code you write, not the one that actually runs. Your compiler's test suite is responsible for the rest.

          By the same logic one could also claim that tail recursion optimisation, or loop unrolling are also dangerous because they change the way code works, and your tests don't cover the final output.

          • binary132 16 hours ago

            If they produce control flow _in the executable binary_ that is untested, then they could conceivably lead to broken states. I don’t believe most of those sorts of transformations cause alternative control flows to be added to the executable binary.

            I don’t think anyone would find the idea compelling that “you are only responsible for the code you write, not the code that actually runs” if the code that actually runs causes unexpected invalid behavior on millions of mobile devices.

          • tialaramex 5 hours ago

            I believe there's a Rust RFC for a way to write mandatory tail calls with the become keyword. So then the code is actually defined to have a tail call, if it can't have a tail call it won't compile, if it can have one then that's what you get.

            Some languages I was aware of are defined so that if what you wrote could be a tail call it is. However you might write code you thought was a tail call and you were wrong - in such languages it only blows up when it recurses too deep and runs out of stack. AIUI the Rust feature would reject this code.

          • foul 16 hours ago

            >You are responsible for testing the code you write, not the one that actually runs.

            Hipp worked as a military contractor for battleships; furthermore, years later SQLite was under contract with every proto-smartphone company in the USA. Under these constraints you maybe are not responsible for testing what the compiler spits out across platforms and different compilers, but doing so makes the project a lot more reliable, and makes it sexier for embedded and weapons.

          • unclad5968 16 hours ago

            I don't see anything wrong with taking responsibility for the code that actually runs. I would argue that level of accountability has played a part in SQLite being such a great project.

          • estebank 16 hours ago

            > You are responsible for testing the code you write, not the one that actually runs.

            This is not correct for every industry.

    • anitil 17 hours ago

      It's the sort of argument that I wouldn't accept from most people and most projects, but Dr Hipp isn't most people and SQLite isn't most projects.

      • cogman10 16 hours ago

        It's a bad argument.

        Certainly don't get me wrong, SQLite is one of the best and most thoroughly tested libraries out there. But this reads like an argument included just to have 4 arguments. That's because 2 of the arguments break down as "Those languages didn't exist when we first wrote SQLite and we aren't going to rewrite the whole library just because a new language came around."

        Any language, including C, will emit or not emit instructions that are "invisible" to the author. For example, whenever the C compiler decides it can autovectorize a section of a function it'll be introducing a complicated set of SIMD instructions and new invisible branch tests. That can also happen if the C compiler decides to unroll a loop for whatever reason.

        The entire point of compilers and their optimizations is to emit instructions which keep the semantic intent of higher level code. That includes excluding branches, adding new branches, or creating complex lookup tables if the compiler believes it'll make things faster.

        Dr Hipp is completely correct in rejecting Rust for SQLite. Sqlite is already written and extremely well tested. Switching over to a new language now would almost certainly introduce new bugs that don't currently exist as it'd inevitably need to be changed to remain "safe".

        • Ferret7446 14 hours ago

          > Any language, including C, will emit or not emit instructions that are "invisible" to the author

          Presumably this is why they do 100% test coverage. All of those instructions would be tested and not invisible to the test suite

          • cogman10 6 hours ago

            How could they know? Any changes to the compiler will potentially generate new code.

            A new compiler, new flags, a new version. These all can create new invisible untested branches.

            • joshkel 4 hours ago

              The way you know is by running the full SQLite test suite, with 100% MC/DC coverage (slightly stricter than 100% branch coverage), on each new compiler, version, and set of flags you intend to support. It's my understanding that this is the approach taken by the SQLite team.

              Dr. Hipp's position is paraphrased as, “I cannot trust the compilers, so I test the binaries; the source code may have UBs or run into compiler bugs, but I know the binaries I distribute are correct because they were thoroughly tested" at https://blog.regehr.org/archives/1292. There, Dr. John Regehr, a researcher in undefined behavior, found some undefined behavior in the SQLite source code, which kicked off a discussion of the implications of UB given 100% MC/DC coverage of the binaries of every supported platform.

              (I suppose the argument at this point is, "Users may use a new compiler, flag, or version that creates untested code, but that's not nearly as bad as _all_ releases and platforms containing untested code.")
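
              A toy illustration of the branch coverage vs. MC/DC distinction, sketched in Rust rather than SQLite's C purely for brevity:

                  // `&` on bools does not short-circuit, so this is one decision
                  // (and typically one conditional branch in the generated code).
                  fn check(is_admin: bool, has_token: bool) -> &'static str {
                      if is_admin & has_token { "allowed" } else { "denied" }
                  }

                  // Branch coverage: the one branch must go both ways, so two tests
                  // can suffice, e.g. (true, true) and (false, false).
                  // MC/DC: each condition must be shown to independently flip the
                  // outcome, which forces a third test:
                  //   (true, true) vs (false, true)  -- is_admin flips the result
                  //   (true, true) vs (true, false)  -- has_token flips the result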

        • ynik 5 hours ago

          Autovectorization / unrolling can maybe still be handled with a couple of additional tests. The main problem I see with doing branch coverage on compiled machine code is inlining: instead of two tests for one branch, you now need two tests for each function that a copy of the branch was inlined into.

        • manwe150 15 hours ago

          If it was as completely tested as claimed, then switching to rust would be trivial. All you need to do is pass the test suite and all bugs would be gone. I can think of other reasons not to jump to rust (it is a lot of code, sqlite already works well, and test coverage is very good but also incomplete, and rust only solves a few correctness problems)—just not because of claiming sqlite is already tested enough to be bug free of the kinds of issues that rust might actually prevent.

          • dathinab 15 hours ago

            > to rust would be trivial.

            no, you still need to rewrite, re-optimize, etc. everything

            it would make it much easier to be fully compatible, sure, but that doesn't make it trivial

            Furthermore, parts of its (mostly internal) design are strongly influenced by C-specific dev-UX aspects, so you wouldn't write them the same way, and tests for them (as opposed to integration tests) may not apply.

            Which in general also means that you most likely would break some special-purpose/unusual users who have "brittle" (not guaranteed) assumptions about SQLite.

            If you have code which changes very little, if at all, and has no major issues, don't rewrite it.

            But most of the new "external" things written around SQLite, alternative VFS implementations, etc., tend to be at most partially written in C.

          • jacquesm 11 hours ago

            > If it was as completely tested as claimed

            It is.

            > then switching to rust would be trivial

            So prove it. Hint: it's not trivial.

    • hypeatei 17 hours ago

      Couldn't a method like `get_unchecked()` be used to avoid the bounds check[0] if you know it's safe?

      0: https://doc.rust-lang.org/std/vec/struct.Vec.html#method.get...

      • oconnor663 16 hours ago

        Yes. You have to write `unsafe { ... }` around it, so there's an ergonomic penalty plus a more nebulous "sense that you're doing something dangerous that might get some skeptical looks in code review" penalty, but the resulting assembly will be the same as indexing in C.

        • hypeatei 16 hours ago

          I figured, but I guess I don't understand this argument then. SQLite as a project already spends a lot of time on quality so doing some `unsafe` blocks with a `// SAFETY:` comment doesn't seem unreasonable if they want to avoid the compiler inserting a panic branch for bounds checks.

          • Ferret7446 14 hours ago

            If you put unsafe around almost all of your code (array indexing) aren't you better off just writing C?

            • aw1621107 9 hours ago

              Perhaps if the only thing you're doing is array indexing? Though I'm not sure that would apply in this particular case anyways.

          • tomjakubowski 12 hours ago

            In many cases LLVM can prove the bounds check is redundant or otherwise unnecessary and will optimize it away.
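
            A common pattern that helps it along (whether the checks are actually elided still depends on the compiler version and optimization level):

                fn sum_first_four(v: &[u64]) -> u64 {
                    // One up-front check; the optimizer can usually prove the four
                    // indexed accesses below are in bounds and drop their checks.
                    assert!(v.len() >= 4);
                    v[0] + v[1] + v[2] + v[3]
                }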

    • ChadNauseam 17 hours ago

      I wonder if this problem could be mitigated by not requiring coverage of branches that unconditionally lead to panics, or if there could be some kind of marking on those branches to indicate that they should never occur in correct code.

      • accelbred 16 hours ago

        You'd want to statically prove that any panic is unreachable

    • jkafjanvnfaf 10 hours ago

      It's new because it makes no sense.

      There already is an implicit "branch" on every array access in C, it's called an access violation.

      Do they test for a segfault on every single array access in the code base? No? Then they don't really have 100% branch coverage, do they?

      • prein 2 hours ago

        Take a look at their description of how SQLite is tested: https://www.sqlite.org/testing.html

        I think a lot of projects that claim to have 100% coverage are overselling their testing, but SQLite is in another category of thoroughness entirely.

    • beached_whale 17 hours ago

      I think those branches are often not there because it's provably never going out of bounds. There are ways to ensure the compiler knows the bounds cannot be broken.

    • NobodyNada 16 hours ago

      It's interesting to consider (and the whole page is very well-reasoned), but I don't think that the argument holds up to scrutiny. If such an automatic bounds-check fails, then the program would have exhibited undefined behavior without that branch -- and UB is strictly worse than an unreachable branch that does something well-specified like aborting.

      A simple array access in C:

          arr[i] = 123;
      
      ...can be thought of as being equivalent to:

          if (i >= array_length) UB();
          else arr[i] = 123;
      
      where the "UB" function can do literally anything. From the perspective of exhaustively testing and formally verifying software, I'd rather have the safe-language equivalent:

          if (i >= array_length) panic();
          else arr[i] = 123;
      
      ...because at least I can reason about what happens if the supposedly-unreachable condition occurs.

      Dr. Hipp mentions that "Recoding SQLite in Go is unlikely since Go hates assert()", implying that SQLite makes use of assert statements to guard against unreachable conditions. Surely his testing infrastructure must have some way of exempting unreachable assert branches -- so why can't bounds checks (that do nothing but assert undefined behavior does not occur) be treated in the same way?
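
      For what it's worth, Rust has a rough analogue of that assert() discipline; whether it satisfies SQLite's testing requirements is a separate question:

          fn drop_tail(buf: &mut Vec<u8>, n: usize) {
              // Like C's assert() under NDEBUG, debug_assert! compiles to nothing
              // in release builds, so the release binary has no extra branch here.
              debug_assert!(n <= buf.len(), "caller guaranteed n <= len");
              buf.truncate(buf.len().saturating_sub(n));
          }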

      • eesmith 10 hours ago

        The 100% branch testing is on the compiled binary. To exempt unreachable assert branches, turn off assertions, compile, and test.

        A more complex C program can have index range checking at a different place than the simple array access. The compiler's flow analysis isn't always able to confirm that the index is guaranteed to be checked. If it therefore adds a cautionary (and unneeded) range check, then this code branch can never be exercised, making the code no longer 100% branch tested.

    • dathinab 16 hours ago

      The problem is that it's kind of an anti-argument.

      You basically say that if deeply unexpected things happen, you prefer your program doing wildly arbitrary and as such potentially dangerous things over it having a clean abort or proper error... that doesn't seem right.

      Worse, it's due to a lack in the tooling used and not a fundamental problem: not only can you test these branches (using fault injection), you can also often (not always) separate them from relevant branches when collecting the branch statistics.

      So the whole argument misses the point (which is that the tooling is lacking, not that extra checks for array bounds and similar are bad).

      Lastly, array bounds checking is probably the worst example they could have given, as it

      - often can be disabled/omitted in optimized builds

      - is quite often optimized away

      - has often quite low perf. overhead

      - bound check branches are often very easy to identify, i.e. excluding them from a 100% branch testing statistic is viable

      - out of bounds read/write are some of the most common cases of memory unsafety leading to security vulnerability (including full RCE cases)

      • sgbeal 6 hours ago

        > you prefer your program doing wildly arbitrary and as such potentially dangerous things over it having a clean abort or proper error.

        SQLite isn't a program, it's a library used by many other programs. As such, aborting is not an option. It doesn't do "wildly arbitrary" things - it reports errors to the client application and takes it on faith that they will respond appropriately.

    • coolThingsFirst 16 hours ago

      This is a dumb argument; it's like saying that for a perfect human being there's no need for smart pointers, garbage collection, or the borrow checker.

      • ChrisRR 7 hours ago

        I can't figure out how you've come to that equivalence

    • kazinator 15 hours ago

      > In incorrect code, the branches are taken, but code without the branches just behaves unpredictably.

      It's like seat belts.

      E.g. what if we drive four blocks and then the case occurs where the seatbelt is needed? Okay, we have an explicit test for that.

      But we cannot test everything. We have not tested what happens if we drive four blocks, and then take a right turn, and hit something half a block later.

      Screw it, just remove the seatbelts and not have this insane untested space whereby we are never sure whether the seat belt will work properly and prevent injury!

  • DarkNova6 17 hours ago

    > All that said, it is possible that SQLite might one day be recoded in Rust. Recoding SQLite in Go is unlikely since Go hates assert(). But Rust is a possibility. Some preconditions that must occur before SQLite is recoded in Rust include:

    - Rust needs to mature a little more, stop changing so fast, and move further toward being old and boring.

    - Rust needs to demonstrate that it can be used to create general-purpose libraries that are callable from all other programming languages.

    - Rust needs to demonstrate that it can produce object code that works on obscure embedded devices, including devices that lack an operating system.

    - Rust needs to pick up the necessary tooling that enables one to do 100% branch coverage testing of the compiled binaries.

    - Rust needs a mechanism to recover gracefully from OOM errors.

    - Rust needs to demonstrate that it can do the kinds of work that C does in SQLite without a significant speed penalty.

    • steveklabnik 17 hours ago

      1. Rust has had ten years since 1.0. It changes in backward compatible ways. For some people, they want no changes at all, so it’s important to nail down which sense is meant.

      2. This has been demonstrated.

      3. This one hinges on your definition of “obscure,” but the “without an operating system” bit is unambiguously demonstrated.

      4. I am not an expert here, but given that you’re testing binaries, I’m not sure what is Rust specific. I know the Ferrocene folks have done some of this work, but I don’t know the current state of things.

      5. Rust as a language does no allocation. This OOM behavior is in the standard library, which you’re not using in these embedded cases anyway. There, you’re free to do whatever you’d like, as it’s all just library code (see the sketch after this list).

      6. This also hinges on a lot of definitions, so it could be argued either way.
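
      On point 5, a minimal sketch of fallible allocation with the stable try_reserve API (not claiming this is everything SQLite would need, just that graceful OOM handling is expressible in library code):

          use std::collections::TryReserveError;

          // Append a chunk, reporting allocation failure to the caller instead of
          // aborting -- roughly the "recover gracefully from OOM" shape.
          fn append_chunk(buf: &mut Vec<u8>, chunk: &[u8]) -> Result<(), TryReserveError> {
              buf.try_reserve(chunk.len())?; // fallible allocation
              buf.extend_from_slice(chunk); // capacity already reserved, so no abort here
              Ok(())
          }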

      • dathinab 14 hours ago

        > 2.

        Ironically, if we look at how things play out in practice, Rust is far more suited as a general-purpose language than C, to the point where I would argue C is only a general-purpose language on a technicality, not on a practical IRL basis.

        This is especially ridiculous when they argue C is the fastest general-purpose language, when that has proven simply not to hold up in larger IRL projects (i.e. not micro-benchmarks).

        C has terrible UX for generic code reuse and memory management, which often means that in IRL projects people don't write the fastest code. Wrt. memory management, it's not rare to see unnecessary copies, as avoiding them too easily leads to bugs. Wrt. data structures, you write the code which is maintainable, robust, and fast enough, and sometimes add the 10th maximally simple reimplementation (or C macro or similar) of some data structure instead of reusing data structures people have spent years fine-tuning.

        When people switched a lot from C to C++ most general purpose projects got faster, not slower. And even for the C++ to Rust case it's not rare that companies end up with faster projects after the switch.

        Both C++ and Rust also allow more optimization in general.

        So C is only fastest in micro benchmarks after excluding stuff like fortran for not being general purpose while itself not really being used much anymore for general purpose projects...

      • drnick1 12 hours ago

        I think Rust (and C++) are just too complicated and visually ugly, and ultimately that hurts the maintainability of the code. C is simple, universal, and arguably beautiful to look at.

        • metaltyphoon 11 hours ago

          These are all opinions.

        • simonask 8 hours ago

          C is simple. As a result, programming in C is not simple in any way.

        • krior 11 hours ago

          C is so simple that you will need to read a 700-page, committee-written manual before you can attempt to write it correctly.

          • guest_reader 4 hours ago

            > C is so simple that you will need to read a 700-page, committee-written manual before you can attempt to write it correctly.

            The official C99 standard document is typically about 210 pages.

      • hoppp 14 hours ago

        Rust has dependency hell and supply chain attacks like with npm.

        • jeroenhd 7 hours ago

          C has the same problem, but it lacks a common package manager like other languages have. Just because you need to clone git submodules or run package manager commands (hope you're on a supported OS version!) doesn't mean C doesn't have dependency management issues.

          C projects avoiding dependencies entirely just end up reimplementing work. You can do that in any language.

        • mamcx 12 hours ago

          But it is optional. For this kind of project, it is logical to adopt something like the tiger battle ethos and own all the code with no external deps (or vendor them). Even write your own std if you want to.

          Is it hard work? Sure, but it is not that different from what you see in certain C projects that don't use external deps either.

          • bsder 10 hours ago

            Tigerbeetle. Your autocorrect really mangled that one ...

        • krior 11 hours ago

          The lack of dependency hell is a bit of an illusion when it comes to C. What other languages solve via libraries, most C projects will reimplement themselves, which of course increases the chance of bugs.

        • steveklabnik 14 hours ago

          You control the dependencies you put in Cargo.toml.

          • hoppp 13 hours ago

            What about the dependencies of your dependencies?

            I don't put too many things in Cargo.toml and it still pulls like a hundred things

            • steveklabnik 9 minutes ago

              If you care about not having many dependencies, then whether a candidate dependency itself pulls in many dependencies should factor into which ones you choose.

            • ghosty141 10 hours ago

              Then don't? In C you would just implement everything yourself, so go do that in Rust if you don't want dependencies.

              In C I've seen more half-baked JSON implementations than I can count on my fingers, because using dependencies is so cumbersome in that ecosystem that people just write it themselves, usually with more bugs.

            • rendaw 8 hours ago

              Direct and transitive dependencies are locked and hashed.

            • BrouteMinou 12 hours ago

              Your system is going to be owned, but at least, it's going to be "memory safely" owned!

              P.S.

              That is, if you don't count all the unsafe sections scattered everywhere in all those dependencies.

      • gerdesj 16 hours ago

        "1. Rust has had ten years since 1.0. ..."

        Rust insists on its own toolchain manager "rustup" and frowns on distro maintainers. When Rust is happy to just be packaged by the distro and rustup has gone away, then it will have matured to at least adolescence.

        • steveklabnik 15 hours ago

          Rust has long worked with distro package maintainers, and as far as I know, Rust is packaged in every major Linux distribution.

          There are other worlds out there than Linux.

          • gerdesj 15 hours ago

            So why insist on rustup?

            • dathinab 14 hours ago

              different goals

              The Rust version packaged in distros is for compiling the Rust code shipped as part of the distro. This means it:

              - is normally not the newest version (which, to be clear, is not bad per se, but not necessarily what you need)

              - might not have all optional components (e.g. no clippy)

              But if you, say, write a server deployed by your company:

              - you likely want all components

              - you don't need to care what version the distro pinned

              - you have little reason not to use the latest Rust compiler

              For other use cases you have other reasons: some need nightly Rust, some want to test against beta releases, some want to be able to test against different Rust versions, etc.

              rustup exists (today) for the same reason a lot of dev projects use project-specific copies of all kinds of tooling and libraries that do not match whatever their distro ships: the distro use case and the generic dev use case have diverging requirements! (Other examples: nvm (Node), Flutter, Java, etc.)

              Also, some distros are notorious for shipping outdated software (Debian "stable").

              And not everything is Linux; rustup works on OSX too.

            • steveklabnik 14 hours ago

              Distributions generally package the versions of compilers that are needed to build the programs in their package manager. However, many developers want more control than that. They may want to use different versions of the compiler on different projects, or a different version than what’s packaged.

              Basically, people use it because they prefer it.

            • gspr 8 hours ago

              I'm a Debian Developer, and do some Rust both professionally and for fun. I restrict myself to using only libraries and tooling from Debian. The experience is quite OK. And I find the Rust language team to be quite friendly and sympathetic to our needs.

              Rather, what makes it hard is the culture and surrounding ecosystem of pinned versions or the latest of everything. That's probably in part the fault of Rustup being recommended, I agree. But it's not nefarious.

      • csande17 15 hours ago

        One question towards maturity: has any working version of the Rust compiler ever existed? By which I mean one that successfully upholds the memory-safety guarantees Rust is supposed to make, and does not have any "soundness holes" (which IIRC were historically used as a blank check / excuse to break backwards compatibility).

        The current version of the Rust compiler definitely doesn't -- there's known issues like https://github.com/rust-lang/rust/issues/57893 -- but maybe there's some historical version from before the features that caused those problems were introduced.

        • dathinab 14 hours ago

          Has there ever been a modern optimizing C compiler free of pretty serious bugs? (It's a rhetorical question; there hasn't been one.)

        • steveklabnik 15 hours ago

          Every compiler has soundness bugs. They’re just programs like any other. This isn’t exclusive to Rust.

          • csande17 15 hours ago

            In general, the way Rust blurs the line between "bugs in the compiler" and "problems with how the language is designed" seems pretty harmful and misleading. But it's also a core part of the marketing strategy, so...

            • steveklabnik 14 hours ago

              What makes you say this is a core part of the marketing strategy? I don’t think Rust’s marketing has ever focused on compiler bugs or their absence.

              • csande17 14 hours ago

                You are correct that Rust's marketing does not claim that there are no bugs in its compiler. In fact it does the opposite: it suggests that there are no problems with the language, by asserting that any observed issue in the language is actually a bug in the compiler.

                Like, in the C world, there's a difference between "the C specification has problems" and "GCC incorrectly implements the C specification". You can make statements about what "the C language" does or doesn't guarantee independently of any specific implementation.

                But "the Rust language" is not a specification. It's just a vague ideal of things the Rust team is hoping their compiler will be able to achieve. And so "the Rust language" gets marketed as e.g. having a type system that guarantees memory safety, when in fact no such type system has been designed -- the best we have is a compiler with a bunch of soundness holes. And even if there's some fundamental issue with how traits work that hasn't been resolved for six years, that can get brushed off as merely a compiler bug.

                This propagates down into things like Rust's claims about backwards compatibility. Rust is only backwards-compatible if your programs are written in the vague-ideal "Rust language". The Rust compiler, the thing that actually exists in the real world, has made a lot of backwards-incompatible changes. But these are by definition just bugfixes, because there is no such thing as a design issue in "the Rust language", and so "the Rust language" can maintain its unbroken record of backwards-compatibility.

                • aw1621107 12 hours ago

                  > And even if there's some fundamental issue with how traits work that hasn't been resolved for six years, that can get brushed off as merely a compiler bug.

                  Is it getting brushed off as merely a compiler bug? At least if I'm thinking of the same bug as you [0] the discussion there seems to be more along the lines of the devs treating it as a "proper" language issue, not a compiler bug. At least as far as I can tell there hasn't been a resolution to the design issue, let alone any work towards implementing a fix in the compiler.

                  The soundness issue that I see more frequently get "brushed off as merely a compiler bug" is the lifetime variance one underpinning cve-rs [1], which IIRC the devs have long decided what the proper behavior should be but actually implementing said behavior is blocked behind some major compiler reworks.

                  > has made a lot of backwards-incompatible changes

                  Not sure I've seen much evidence for "a lot" of compatibility breaks outside of the edition system. Perhaps I'm just particularly (un)lucky?

                  > because there is no such thing as a design issue in "the Rust language"

                  I'm not sure any of the Rust devs would agree? Have any of them made a claim along those lines?

                  [0]: https://github.com/rust-lang/rust/issues/57893

                  [1]: https://github.com/Speykious/cve-rs

                  • csande17 11 hours ago

                    > Is it getting brushed off as merely a compiler bug?

                    Yes, this thread contains an example: https://news.ycombinator.com/item?id=45587209 . (I linked the same bug you did in the comment that that's a reply to.)

                    The Rust team may see this as a language design issue internally, and I'd be inclined to agree. Rust's outward-facing marketing does not reflect this view.

                    • aw1621107 9 hours ago

                      > I linked the same bug you did in the comment that that's a reply to

                      Ah, my apologies. Not sure exactly how I managed to miss that.

                      That being said, I guess I might have read that bit of your comment different than you had in mind; I was thinking of whether the Rust devs were dismissing language design issues as compiler bugs, not what third parties (albeit one with an unusually relevant history in this case) may think.

                      > Rust's outward-facing marketing does not reflect this view.

                      As above, perhaps I interpret the phrase "outward-facing marketing" differently than you do. I typically associate that (and "marketing" in general, in this context) with more official channels, whether that's official posts or posts by active devs in an official capacity.

                      • csande17 9 hours ago

                        Oh, I didn't realize steveklabnik wasn't an official member of the project anymore (as of 2022 apparently: https://blog.rust-lang.org/2022/01/31/changes-in-the-core-te... ). I do think he still expressed this position back when he was a major public face of the language, but it seems unfair to single him out and dig through his comment history.

                        Rust's marketing is pretty grassroots in general, but even current official sources like https://rust-lang.org/ say things like "Rust’s rich type system and ownership model guarantee memory-safety" that are only true of the vague-ideal "Rust language" and are not true of the type system they actually designed and implemented in the Rust compiler.

                        • aw1621107 7 hours ago

                          Yeah, Steve has been "just" a well-informed third party for a while now. I would be curious if he has commented on that specific issue before; usually when unsoundness comes up it's cve-rs which is mentioned.

                          > but even current official sources like https://rust-lang.org/ say things like "Rust’s rich type system and ownership model guarantee memory-safety" that are only true of the vague-ideal "Rust language" and are not true of the type system they actually designed and implemented in the Rust compiler.

                          That's an understandable point, though I think something similar would arguably still apply even if Rust had a "proper" spec since a "proper" spec doesn't necessarily rule out underspecification/omissions/mistakes/etc, both in the spec and in the implementation. A "real" formal spec à la WebAssembly might solve that issue, but given the lack of time/resources for a "normal" spec at the time a "real" one would have been a pipe dream at best.

                          That being said, I think it's an interesting question as to what should be done if/when you discover an issue like the trait coherence one, whether you have a spec or not. "Aspirational" marketing doesn't exactly feel nice, but changing your marketing every time you discover/fix a bug also doesn't exactly feel nice for other reasons.

                          Bit of a fun fact - it appears that the particular trait coherence issue actually has existed in some form since Rust 1.0, and was only noticed a few years later when the issue was filed. Perhaps a proper specification effort would have caught it (especially since one of the devs said they had concerns when implementing a relevant check), but given it had taken that long to discover I wouldn't be too surprised if it would have been missed anyway.

                          • csande17 5 hours ago

                            I agree that it's a tough situation. "The type system guarantees memory safety" is an extremely important pillar of Rust's identity. They kind of have to portray all soundness issues as "more compiler bugs than something broken in the language itself" (see eg https://news.ycombinator.com/item?id=21930599 which references a GitHub label that AIUI would've included the trait coherence thing at the time) to keep making that claim. It is a core part of the marketing strategy.

                            • steveklabnik a few seconds ago

                              Yes, so there's a few things going on here: the first is, I absolutely pattern matched on the cve-rs link. Most people bringing that up are trying to bring up a quick gotcha. I did not follow the first link, I assumed it was to a random I-unsound issue. I am not educated on that specific bug at all.

                              I still ultimately think that the framing of Rust being any different than other languages here is actively trying to read the worst into things; Rust is working on having a spec, and formally proving things out. This takes a long time. But it's still ongoing. That doesn't mean Rust marketing relies on lying, I don't think most people even understand "soundness" at all, let alone assume that when Rust says "there's no UB in safe code" or similar that there's a promise of zero soundness bugs or open questions. That backwards incompatible changes are made in spite of breaking code at times to fix soundness issues is an acknowledgement of how sometimes there are in fact bugs, this doesn't change that for virtually all Rust users most of the time, updating the compiler is without fanfare, and so in practice, it is backwards compatible. I have heard of people struggling to update their C or C++ compilers to new standards, that doesn't mean that those languages are horribly backwards incompatible, just that there is a spectrum here, and being on one side of it as close as realistically possible doesn't mean that it's a lie.

                              But, regardless of all of that, it does appear that the issue you linked specifically may be not just a bug, but a real issue. That's my bad, and I'll try to remember that specific bug in the future.

      • QuiEgo 15 hours ago

        > Rust has had ten years since 1.0. It changes in backward compatible ways. For some people, they want no changes at all, so it’s important to nail down which sense is meant.

        I’d love to see Rust be so stable that MSRV is an anachronism. I want it to be unthinkable that you would have any reason not to support Rust from forever ago, because the feature set is so stable.

      • wrs 16 hours ago

        For a little more color on 5, as a user of no_std Rust on embedded processors I use crates like heapless or trybox that provide Vec, String, etc. APIs like the std ones, but fallible.

        Of course, two libraries that choose different no_std collection types can't communicate...but hey, we're comparing to C here.
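
        A minimal sketch of that fallible pattern, assuming the heapless crate: capacity is fixed at compile time and insertion returns a Result instead of aborting on exhaustion.

            use heapless::Vec;

            fn collect_ids() -> Result<Vec<u32, 8>, u32> {
                let mut ids: Vec<u32, 8> = Vec::new();
                for id in 0..4u32 {
                    // push returns Err(rejected_item) once the fixed capacity is full.
                    ids.push(id)?;
                }
                Ok(ids)
            }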

        • dathinab 14 hours ago

          Even OOM isn't that different.

          There are some things you can handle well in C, and those things you can do in Rust too, though with a bit of pain and limitations on how you write the Rust. And then there is the rest, which looks "hard but doable" in C, but the more you learn about it the more it becomes an "uh, wtf" nightmare case where "let's kill+restart and have robustness even in the presence of the process/error kernel dying" is nearly always the right answer.

    • casparvitch 17 hours ago

      Why can't `if condition { panic(err) }` be used in Go as an assert equivalent?

      • Jtsummers 13 hours ago

        Because C's assert gets compiled out if you have NDEBUG defined in your program. How do you do conditional compilation in Go (at the level of conditionally including or not including a statement)?

        • echoangle 7 hours ago

          > How do you do conditional compilation in Go (at the level of conditionally including or not including a statement)?

          https://stackoverflow.com/questions/36703867/golang-preproce...

          Wouldn't this work? Surely the empty function would be removed completely during compilation?

          • Jtsummers 4 hours ago

            That builds, or doesn't build, an entire file. Assert works as a statement. There is no equivalent in Go for conditionally removing just a statement in a function based on a compile-time option.

            • echoangle 3 hours ago

              Can’t you include or not include a function that contains a single assert, and depending on the condition, the function call is removed or included?

              • Jtsummers 3 hours ago

                That defeats the point of asserts. Now you have two copies to keep in sync with each other, whereas asserts are inline with the rest of your code and you have one file that can be built with or without them. They could use a separate tool to produce the assert free version, but that adds tooling beyond what Go provides. Nearly every mainstream language allows you to do this without any extra steps, except Go.
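
                For comparison (a sketch in Rust, the other language the article discusses): a debug_assert! statement is compiled out of release builds much like C's assert under NDEBUG, and #[cfg(debug_assertions)] can strip an arbitrary statement, which is the statement-level conditional compilation Go lacks.

                    fn store(buf: &mut [u8; 16], i: usize, v: u8) {
                        // Checked only when debug_assertions is enabled (debug builds).
                        debug_assert!(i < buf.len());

                        // This whole statement is removed from release builds.
                        #[cfg(debug_assertions)]
                        eprintln!("store {v} at {i}");

                        buf[i] = v;
                    }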

        • casparvitch 7 hours ago

          Ah apologies I misunderstood, thanks

  • dathinab 15 hours ago

    It's kinda sad to read, as most of their arguments might seem right at first but really fall apart under scrutiny.

    Like, why defend C in 2025 when you only have to defend C in 2000 and then argue that you have an old, stable, deeply tested C code base which has no problem with anything like "commonly having memory safety issues" and is maintained by a small group of people very highly skilled in C.

    Like that argument alone is all you need, a win, simple straight forward, hard to contest.

    But most of the other arguments they list can be picked apart and are only half true.

    • mungaihaha 12 hours ago

      > But most of the other arguments they list can be picked apart and are only half true

      I'd like to see you pick the other arguments apart

      • scuff3d 11 hours ago

        > Other programming languages sometimes claim to be "as fast as C". But no other language claims to be faster than C for general-purpose programming, because none are.

        Not OP, and I'm not really arguing with the post, but this struck me as a really odd thing to include in the article. Of course nothing is going to be faster than C, because it compiles straight to machine code with no garbage collection. Literally any language that does the same will be the same speed but not faster, because there's no way to be faster. It's physically impossible.

        A much better statement, and one in line with the rest of the article, would be that at the time C and C++ were really the only viable languages that gave them the performance they wanted, and C++ wouldn't have given them the interoperability they wanted. So their only choice was C.

        • tialaramex 8 hours ago

          "Because none are" is a particularly hollow claim because to support it you have to caveat things so heavily.

          You have to say OK, I allow myself platform-specific intrinsics and extensions even though those aren't standard ISO C, and that includes inline assembler. I can pick any compiler and tooling. And I won't count other languages which are transpiled to C for portability, because hey, in theory I could just write that C myself, couldn't I? So they're not really faster.

          At the end you're basically begging the question. "I claim C is fastest because I don't count anything else as faster" which is no longer a claim worth disputing.

          The aliasing optimisations in Fortran and Rust stand out as obvious examples where to get the same perf in C requires you do global analysis (this is what Rust is side-stepping via language rules and the borrowck) which you can't afford in practice.

          But equally the monomorphisation in C++ or Rust can be beneficial in a similar way, you could in principle do all this by hand in your C project but you won't, because time is finite, so you live without the optimisations.

        • aw1621107 9 hours ago

          I think one additional factor that should be taken into account is the amount of effort required to achieve a given level of performance, as well as what extensions you're willing to accept. C with potentially non-portable constructs (intrinsics, inline assembly, etc.) and an unlimited amount of effort put into it provides a performance ceiling, but it's not inconceivable that other programming languages could achieve an equal level of performance with less effort, especially if you compare against plain standard C. Languages like ISPC that expose SIMD/parallelism in a more convenient manner is one example of this.

          Another somewhat related example is Fortran and C, where one reason Fortran could perform better than C is the restrictions Fortran places on aliasing. In theory, one could use restrict in C to replicate these aliasing restrictions, but in practice restrict is used fairly sparingly, to the point that when Rust tried to enable its equivalent it had to back out the change multiple times because it kept exposing bugs in LLVM's optimizer.
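
          A minimal sketch of that aliasing guarantee: because a &mut slice cannot alias any other live reference, the compiler may assume dst and src are disjoint, which is roughly what restrict promises in C but enforced by the borrow checker rather than by programmer discipline.

              // The optimizer can vectorize this freely without a runtime overlap check.
              fn scale_into(dst: &mut [f64], src: &[f64], k: f64) {
                  for (d, s) in dst.iter_mut().zip(src.iter()) {
                      *d = s * k;
                  }
              }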

    • Deanoumean 15 hours ago

      The argument you propose only works for justifying a maintenance mode for an old codebase. If you want to take the chance to turn new developers away from complex abominations like C++ and Rust and garbage-collected sloths like Java, and get them to consider a comparatively simple but ubiquitous language like C, you have to offer more.

      • dangus 15 hours ago

        Is SQLite looking for new developers? Will they ever need a large amount of developers like a mega-corp that needs to hire 100 React engineers?

        • metaltyphoon 14 hours ago

          No, but as morbid as this sounds, the three(?) devs one day will pass away so now what?

          • sgbeal 4 hours ago

            > No, but as morbid as this sounds, the three(?) devs...

            Two full-time core devs and three part-time "peripheral" devs.

            > ... one day will pass away ...

            And not a one of us are young :/.

          • dangus 3 hours ago

            Well the point is that it’s not hard to find 3 people who are C experts. Yes, even young ones.

          • hoppp 14 hours ago

            Then the rights will be sold to a FAANG, or an open source fork like libSQL will live on.

            • colejohnson66 4 hours ago

              SQLite is public domain (as much as is legally possible). So there's no "rights" to "sell" except the trademark.

              • metaltyphoon 32 minutes ago

                The testing suite is not open, which is one of the most important parts of the project.

    • cmrx64 15 hours ago

      (it’s from 2017)

    • skywhopper 6 hours ago

      I assume they have written this extensive document with lots of details in response to two and a half decades of thousands of “why not rewrite in X?” questions they’ve had to endure.

  • 1vuio0pswjnm7 an hour ago

    Why doesn't ON CONFLICT(column_name) accept multiple arguments, i.e., multiple columns?

    One stupid workaround is combining multiple columns into one, with values separated by a space, for example. This works when each column value is always a string containing no spaces.

    Another stupid workaround, probably slower, might be to hash the multiple columns into a new column and use ON CONFLICT(newcolumn_name).

  • sema4hacker 16 hours ago

    "Why SQLite is coded in C..." is an explanation, as documented at sqlite.org.

    "Why is SQLite coded in C and not Rust?" is a question, which immediately makes me want to ask "Why do you need SQLite coded in Rust?".

  • unsungNovelty 13 hours ago

    As I write more code, use more software, and read about rewrites...

    The biggest gripe I have with a rewrite is that a lot of the time we rewrite for feature parity, not for the exact same thing. So you are kind of ignoring/missing/forgetting all those edge cases and patches that were added along the way for so many niche or other reasons.

    This means broken software. Something which used to work before but doesn't anymore. They'll have to encounter all of those cases again in the wild and fix them again.

    Obviously if we are to rewrite an important piece of software like this, you'd put more emphasis on all of these. But it's hard for me to comprehend whether it will be 100%.

    But other than SQLite, think SDL. If it were to be rewritten, it's really hard for me to believe the effect would be negligible. I'm guessing horrible releases before it gets better, and users complaining about things that used to work.

    C is going to be there long after the next Rust is where my money is. And even if Rust is still present, there would be a new Rust by then.

    So why rewrite? Rewrites shouldn't be the default thinking, no?

  • wodenokoto 17 hours ago

    I think it’s more interesting that DuckDB is written in C++ and not rust than SQLite.

    SQLite is old, huge and known for its gigantic test coverage. There’s just so much to rewrite.

    DuckDB is from 2019, so new enough to have jumped on the "Rust is safe and fast" train.

    • tomjakubowski 12 hours ago

      If I'm remembering a DuckDB talk I attended correctly, they chose C++ because they were most confident in their ability to write clear code in it which would be autovectorized by the compilers they were familiar with. Rust in 2019 didn't have a clear high level SIMD story yet and the developers (wisely) did not want to maintain handrolled SIMD code.

    • jandrewrogers 17 hours ago

      If maximum performance is a top objective, it is probably because C++ produces faster binaries with less code. Modern C++ specifically also has a lot of nice compile-time safety features, especially for database-like code.

      • wodenokoto 14 hours ago

        I can’t verify those claims one way or another, but I’m interested to hear why they were downvoted.

    • tonyhart7 17 hours ago

      if they write it in modern C++ then it's alright tbh

  • Jtsummers 18 hours ago

    Two previous, and substantial, discussions on this page:

    https://news.ycombinator.com/item?id=28278859 - August 2021

    https://news.ycombinator.com/item?id=16585120 - March 2018

    • bravura 16 hours ago

      I'm curious about tptacek's comment (https://news.ycombinator.com/item?id=28279426). 'the "security" paragraphs in this page do the rest of the argument a disservice. The fact is, C is a demonstrable security liability for sqlite.'

      The current doc no longer has any paragraphs about security, or even the word security once.

      The 2021 edition of the doc contained this text which no longer appears: 'Safe languages are often touted for helping to prevent security vulnerabilities. True enough, but SQLite is not a particularly security-sensitive library. If an application is running untrusted and unverified SQL, then it already has much bigger security issues (SQL injection) that no "safe" language will fix.

      It is true that applications sometimes import complete binary SQLite database files from untrusted sources, and such imports could present a possible attack vector. However, those code paths in SQLite are limited and are extremely well tested. And pre-validation routines are available to applications that want to read untrusted databases that can help detect possible attacks prior to use.'

      https://web.archive.org/web/20210825025834/https%3A//www.sql...

  • daxfohl 16 hours ago

    It sounds like the core doesn't even allocate, and presumably the extended library allocates in limited places using safe patterns. So there wouldn't be much benefit from Rust anyway, I'd think. Has SQLite ever had a memory leak or use-after-free bug on a production release? If so, that answers the question. But I've never heard of one.

    Also, does it use doubly linked lists or graphs at all? Those can, in a way, be safer in C since Rust makes you roll your own virtual pointer arena.

    • thinkharderdev 15 hours ago

      > Also, does it use doubly linked lists or graphs at all? Those can, in a way, be safer in C since Rust makes you roll your own virtual pointer arena.

      You can implement a linked list in Rust the same as you would in C using raw pointers and some unsafe code. In fact there is one in the standard library.
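
      A minimal sketch of that C-style approach (illustrative only, not how std::collections::LinkedList is actually written): a raw-pointer node where, exactly as in C, upholding the invariants is entirely the programmer's job.

          struct Node {
              value: i32,
              next: *mut Node, // nullable raw pointer, no borrow checking
          }

          // Allocate a new node that points at the old head; returns the new head.
          fn push_front(head: *mut Node, value: i32) -> *mut Node {
              Box::into_raw(Box::new(Node { value, next: head }))
          }

          // SAFETY: `head` must be null or a pointer previously returned by push_front.
          unsafe fn free_list(mut head: *mut Node) {
              while !head.is_null() {
                  let node = unsafe { Box::from_raw(head) };
                  head = node.next;
              }
          }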

    • steveklabnik 15 hours ago

      Rust’s memory safety guarantees aren’t exclusive to heap allocation. In fact, the language itself doesn’t heap allocate at all.

      You can write a linked list the same way you would in C if you wish.

    • dathinab 14 hours ago

      > Has SQLite ever had a memory leak or use-after-free bug on a production release?

      Sure. It's an old library; they've had pretty much everything (not because they don't know what they are doing, but because shit happens).

      Let's check CVEs of the last few years:

      - CVE-2025-29088 type confusion

      - CVE-2025-29087 out of bounds write

      - CVE-2025-7458 integer overflow, possible in optimized rust but test builds check for it

      - CVE-2025-6965 memory corruption, rust might not have helped

      - CVE-2025-3277 integer overflow, rust might have helped

      - CVE-2024-0232 use after free

      - CVE-2023-36191 segmentation violation, unclear if rust would have helped

      - CVE-2023-7104 buffer overflow

      - CVE-2022-46908 validation logic error

      - CVE-2022-35737 array bounds overflow

      - CVE-2021-45346 memory leak

      ...

      As you can see, the majority of SQLite's CVEs would be much less likely in Rust (though a Rust SQLite implementation would likely use some unsafe, so they're not impossible).

      As a side note, there being so many CVEs in 2025 seems to be related to some companies (e.g. Google) having done quite a bit of fuzz testing of SQLite.

      other takeaways:

      - 100% branch coverage is nice, but doesn't guarantee memory soundness in C

      - given how deeply people look for CVEs in SQLite the number of CVEs found is not at all as bad as it might look

      but also one final question:

      SQLite has some of the best C programmers out there, only they can merge anything into the code, and it has a very limited degree of change compared to a typical company project. And we still see memory vulnerabilities. How is anyone still arguing for C for new projects?

      • daxfohl an hour ago

        Wow that's a great analysis!

        Yeah I essentially agree. I'm sure there are still plenty of good cases for C, depending on project size, experience of the engineers, integration with existing libraries, target platform, etc. But it definitely seems like Rust would be the better option in scenarios where there's not some a priori thing that strongly skews toward or forces C.

      • oguz-ismail 12 hours ago

        > How is anyone still arguing for C for new projects?

        It just works

        • krior 11 hours ago

          That list alone sounds like it does not work.

          • uecker 10 hours ago

            As long as it is possible to produce an OOB in something as simple as a matrix transpose, Rust also does not work: https://rustsec.org/advisories/RUSTSEC-2023-0080.html.

            • dwattttt 7 hours ago

              While a package with 10 million all-time downloads is nothing to sneeze at, it's had one memory corruption bug reported in its ~7 year life.

              It's being compared to a C library that's held to extremely high standards, yet this year had two integer overflow CVEs and two other memory corruption CVEs.

              SQLite is a lot more code, but it's also been around a lot longer.

              • uecker 44 minutes ago

                The point is that a matrix transpose should be trivial. But my main point really is that looking at CVEs is just nonsense. In both cases it is rather meaningless.

  • vincent-manis 17 hours ago

    The point about bounds checking in `safe' languages is well taken: it does prevent 100% branch coverage of the machine code. As we all agree, SQLite has been exhaustively tested, and arguments for bounds checking in it are therefore weakened. Still, that's not an argument for replicating this practice elsewhere, not unless you are Dr Hipp and willing to work very hard at testing. C.A.R. Hoare's comment on eliminating runtime checks in release builds is well taken here: “What would we think of a sailing enthusiast who wears his life-jacket when training on dry land but takes it off as soon as he goes to sea?”

    I am not Dr Hipp, and therefore I like run-time checks.

  • slashdev 17 hours ago

    This is ignoring the elephant in the room: SQLite is being rewritten in Rust and it's going quite well. https://github.com/tursodatabase/turso

    It has async I/O support on Linux with io_uring, vector support, BEGIN CONCURRENT for improved write throughput using multi-version concurrency control (MVCC), Encryption at rest, incremental computation using DBSP for incremental view maintenance and query subscriptions.

    Time will tell, but this may well be the future of SQLite.

    • 3eb7988a1663 16 hours ago

      It should be noted that project has no affiliation with the SQLite project. They just use the name for promotional/aspirational purposes. Which feels incredibly icky.

      Also, this is a VC backed project. Everyone has to eat, but I suspect that Turso will not go out of its way to offer a Public Domain offering or 50 year support in the way that SQLite has.

      • slashdev 15 hours ago

        > They just use the name for promotional/aspirational purposes. Which feels incredibly icky.

        The aim is to be compatible with sqlite, and a drop-in replacement for it, so I think it's fair use.

        > Also, this is a VC backed project. Everyone has to eat, but I suspect that Turso will not go out of its way to offer a Public Domain offering or 50 year support in the way that SQLite has.

        It's MIT license open-source. And unlike sqlite, encourages outside contribution. For this reason, I think it can "win".

        • blibble 6 hours ago

          > The aim is to be compatible with sqlite, and a drop-in replacement for it, so I think it's fair use.

          try marketing your burger company as "The Next Evolution of McDonalds" and see what happens

        • frumplestlatz 9 hours ago

          Calling it “SQLite-compatible” would be one thing. That’s not what they do. They describe it as “the evolution of SQLite”.

          It’s absolutely inappropriate and appropriative.

          They’ve been poor community members from the start when they publicized their one-sided spat with SQLite over their contribution policy.

          The reality is that they are a VC-funded company focused on the “edge database” hypetrain that’s already dying out as it becomes clear that CAP theorem isn’t something you can just pretend doesn’t exist.

          It’ll very likely be dead in a few years, but even if it’s not, a VC-funded project isn’t a replacement for SQLite. It would take incredibly unique advantages to shift literally the entire world away from SQLite.

          It’s a new thing, not the next evolution of SQLite.

    • assimpleaspossi 17 hours ago

      >>SQLite is being rewritten in Rust

      SQLite is NOT being rewritten in Rust!

      >>Turso Database is an in-process SQL database written in Rust, compatible with SQLite.

      • slashdev 15 hours ago

        It's a ground up rewrite. It's not an official rewrite, if that's what you mean. Words are hard.

        • stoltzmann 10 hours ago

          So a reimplementation, not a rewrite.

    • blibble 16 hours ago

      > Time will tell, but this may well be the future of SQLite.

      turdso is VC funded so will probably be defunct in 2 years

      • daxfohl 16 hours ago

        Or, so it's being written mostly by AI.

      • slashdev 15 hours ago

        Could also be an outcome. It is MIT open-source though.

    • lionkor 17 hours ago

      So they have much worse test coverage than sqlite

    • zvmaz 17 hours ago

      In the link you provided, this is what I read: "An in-process SQL database, compatible with SQLite."

      Compatible with SQLite. So it's another database?

      • simonw 17 hours ago

        Yeah, I don't think it even counts as a fork - it's a ground-up re-implementation which is already adding features that go beyond the original.

      • ForHackernews 8 hours ago

        It's a fork and a rewrite.

    • tonyhart7 17 hours ago

      so its sqlite++ since they added bunch of things on top of that

    • metaltyphoon 17 hours ago

      The moment Turso becomes stable, SQLite will inevitably fade away with time if they don’t rethink how contributions should be taken. I honestly believe the Linux philosophy of software development will be what catapults Turso forward.

  • matt3210 17 hours ago

    I can compile c anywhere and for any processor, which can’t be said for rust

  • steeleduncan 8 hours ago

    > SQLite could be recoded in Go

    SQLite was (automatically) recoded in Go a while ago [1], and it is widely deployed

    > would probably introduce far more bugs than would be fixed

    It runs against the same test suite with no issues

    > and it may also result in slower code

    It is quite a lot slower, but it is still widely used as it turns out that the convenience of a native port outweighs the performance penalty in most cases.

    I don't think SQLite should be rewritten in Go, Rust, Zig, Nim, Swift ... but ANSI C is a subset of the feature set of most modern programming languages. Projects such as this could be written and maintained in C indefinitely, and be automatically translated to other languages for the convenience of users in those languages

    [1] https://pkg.go.dev/modernc.org/sqlite

    • sgbeal 6 hours ago

      > It runs against the same test suite with no issues

      It runs against the same public test suite. The proprietary test suite is much more intensive.

    • sim7c00 8 hours ago

      > would probably introduce far more bugs than would be fixed

      It runs against the same test suite with no issues

      - that proves nothing about bugs existing or not.

    • ChrisRR 7 hours ago

      > It runs against the same test suite with no issues

      That doesn't guarantee no bugs. It just means that the existing behaviour covered by the tests is still the same. It may introduce new issues in untested edge cases or performance issues

  • mikece 20 hours ago

    The fact that a C library can easily be wrapped by just about any language is really useful. We're considering writing a library for generating a UUID (that contains a key and value) for reasons that make sense to us, and I proposed writing this in C so we could simply wrap it as a library for all of the languages we use internally rather than having to re-implement it several times. Not sure if we'll actually build this library, but if we do it will be in C (I did manage to get the "wrap it for each language" proposal pre-approved).

    • 01HNNWZ0MV43FF 17 hours ago

      It is. You can also write it in C++ or Rust and expose a C API+ABI, and then you're distributing a binary library that the OS sees as very similar to a C library.

      Occasionally when working in Lua I'd write something low-level in C++, wrap it in C, and then call the C wrapper from Lua. It's extra boilerplate but damn is it nice to have a REPL for your C++ code.

      Edit: Because someone else will say it - Rust binary artifacts _are_ kinda big by default. You can compile libstd from scratch on nightly (it's a couple flags) or you can amortize the cost by packing more functions into the same binary, but it is gonna have more fixed overhead than C or C++.
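
      A minimal sketch of that approach (the function name and signature here are made up for illustration): a Rust function exported with the C ABI, which any language with a C FFI can then call. Building with crate-type = ["cdylib"] (or "staticlib") in Cargo.toml produces the artifact the OS treats like a C library.

          // On edition 2024 this attribute is written #[unsafe(no_mangle)].
          #[no_mangle]
          pub extern "C" fn mylib_checksum(data: *const u8, len: usize) -> u32 {
              if data.is_null() {
                  return 0;
              }
              // SAFETY: the caller must pass a valid pointer to `len` readable bytes.
              let bytes = unsafe { std::slice::from_raw_parts(data, len) };
              bytes.iter().fold(0u32, |acc, &b| acc.wrapping_add(b as u32))
          }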

      • bsder 10 hours ago

        > It is. You can also write it in C++ or Rust and expose a C API+ABI, and then you're distributing a binary library that the OS sees as very similar to a C library.

        If I want a "C Library", I want a "C Library" and not some weird abomination that has been surgically grafted to libstdc++ or similar (but be careful of which version as they're not compatible and the name mangling changes and ...).

        This isn't theoretical. It's such a pain that the C++ folks started resorting to header-only libraries just to sidestep the nightmare.

        • uecker 10 hours ago

          Rust libraries also impose an - in my opinion - unacceptable burden to the open source ecosystem: https://www.debian.org/releases/trixie/release-notes/issues....

          This makes me less safe rather than more. Note that there is a substantial double standard here, we could never in the name of safety impose this level of burden from C tooling side because maintainers would rightfully be very upset (even toggling a warning in the default set causes discussions). For the same reason it should be unacceptable to use Rust before this is fixed, but somehow the memory safety absolutists convinced many people that this is more important than everything else. (I also think memory safety is important, but I can't help but thinking that pushing for Rust is more harmful to me than good. )

        • pjmlp 3 hours ago

          As someone that also cares about C++, header-only libraries are an abomination from folks that think C and C++ are scripting languages.

    • mellinoe 17 hours ago

      You can expose a C interface from many languages (C++, Rust, C# to name a few that I've personally used). Instead of introducing a new language entirely, it's probably better to write the library in one of the languages you already use.

  • psyclobe 15 hours ago

    SQLite is a true landmark. C notwithstanding, it just happened to be the right tool at the right time, and by now anything else is, well, not as interesting as what they have going on; it totally bucks the trend of throwaway software.

  • kazinator 16 hours ago

    > The C language is old and boring. It is a well-known and well-understood language.

    So you might think, but there is a committee actively undermining this, not to mention compiler people keeping things exciting also.

    There is a dogged adherence to backward compatibility, so you can pretend C has not gone anywhere in thirty-five years, if you like --- provided you aren't invoking too much undefined behavior. (You can't as easily pretend that your compiler has not gone anywhere in 35 years with regard to things you are doing out of spec.)

  • pizlonator 17 hours ago

    SQLite works great in Fil-C with minimal changes.

    So, the argument for keeping SQLite written in C is that it gives the user the choice to either:

    - Build SQLite with Yolo-C, in which case you get excellent performance and lots of tooling. And it's boring in the way that SQLite devs like. But it's not "safe" in the sense of memory safe languages.

    - Build SQLite with Fil-C, in which case you get worse (but still quite good) performance and memory safety that exceeds what you'd get with a Rust/Go/Java/whatever rewrite.

    Recompiling with Fil-C is safer than a rewrite into other memory safe languages because Fil-C is safe through all dependencies, including the syscall layer. Like, making a syscall in Rust means writing some unsafe code where you could screw up buffer sizes or whatnot, while making a syscall in Fil-C means going through the Fil-C runtime.

  • pm2222 19 hours ago

    These points strike me:

      Safe languages insert additional machine branches to do things like verify that array accesses are in-bounds. In correct code, those branches are never taken. That means that the machine code cannot be 100% branch tested, which is an important component of SQLite's quality strategy.
    
      Rust needs to mature a little more, stop changing so fast, and move further toward being old and boring.
    
      Rust needs to demonstrate that it can do the kinds of work that C does in SQLite without a significant speed penalty.
    • steveklabnik 18 hours ago

      If the branch is never taken, and the optimizer can prove it, it will remove the check. Sometimes if it can’t actually prove it there are ways to help it understand, or, in the most extreme case, you do what I commented below.
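
      A small sketch of "helping it understand" (this is optimizer behavior one commonly observes, not a language guarantee): after the up-front length assert, the compiler can usually prove the individual accesses are in bounds and drop their per-access checks.

          fn sum_first_four(xs: &[u32]) -> u32 {
              // One explicit check up front...
              assert!(xs.len() >= 4);
              // ...so the four bounds checks below can typically be elided.
              xs[0] + xs[1] + xs[2] + xs[3]
          }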

      • sedatk 17 hours ago

        Yeah I don't understand the argument. If you can't convince the compiler that that branch will never be taken, then I strongly suspect that it may be taken.

        • compiler-guy 15 hours ago

          A program can have many properties that the compiler cannot prove statically. To take a very basic case, the halting problem.

        • unclad5968 16 hours ago

          That's not the point. The point is that if it is never taken, you can't test it. They don't care that it inserts a conditional OP to check, they care that they can't test the conditional path.

          • sedatk 16 hours ago

            But, there is no conditional path when the type system can assure the compiler that there is nothing to be conditional about. Do they mean that it's impossible to be 100% sure about if there's a conditional path or not?

    • rstuart4133 19 hours ago

      > Safe languages insert additional machine branches to do things like verify that array accesses are in-bounds. In correct code, those branches are never taken. That means that the machine code cannot be 100% branch tested, which is an important component of SQLite's quality strategy.

      This is annoying in Rust. To me array accesses aren't the most annoying; it's match{} branches that will never be invoked.

      There is unreachable!() for such situations, and you would hope that:

          if array_access_out_of_bounds { unreachable!(); }
      
      is recognised by the Rust tooling and just ignored. That's effectively the same as what SQLite is doing now by not doing the check. But it isn't ignored by the tooling: unreachable!() is reported as a missed line. Then there is the test code coverage including the standard output by default, and you have to use regexes on path names to remove it.
      • steveklabnik 18 hours ago

        A more direct translation of the sqlite strategy here is to use get_unchecked instead of [], and then you get the same behaviors.

        Your example does what [] does already, it’s just a more verbose way of writing the same thing. It’s not the same behavior as sqlite.
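
        A minimal sketch contrasting the two behaviors (illustrative functions, not SQLite code):

            fn checked_get(xs: &[u32], i: usize) -> u32 {
                // Indexing inserts a bounds check and panics on failure --
                // the branch the parent comment wants coverage tooling to ignore.
                xs[i]
            }

            // SAFETY: callers must guarantee i < xs.len().
            unsafe fn unchecked_get(xs: &[u32], i: usize) -> u32 {
                // No check is emitted at all, matching what SQLite's C code does today.
                unsafe { *xs.get_unchecked(i) }
            }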

    • pella 17 hours ago

      Turso:

      https://algora.io/challenges/turso "Turso is rewriting SQLite in Rust ; Find a bug to win $1,000"

      ------

      - Dec 10, 2024 : "Introducing Limbo: A complete rewrite of SQLite in Rust"

      https://turso.tech/blog/introducing-limbo-a-complete-rewrite...

      - Jan 21, 2025 - "We will rewrite SQLite. And we are going all-in"

      https://turso.tech/blog/we-will-rewrite-sqlite-and-we-are-go...

      - Project: https://github.com/tursodatabase/turso

      Status: "Turso Database is currently under heavy development and is not ready for production use."

      • a-dub 16 hours ago

        sqlite3 has one (apparently this is called "the amalgamation") c source file that is ~265 kloc (!) long with external dependencies on zlib, readline and ncurses. built binaries are libsqlite3.so at 4.8M and sqlite3 at 6.1M.

        turso has 341 rust source files spread across tens of directories and 514 (!) external dependencies that produce (in release mode) 16 libraries and 7 binaries with tursodb at 48M and libturso_sqlite3.so at 36M.

        looks roughly an order of magnitude larger to me. it would be interesting to understand the memory usage characteristics in real-world workloads. these numbers also sort of capture the character of the languages. for extreme portability and memory efficiency, probably hard to beat c and autotools though.

        • csande17 2 hours ago

          I don't think the SQLite authors actually edit the single giant source file directly. Their source control repository has the code split up into many separate files, which are combined into "the amalgamation" by a build script: https://github.com/sqlite/sqlite/tree/master/src

          • a-dub an hour ago

            yeah i saw that afterwards. they do it to squeeze more optimization out of the compiler by putting everything in one compilation unit. given the prominence of the library i have to wonder if this was an input to zig's behind-the-scenes single compilation unit design choice...

    • 01HNNWZ0MV43FF 17 hours ago

      But if you don't have the bounds checks in machine code, then you don't have bounds checks.

      I suppose SQLite might use a C linter tool that can prove the bounds checks happen at a higher layer, and then elide redundant ones in lower layers, but... C compilers won't do that by default, they'll just write memory-unsafe machine code. Right?

  • 6r17 7 hours ago

    If I remember correctly, most of SQLite's "closed-source" leverage comes from the test suite, which probably cannot be transposed to another language as easily. Ultimately there are already other solutions coming up, rewriting it in Rust or Go.

  • dusted 5 hours ago

    In my opinion, you don't get to ask "why is X done by Y" before you've done X yourself by something not Y and not Y by proxy either.

  • MomsAVoxell 4 hours ago

    Back in the good ol'/bad ol' days of the very early web/Internet, I had the fortune of working with someone who, let's say, has kind of a background in certain operating systems circles.

    Not only had this fellow built a functional ISP in one of the toughest markets in the world (at that time), but he'd also managed to build the database engine and quite a few of the other tools that ran that ISP, and he was in danger of setting a few standards for a few things which, since then, have long since settled out, but .. nevertheless .. it could've been.

    Anyway, this fellow wrote everything in C. His web page, his TODO.h for the day .. he had C-based tools for managing his docs, for doing syncs between various systems under his command (often in very far-away locations, and even under water a couple times) .. everything, in C.

    The database system he wrote in pure C was, at the time, quite a delight. It gave a few folks further up the road a bit of a tight neck.

    He went on to do an OS, because of course he did.

    Just sayin', SQLite devs aren't the only ones who got this right. ;)

  • Havoc 16 hours ago

    For a project that is functionally “done” switching doesn’t make sense. Something like kernel code where you know it’ll continue to evolve - there going through the pain may be worth it

  • firesteelrain 17 hours ago

    One thing I found especially interesting is the section at the end about why Rust isn’t used. It leaves open the door and at least is constructive feedback to the Rust community

  • jokoon 17 hours ago

    I wonder if the hype helps rust being a better language

    At this point I wish the creators of the language could talk about what rust is bad at.

    • steveklabnik 17 hours ago

      Folks involved often do! Talking about what’s not great is the only path towards getting better, because you have to identify pain points in order to fix them.

      • estebank 16 hours ago

        I would go as far as saying that 90% of managing the project is properly communicating, discussing and addressing the ways in which Rust sucks. The all-hands in NL earlier this year was wall to wall meetings about how much things suck and what to do about them! I mean this in the best possible way. ^_^

  • deanebarker 10 hours ago

    It's hard to argue with success. SQLite's pervasiveness is kind of a royal flush.

  • a-saleh 9 hours ago

    Ok, I didn't expect such a high praise for rust. I am not joking.

  • morshu9001 15 hours ago

    This is what I expected. Rust is the first thing that has been worth considering as a C replacement. C++ wasn't.

  • belter 6 hours ago

    Some of the most interesting comments are out of: "3. Why Isn't SQLite Coded In A "Safe" Language?"

    "....Safe languages insert additional machine branches to do things like verify that array accesses are in-bounds. In correct code, those branches are never taken. That means that the machine code cannot be 100% branch tested, which is an important component of SQLite's quality strategy..."

    "...Safe languages usually want to abort if they encounter an out-of-memory (OOM) situation. SQLite is designed to recover gracefully from an OOM. It is unclear how this could be accomplished in the current crop of safe languages..."

  • plainOldText 19 hours ago

    I’d be curious to know what the creators of SQLite would have to say about Zig.

    Zig gives the programmer more control than Rust. I think this is one of the reasons why TigerBeetle is written in Zig.

    • metaltyphoon 17 hours ago

      > Zig gives the programmer more control than Rust

      More control over what exactly? Allocations? There is nothing Zig can do that Rust can’t.

      • array_key_first 13 hours ago

        > More control over what exactly? Allocations? There is nothing Zig can do that Rust can’t.

        I mean yeah, allocations. Allocations are always explicit. Which is not true in C++ or Rust.

        Personally I don't think it's that big of a deal, but it's a thing and maybe some people care enough.

        • aw1621107 12 hours ago

          > Which is not true in [] Rust.

          ...If you're using the alloc/std crates (which to be fair, is probably the vast majority of Rust devs). libcore and the Rust language itself do not allocate at all, so if you use appropriate crates and/or build on top of libcore yourself you too can have an explicit-allocation Rust (though perhaps not as ergonomic as Zig makes it).

      • Cloudef 17 hours ago

        I think Zig generally composes better than Rust. With Rust you pretty much have to start over if you want reusable/composable code, that is, not use the default std. Rust has small crates for every little thing partly because it doesn't compose well and partly to improve compile times. The libc dependency in the default std is also a major L.

        • metaltyphoon 17 hours ago

          > I think zig generally composes better than rust.

          I read your response 3 times and I truly don't know what you mean. Mind explaining with a simple example?

          • Cloudef 16 hours ago

            It mainly comes down to how the std is designed. Zig has many good building blocks, like allocators, and every function that allocates something takes one, so you can reuse the same code in very different situations (a rough C rendering of that pattern is sketched after the link below).

            Hash maps in Zig's std are another great example: you can use an adapter to completely change how the data is stored and accessed while keeping the same API [1]. For example, to have a map with a bounded memory footprint that automatically truncates itself, in Rust you need to either write a completely new data structure or rely on someone's crate again (indexmap).

            Errors in Zig also compose better; in Rust I find error handling really annoying. Anyhow makes it better for application development, but you shouldn't use it when writing libraries.

            When writing Zig I always feel like I can reuse pieces of existing code by combining the building blocks at hand (including on freestanding targets!). In Rust I always feel like you need to go for a fully tailored solution with its own gotchas, which is ironic considering how many crates there are and how many crates projects depend on vs. typical Zig projects, which often don't depend on much.

            1: https://zig.news/andrewrk/how-to-use-hash-map-contexts-to-sa...
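
            (Not Zig, but since the thread's subject language is C, here is a rough C sketch of the same "every allocating function takes an allocator" idea; the names are made up for illustration and this is not any library's real API.)

              /* Hypothetical allocator-parameter pattern, sketched in C. */
              #include <stddef.h>
              #include <stdlib.h>

              typedef struct {
                  void *(*alloc)(void *ctx, size_t n);
                  void  (*free)(void *ctx, void *p);
                  void  *ctx;   /* arena, pool, failing test allocator, ... */
              } demo_allocator;

              /* The caller decides where memory comes from; OOM is reported, not fatal. */
              int demo_buffer_init(demo_allocator *a, char **buf, size_t n) {
                  *buf = (char *)a->alloc(a->ctx, n);
                  return *buf ? 0 : -1;
              }

              /* A plain heap-backed allocator for ordinary use. */
              static void *heap_alloc(void *ctx, size_t n) { (void)ctx; return malloc(n); }
              static void  heap_free(void *ctx, void *p)   { (void)ctx; free(p); }

              int main(void) {
                  demo_allocator heap = { heap_alloc, heap_free, 0 };
                  char *buf;
                  if (demo_buffer_init(&heap, &buf, 64) != 0) return 1;
                  heap.free(heap.ctx, buf);
                  return 0;
              }

            Swapping in an arena or a deliberately failing test allocator is then just a matter of passing a different demo_allocator, which is the kind of reuse being described above.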

    • Jtsummers 19 hours ago

      > Nearly all systems have the ability to call libraries written in C. This is not true of other implementation languages.

      From section "1.2 Compatibility". How easy is it to embed a library written in Zig in, say, a small embedded system where you may not be using Zig for the rest of the work?

      Also, since you're the submitter, why did you change the title? It's just "Why is SQLite Coded in C", you added the "and not Rust" part.

      • plainOldText 19 hours ago

        The article devotes its last section to explaining why Rust is not a good fit (yet), so I wanted the title to cover that part of the conversation, since I believe it is meaningful. It illustrates the tradeoffs in software engineering.

    • ginko 8 hours ago

      I'm generally a fan of Zig, but it's in no way stable enough to write something like SQLite in it.

  • dgfitz 17 hours ago

    > Rust needs to mature a little more, stop changing so fast, and move further toward being old and boring.

    Talk of C99 or C++11 juxtaposed with “oh, you need the nightly build of Rust” meant I never felt comfortable just banging out “yum install rust” and giving it a go.

    • steveklabnik 17 hours ago

      Other than some operating systems projects, I haven’t run into a “requires nightly” in the wild for years. Most users use the stable releases.

      (There are some decent reasons to use the nightly toolchain in development even if you don’t rely on any unfinished features in your codebase, but that means they build on stable anyway just fine if you prefer.)

      • dgfitz 16 hours ago

        Good to know, maybe I’ll give it a whirl. I’d been under the (mistaken, apparently) impression that if one didn’t update monthly they were going to have a bad time.

        • steveklabnik 16 hours ago

          You may be running into forwards compatibility issues, not backwards compatibility issues, which is what nightly is about.

          The Rust Project releases a new stable compiler every six weeks. Because it is backwards compatible, most people update fairly quickly, as it is virtually always painless. So if you don’t update your compiler, you may try out a new package version that uses features or standard library calls which don’t exist in the version you’re using, because the authors updated regularly. There have been some developments in Cargo to try to mitigate this, but since staying on an old compiler isn’t what the majority of users do, those features landed relatively recently and aren’t widely adopted yet.

          Nightly features are ones that aren’t properly accepted into the language yet, and so are allowed to break in backwards incompatible ways at any time.

          • uecker 10 hours ago

            But the original point "C99 vs something later" is also about forward compatibility issues.

            • steveklabnik 9 minutes ago

              Sure, I had originally responded to the "needs nightly Rust part" only.

  • ternaryoperator 15 hours ago

    > Recoding SQLite in Go is unlikely since Go hates assert()

    Any idea what this refers to? assert is a macro in C. Is the implication that OP wants the capability of testing conditions and then turning those tests off in a production release? If so, then I think the argument is more that Go hates the idea of a preprocessor. Or have I misunderstood the point being made?
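
    For reference, the C mechanics in question (standard <assert.h>, nothing SQLite-specific): assert() is a real runtime check in debug builds and expands to nothing when NDEBUG is defined, so the checks can be compiled out of a production release. Go has no preprocessor, and its FAQ argues against assertions altogether, which is presumably what the quoted line is getting at.

      /* Standard C assert() behavior, not SQLite-specific code.          */
      /* Build with -DNDEBUG and the assert() below expands to ((void)0), */
      /* i.e. the check disappears from the production binary.            */
      #include <assert.h>

      int demo_divide(int a, int b) {
          assert(b != 0);   /* enforced in debug builds, compiled out in release */
          return a / b;
      }

      int main(void) {
          return demo_divide(10, 2) == 5 ? 0 : 1;
      }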

  • next_xibalba 15 hours ago

    Aren't SQLite’s bottlenecks primarily I/O-bound (not CPU)? If so, fopen, fread, and syscalls matter most to performance, and raw language efficiency wouldn't be the limiter.

  • system2 16 hours ago

    What's up with SQLite news lately? I feel like I see at least 1-2 posts about it per day.

  • BiraIgnacio 4 hours ago

    "1. C Is Best"

  • binary132 17 hours ago

    I love him so much.

  • tonyhart7 17 hours ago

    because Rust wasn't out yet back then????

  • coolThingsFirst 16 hours ago

    I don't want to sound cynical, but a lot of it has to do with the simplicity of the language. It's much harder to find a good Rust engineer than a C one. When all you have is pointers and structs, it's much easier to meet the requirements for the role.

  • rednafi 16 hours ago

    Also, Rust needs a better stdlib. A crate for every little thing is kinda nuts.

    One reason I enjoy Go is its pragmatic stdlib. In most cases, I can get away without pulling in any 3p deps.

    Now of course Go doesn’t work where you can’t tolerate GC pauses and need some sort of FFI. But because of the stdlib and faster compilation, Go somehow feels lighter than Rust.

    • firesteelrain 15 hours ago

      Rust doesn’t really need a better stdlib as much as a broader one, since it is intentionally narrow. Go’s stdlib includes opinions like net/http and templates that Rust leaves to crates. The trade-off is Rust favors stability and portability at the core, while Go favors out-of-the-box ergonomics. Both approaches work, just for different teams.

    • afdbcreid 14 hours ago

      Is Rust's stdlib worse than C's? That's not really an argument here.

    • tonyhart7 15 hours ago

      me when I don't know ball: