Surprised no one has mentioned Turso yet!
They recently landed multi-writer support for their Rust SQLite reimplementation; the lack of it has personally been the biggest issue I've had with using SQLite for high-concurrency applications.
`PRAGMA journal_mode = 'mvcc';`
https://docs.turso.tech/tursodb/concurrent-writes
Very excited to see if SQLite responds by adding native support; I'm hoping competition here will spur improvements on both sides.
In the past I've used the backup API - https://sqlite.org/backup.html - to load an in-memory copy of the SQLite DB alongside the live one. I would do this after certain user actions, and then by diffing the two copies I would know what changed... I guess it's a poor way of implementing PostgreSQL events, but it worked!
Granted, it was a small DB (a few megabytes). I also wanted to avoid collecting changes one by one; I simply wanted a diff since last time.
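A minimal sketch of that pattern using Python's sqlite3 module, whose Connection.backup() wraps the SQLite backup API (the table and column names here are made up for illustration):

```python
import sqlite3

# Live database (":memory:" here just to keep the sketch self-contained).
live = sqlite3.connect(":memory:")
live.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
live.execute("INSERT INTO items (name) VALUES ('apple')")
live.commit()

# Take a snapshot into a separate in-memory database via the backup API.
snapshot = sqlite3.connect(":memory:")
live.backup(snapshot)

# ... some user action mutates the live database ...
live.execute("INSERT INTO items (name) VALUES ('banana')")
live.commit()

# Diff: rows present in the live DB but not in the snapshot.
live_rows = set(live.execute("SELECT id, name FROM items"))
old_rows = set(snapshot.execute("SELECT id, name FROM items"))
added = live_rows - old_rows
print(added)  # {(2, 'banana')}
```

This only catches inserts; a full diff would also compare the other direction for deletes and compare overlapping keys for updates.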
STRICT tables are something I appreciate very much, even though I cannot recall running into a problem that would have been prevented by their presence in the before-time. But it's good to have all the same.
I don't think I've ever done much with SQLite's JSON functions, but I have on one or two occasions used a constraint to enforce that a TEXT column contains valid JSON, which would have been very tedious to do otherwise.
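That constraint is a one-liner with json_valid(). A minimal sketch (table and column names made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# json_valid() in a CHECK constraint rejects malformed JSON at write time.
con.execute(
    "CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT CHECK (json_valid(body)))"
)

con.execute("INSERT INTO docs (body) VALUES (?)", ('{"ok": true}',))  # accepted

rejected = False
try:
    con.execute("INSERT INTO docs (body) VALUES (?)", ('{not json',))
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```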
The JSON functions are the sleeper hit for me. I've used them extensively in ETL scripts where input data is semi-structured - being able to do json_extract and json_each directly in SQL instead of writing a Python preprocessing step saved a surprising amount of complexity.
Strict tables are also worth calling out more. The lack of type enforcement was always the thing that made me reach for PostgreSQL instead, and strict mode closes that gap nicely for smaller projects.
One handy bit of SQLite's JSON support which isn't called out in this article is the shorthand operator syntax, e.g.
SELECT payload->>'$.user.id' AS user_id FROM events ...
This works a little more consistently than JSON_EXTRACT: the -> operator always returns a JSON string, while the ->> operator (used here) always returns an SQL scalar (text, number, or NULL). More details: https://sqlite.org/json1.html#jptr
Everybody has a JSON extension, and they're all slightly different.
I just got hit badly by Dreamhost, which is still running MySQL 8.0. That version has a "JSON extension" which is almost totally useless. I designed something before discovering how different MySQL 8.4 and MySQL 8.0 are.
That does seem cool. So there are sufficient functions to iterate through collections that might be stored in a single row of a JSON column?
Yes, using the table-valued JSON functions like JSON_EACH and JSON_TREE (which works recursively). Details: https://sqlite.org/json1.html#table_valued_functions_for_par...
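A minimal sketch of exploding a JSON array stored in a single row with json_each (table and column names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, tags TEXT)")
con.execute("INSERT INTO events (tags) VALUES (?)", ('["a", "b", "c"]',))

# json_each is a table-valued function: joining against it yields one
# output row per element of the array stored in events.tags.
tags = [t for (t,) in con.execute(
    "SELECT je.value FROM events, json_each(events.tags) AS je WHERE events.id = 1"
)]
print(tags)  # ['a', 'b', 'c']
```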
There's also spellfix1, an extension you can enable to get fuzzy search.
And ON CONFLICT, which can help dedupe (among other things) in a simple and performant way.
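A sketch of the upsert form, which SQLite has supported since 3.24 (table and column names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT, hits INTEGER DEFAULT 1)")

def upsert(key, value):
    # Insert, or on a duplicate key take the new value and bump a counter.
    # 'excluded' refers to the row that failed to insert.
    con.execute(
        "INSERT INTO kv (key, value) VALUES (?, ?) "
        "ON CONFLICT (key) DO UPDATE SET value = excluded.value, hits = hits + 1",
        (key, value),
    )

upsert("a", "first")
upsert("a", "second")
print(con.execute("SELECT value, hits FROM kv WHERE key = 'a'").fetchone())  # ('second', 2)
```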
I was trying to port a small program I wrote from Postgres to an SQLite backend (mainly to make it easier to install) and was pleased to find out SQLite supported ON CONFLICT. I was less pleased to find out that apparently I abuse CTEs to insert foreign keys all the time, and SQLite was not happy doing that. Something like this works in Postgres, but SQLite won't accept an INSERT inside a WITH:
with thing_key as (
    insert into item(key, description) values ('thing', 'a thing')
    on conflict do nothing
)
insert into user_note(uid, key, note) values (123, 'thing', 'I like this thing')
on conflict (uid, key) do update set note = excluded.note;
I've found FTS5 not useful for serious fuzzy or subword full-text search. For example, I have documents containing "DaemonSet", but if the user searches for "Daemon" there will be no results.
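That matches FTS5's default tokenizer behavior, which indexes "DaemonSet" as a single token. A quick sketch (assumes an SQLite build with FTS5 compiled in; the table is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE docs USING fts5(body)")
con.execute("INSERT INTO docs (body) VALUES ('Deploy the DaemonSet to every node')")

def hits(query):
    # Count documents matching an FTS5 MATCH query.
    return con.execute(
        "SELECT count(*) FROM docs WHERE docs MATCH ?", (query,)
    ).fetchone()[0]

print(hits("Daemon"))   # 0 - 'DaemonSet' is indexed as one token, so no match
print(hits("Daemon*"))  # 1 - a prefix query matches, but users rarely type the *
```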
I have found this as well. FTS5 is convenient to have as an option, but it's not as versatile as Postgres or Sonic or other full-text search solutions.
Does anyone have any other favorite modern bloom-filter-based search solutions that don't need to store copies of all the documents in the search DB? Ideally something that can run in WASM too, so we can ship a tiny search index to the browser. I found https://github.com/tinysearch/tinysearch but haven't tried it yet.
None of these are news to the HN community. Write-ahead logging and concurrency PRAGMAs have been a given for a decade now. IIRC, FTS5 doesn't often come baked in and you have to compile the SQLite amalgamation to get it. If you do need better typing, you should really use PostgreSQL.
However, I will concede that the article doesn't mention at all (and far fewer people are aware) that you can build HA, cross-region replicated SQLite using purely OSS software, provided you architect your software around it. Now that would be a really good `Modern SQLite: Features You Didn't Know It Had` article!
Another interesting discussion point is how far self-hosted PostgreSQL and pgBackRest can get you toward a near-zero-data-loss setup with low RPO and RTO. It's simply amazing we can self-host all this.
> Write-ahead logging and concurrency PRAGMAs have been a given for a decade now.
All of the listed features except for strict tables and generated columns have been in SQLite for 10+ years, and those two are certainly not new. The JSON APIs were not made part of the standard distribution until 3.38 (2022-02) but were added in 3.9 (2015-10) and widely used long before they were upgraded from an optional extension to a core feature.
- Generated columns: 3.31 (2020-01)
- Strict tables: 3.37 (2021-11)
There are plenty of people in the HN community who don't know much about SQLite. Tech is a big, huge, enormous, gigantic domain.
I’m afraid you overestimated my knowledge