Transactional Object Storage?

(blog.mbrt.dev)

50 points | by mbrt 8 days ago

16 comments

  • jitl 17 minutes ago

    There is also SlateDB, another work in progress take on this. HN link: https://news.ycombinator.com/item?id=41714858

  • victorbjorklund 2 hours ago

    Pretty cool and could be useful for stuff that isn't updated so frequently, like a CMS.

  • svrakitin 8 days ago

    Pretty cool! Do you have any ideas already about how to make it work with S3, considering it doesn't support If- headers?

    • mbrt 8 days ago

      I think it's now much easier to achieve than a year ago. The critical one is conditional writes on new objects, because otherwise you can't safely create transaction logs in the presence of timeouts. This is not enough though.

      My approach on S3 would be to make sure the ETag of an object changes whenever other transactions looking at it must be blocked. This makes it easier to use conditional reads (https://docs.aws.amazon.com/AmazonS3/latest/userguide/condit...) on COPY or GET operations.

      For writes, I would use PUT on a temporary staging area and then a conditional COPY + DELETE afterward. This is certainly slower than GCS, but I think it should work.

      Locking without modifying the object is the part that needs some optimization though.
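A minimal sketch of the compare-and-swap idea described above, using an in-memory stand-in for an S3-style bucket (the class and method names here are hypothetical; real S3 access would go through something like boto3, and S3's actual support for conditional COPY should be verified):

```python
import uuid

class PreconditionFailed(Exception):
    """Raised when an If-Match / If-None-Match style precondition fails."""

class FakeObjectStore:
    """In-memory stand-in for an S3-style bucket with ETags."""
    def __init__(self):
        self._objects = {}  # key -> (etag, data)

    def put_if_absent(self, key, data):
        # Conditional write on a *new* object (If-None-Match: *), needed to
        # create transaction log entries safely in the presence of timeouts.
        if key in self._objects:
            raise PreconditionFailed(key)
        etag = uuid.uuid4().hex
        self._objects[key] = (etag, data)
        return etag

    def get(self, key):
        return self._objects[key]  # (etag, data)

    def copy_if_match(self, src, dst, expected_etag):
        # Conditional COPY: commit only if dst's ETag is unchanged,
        # i.e. a compare-and-swap on the destination object.
        if dst in self._objects and self._objects[dst][0] != expected_etag:
            raise PreconditionFailed(dst)
        etag = uuid.uuid4().hex
        self._objects[dst] = (etag, self._objects[src][1])
        return etag

    def delete(self, key):
        self._objects.pop(key, None)

store = FakeObjectStore()
etag0 = store.put_if_absent("account", b"balance=100")

# Writer 1: stage the new value, then commit with a conditional copy.
store.put_if_absent("staging/tx1", b"balance=90")
store.copy_if_match("staging/tx1", "account", etag0)
store.delete("staging/tx1")

# Writer 2 still holds the old ETag, so its commit fails and must retry.
store.put_if_absent("staging/tx2", b"balance=50")
try:
    store.copy_if_match("staging/tx2", "account", etag0)
    committed = True
except PreconditionFailed:
    committed = False
```

The losing writer observes the precondition failure, re-reads the object (getting the fresh ETag), and retries, which is exactly why a changing ETag is enough to block concurrent transactions.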

    • choppaface an hour ago

      Not a full solution, but given that the OP seeks to be a key-value store (versus a full RDBMS? despite the comparisons with Spanner and Postgres?), it's important to weigh how Rockset (also mainly a KV store) dealt with S3-backed caching at scale:

        * https://rockset.com/blog/separate-compute-storage-rocksdb/
      
        * https://github.com/rockset/rocksdb-cloud
      
      Keep in mind Rockset is definitely a bit biased towards vector search use cases.

      • mbrt 16 minutes ago

        Nice, thanks for the reference!

        BTW, the comparison was only to give an idea about isolation levels, it wasn't meant to be a feature-to-feature comparison.

        Perhaps I didn't make it prominent enough, but at some point I say that many SQL databases have key-value stores at their core, and implement a SQL layer on top (e.g. https://www.cockroachlabs.com/docs/v22.1/architecture/overvi...).

        Basically SQL can be a feature added later to a solid KV store as a base.
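A rough sketch of how a SQL layer can sit on a KV store: each row fans out into one key per column under an ordered key space. The encoding below is purely illustrative (it is not CockroachDB's actual format, and the names are made up):

```python
def encode_row(table, pk, columns):
    """Map one relational row to KV pairs: /<table>/<pk>/<col> -> value."""
    return {f"/{table}/{pk}/{col}": val for col, val in columns.items()}

kv = {}  # the underlying ordered KV store (a plain dict here)
kv.update(encode_row("users", 1, {"name": "ada", "email": "ada@x.dev"}))
kv.update(encode_row("users", 2, {"name": "bob", "email": "bob@x.dev"}))

# Point lookup: SELECT email FROM users WHERE id = 1  ->  one key read.
email = kv["/users/1/email"]

# Table scan: SELECT * FROM users  ->  a prefix scan over sorted keys.
user_keys = sorted(k for k in kv if k.startswith("/users/"))
```

Because keys sort by table and primary key, range queries and scans reduce to prefix scans, which is why a transactional, ordered KV store is a workable base for a SQL engine.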

  • ramesh31 an hour ago

    so... Delta Lake?

  • Onavo 8 days ago

    Congrats on reinventing the data lake? This is actually how most of the newer generations of "cloud native" databases work, where they separate compute and storage. The key is that they have a more sophisticated caching layer so that the latency cost of a query can be amortized across requests.

    • mbrt 8 days ago

      It's my understanding that the newer generation of data lakes still makes use of a tiny, strongly consistent metadata database to keep track of what is where. This is orders of magnitude smaller than what you'd have by putting everything in the same database, but it's still there. This is also the case in newer data streaming platforms (e.g. https://www.warpstream.com/blog/kafka-is-dead-long-live-kafk...).
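The pattern can be sketched roughly like this: immutable data and manifests go to object storage first, and only then does a tiny, strongly consistent store flip a version pointer. All names below are hypothetical stand-ins:

```python
import threading

class MetadataStore:
    """Stand-in for the tiny, strongly consistent metadata database."""
    def __init__(self):
        self._lock = threading.Lock()
        self._version = 0
        self._manifest_key = None

    def compare_and_swap(self, expected_version, new_manifest_key):
        # Atomically advance the pointer only if nobody committed first.
        with self._lock:
            if self._version != expected_version:
                return False
            self._version += 1
            self._manifest_key = new_manifest_key
            return True

    def current(self):
        with self._lock:
            return self._version, self._manifest_key

objects = {}  # stand-in for the (large, cheap) object store

# Commit: write immutable data files and a manifest to object storage...
objects["data/0001.parquet"] = b"rows..."
objects["manifest/v1"] = "data/0001.parquet"

# ...then flip the pointer in the metadata store. Readers resolve the
# pointer first, so half-written objects are never visible to them.
meta = MetadataStore()
version, _ = meta.current()
ok = meta.compare_and_swap(version, "manifest/v1")
```

The metadata store only holds the pointer and version, which is why it can stay orders of magnitude smaller than the data it indexes.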

      I'm curious to hear if you have examples of any database using only object storage as a backend, because back when I started, I couldn't find any.