Show HN: DuckDB for Kafka Stream Processing

(sql-flow.com)

33 points | by dm03514 3 hours ago

11 comments

  • pulkitsh1234 an hour ago

    (Not an expert in stream processing.) From the docs here https://sql-flow.com/docs/introduction/basics#output-sink it seems like this works on "batches" of data. How is this different from batch processing? Where is the "stream" here?

    • dm03514 23 minutes ago

      Ha, yes! A pipeline assumes a "batch" of data, which is backed by an ephemeral DuckDB in-memory table. The goal is to provide SQL table semantics and implement pipelines in a way where the batch size can be toggled without a change to the pipeline logic.

      The stream is achieved by the continuous flow of data from Kafka.

      SQLFlow exposes a variable for batch size. Setting the batch size to 1 makes SQLFlow read a Kafka message, apply the processor SQL logic, and ensure the SQL results are successfully committed to the sink, one message after another.

      SQLFlow provides at-least-once delivery guarantees. It will only commit the source message once it successfully writes to the pipeline output (sink).

      https://sql-flow.com/docs/operations/handling-errors

      The batch table is just a convention that allows for seamless batch size configuration. If your throughput is low, or if you require message-by-message processing, SQLFlow can be toggled to a batch of 1. If you need higher throughput and can tolerate the latency, the batch can be toggled higher.
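
      Roughly, the loop looks like this (a simplified sketch, not the actual SQLFlow implementation; the topic, sink, and processor SQL below are made up):

        # Simplified sketch of the read -> SQL -> sink -> commit loop described above.
        # Not SQLFlow's real code; topic names, the sink, and the query are illustrative.
        import duckdb
        from confluent_kafka import Consumer

        BATCH_SIZE = 1000  # the knob: 1 = message-by-message, higher = more throughput

        consumer = Consumer({
            "bootstrap.servers": "localhost:9092",
            "group.id": "sqlflow-sketch",
            "enable.auto.commit": False,  # commit manually, only after the sink write
        })
        consumer.subscribe(["input-events"])

        con = duckdb.connect()  # ephemeral in-memory DuckDB
        con.execute("CREATE TABLE batch (payload JSON)")

        def write_to_sink(rows):
            print(rows)  # stand-in for the real sink (Kafka topic, Postgres, files, ...)

        while True:
            msgs = consumer.consume(num_messages=BATCH_SIZE, timeout=1.0)
            if not msgs:
                continue
            con.execute("DELETE FROM batch")  # the batch table only lives for one batch
            con.executemany(
                "INSERT INTO batch VALUES (?)",
                [(m.value().decode(),) for m in msgs if m.error() is None],
            )
            # the processor SQL runs against the batch table, whatever its size
            rows = con.execute(
                "SELECT payload->>'$.user_id' AS user_id, count(*) AS n "
                "FROM batch GROUP BY 1"
            ).fetchall()
            write_to_sink(rows)
            consumer.commit(asynchronous=False)  # at-least-once: offsets commit last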

  • srameshc 2 hours ago

    This looks brilliant, thank you. I love DuckDB and use it for a lot of local data processing jobs. We have a data stream, but not at the size where we need to push to BigQuery or elsewhere. I was thinking of trying something like sql-flow, and I'm glad it makes the job this easy.

  • itsfseven 2 hours ago

    It would be great if this supported Pulsar too!

  • mihevc 2 hours ago

    How does this compare to https://github.com/Query-farm/tributary ?

    • rustyconover 2 hours ago

      The next major release of Tributary will support Avro, Protobuf, and JSON along with the Schema Registry. It will also bring the ability to write to Kafka with transactions.

      But really, you should get excited for DuckDB Labs to build out materialized views: materialized views where you can ingest more streaming data to update aggregates. That way you could just keep pushing rows from Kafka through the aggregates.
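
      To make that concrete, here's roughly what you have to hand-roll today (a sketch with made-up names); an incremental materialized view would maintain the aggregate for you instead of the recompute below:

        # Not a DuckDB feature yet -- roughly the pattern you hand-roll today,
        # which an incremental materialized view would keep up to date for you.
        import duckdb

        con = duckdb.connect("stream.db")
        con.execute("CREATE TABLE IF NOT EXISTS events (user_id TEXT, amount DOUBLE)")

        def ingest(batch):
            """batch: list of (user_id, amount) rows pulled off Kafka."""
            con.executemany("INSERT INTO events VALUES (?, ?)", batch)
            # recompute the aggregate on every batch; a streaming materialized
            # view would instead fold the new rows into the existing totals
            con.execute("""
                CREATE OR REPLACE TABLE totals_by_user AS
                SELECT user_id, sum(amount) AS total FROM events GROUP BY user_id
            """)

        ingest([("a", 1.0), ("b", 2.5)])
        print(con.execute("SELECT * FROM totals_by_user ORDER BY user_id").fetchall())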

      It is going to be a POWERHOUSE for streaming analytics.

      Contact DuckDB Labs if you want to sponsor the work on materialized views: https://duckdb.org/roadmap

    • dm03514 2 hours ago

      Oh yes!! I've seen this a couple of times. I am far from an expert in Tributary, so please take this with a grain of salt.

      Based on the Tributary documentation, I understand that Tributary embeds Kafka consumers into DuckDB. This makes DuckDB the main process that you run to perform consumption. I think this makes creating stream-processing POCs very accessible, and it looks like it is quite easy to start streaming data into DuckDB. What I don't see is a full story around DevOps, operations, testing, configuration as code, etc.

      SQLFlow is a service that embeds DuckDB as the storage and processing brains. Because of this, we're able to offer metrics, testing utilities, pipelines as code, and all the other DevOps utilities that are necessary to run a huge number of streaming instances 24x7. SQLFlow was created as the tool I wish I had had for simple stream processing in production, in high-availability contexts :)

      • mihevc 2 hours ago

        Nice! Thanks for the context, it's great to know!

  • mbay 2 hours ago

    I see an example with what looks like a lookup-type join against a Postgres DB. Are stream/stream joins supported, though?

    The DLQ and Prometheus integration out of the box are nice.

    • dm03514 2 hours ago

      Stream to stream joins are NOT currently supported. This is a regularly requested feature, and I'll look at prioritizing it.

      SQLFlow uses DuckDB internally for windowing and stream state storage :), and I'll look at extending it to support stream/stream joins.
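
      For a rough idea of what that would mean (a sketch of the idea only, not SQLFlow syntax): buffer the two streams into DuckDB tables and join on a key within a time window:

        # Sketch only -- not something SQLFlow supports today. Two streams buffered
        # into DuckDB tables, joined on a key within a time window.
        import duckdb

        con = duckdb.connect()
        con.execute("CREATE TABLE clicks (user_id TEXT, ts TIMESTAMP)")
        con.execute("CREATE TABLE purchases (user_id TEXT, ts TIMESTAMP, amount DOUBLE)")

        # ... rows from two Kafka topics get buffered into the tables above ...
        con.execute("INSERT INTO clicks VALUES ('a', '2024-01-01 00:00:05')")
        con.execute("INSERT INTO purchases VALUES ('a', '2024-01-01 00:00:40', 9.99)")

        # join purchases back to clicks by the same user within a 1-minute window
        rows = con.execute("""
            SELECT c.user_id, c.ts AS click_ts, p.ts AS purchase_ts, p.amount
            FROM clicks c
            JOIN purchases p
              ON p.user_id = c.user_id
             AND p.ts BETWEEN c.ts AND c.ts + INTERVAL 1 MINUTE
        """).fetchall()
        print(rows)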

      Could you describe your use case a bit more? I'd really appreciate it if you could create an issue in the repo describing your use case and the functionality you'd like!

      https://github.com/turbolytics/sql-flow/issues

      We were looking at solving some of the simpler use cases first before branching out into these more complicated ones :)

      • mbay an hour ago

        I worked on stream processing at my previous gig but don't have a need for it currently. Just curious.