66 comments

  • brunohaid 20 hours ago

    Bit thin on details and not looking like they’ll open source it, but if someone clicked the post because they’re looking for their “replace ES” thing:

    Both https://typesense.org/ and https://duckdb.org/ (with their spatial extension) are excellent geo-performance-wise; the latter now seems really production-ready, especially when the data doesn’t change that often. Both are fully open source, including clustered/sharded setups.

    No affiliation at all, just really happy camper.

    • j_kao 20 hours ago

      These are great projects, we use DuckDB to inspect our data lake and for quick munging.

      We will have some more blog posts in the future describing different parts of the system in more detail. We were worried too much density in a single post would make it hard to read.

    • atombender 13 hours ago

      DuckDB does not have any kind of sharding or clustering? It doesn't even have a server (unless you count the HTTP Server Extension)?

      • brunohaid 7 minutes ago

        Good point; that was mostly about Typesense (can't edit the comment anymore).

        But given that DuckDB handles "take this n GB parquet file/shard from a random location, load it into memory and be ready in < 1 sec" very well, I'd argue it's quite easy to build something that scales horizontally.

        We use it both for the importer pipeline that processes the 200GB compressed GBIF.org parquet dataset and for queries like https://www.meso.cloud/plants/pinophyta/cupressales/pinopsid... and the sheer number of functions beyond simple things like "how close is a/b to x/y" or "is n within area x" makes it a joy to work with.
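        For readers who haven't used a spatial extension: the two queries quoted above correspond to distance and containment predicates (ST_Distance / ST_Within in DuckDB's spatial extension). A stdlib Python sketch of what such predicates compute, not how DuckDB implements them:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km ("how close is a/b to x/y")."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def point_in_polygon(pt, poly):
    """Ray-casting containment test ("is n within area x").
    poly is a list of (x, y) vertices in order."""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        # Count edge crossings of a horizontal ray from pt.
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside
```

        In DuckDB you would express the same thing declaratively in SQL, and the engine picks the execution strategy for you; that is most of the appeal.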

    • sureglymop 20 hours ago

      These are great. I am eternally grateful that projects like this are open source; I do, however, find it hard to integrate them into my own projects.

      A while ago I tried to create something that has DuckDB + its spatial and SQLite extensions statically linked and compiled in. I realized I was a bit in over my head when my build failed because both of them required SQLite symbols, but from different versions.

    • jjordan 20 hours ago

      Typesense is an absolute beast, and it has a pretty great dev experience to boot.

    • mcdonje 16 hours ago

      Not sure what they'll open source. The Rust code? They're calling it a DB, but they described an entire stack.

    • ericcholis 16 hours ago

      Typesense as a product has been great (hosted cluster). Customer support has been awesome as well.

  • pianoben 5 hours ago

    Lol I "love" that the first benefit this company lists in their jobs page is "In-Office Culture". Do people actually believe that having to commute is a benefit?

    • nickm12 3 hours ago

      You can't reduce the in-office or remote experience purely to commuting. It's just one aspect of how and where you work, and of work-life balance in general.

      But since you asked, yes, I actually enjoy commuting when it is less than 30 minutes each way and especially when it involves physical activities. My best commutes have been walking and biking commutes of around 20-25 minutes each way. They give me exercise, a chance to clear my head, and provide "space" between work and home.

      During 2020, I worked from home the entire time and eventually I found it just mentally wasn't good for me to work and live in the same space. I couldn't go into the office, so I started taking hour long walks at the end of every day to reset. It helped a lot.

      That said, I've also done commutes of up to an hour each way by crowded train and highway driving, and those are...not good.

      • LtWorf 3 hours ago

        "look at me! I am rich and can afford a centrally located home!"

        Well good for you!

    • 01HNNWZ0MV43FF 3 hours ago

      In-office culture would be dope if there were actual benefits to an office like maybe

      Learning from smart people, making friends, free food and drinks, a DDR machine

      My last office job had none of that. Instead it was just a depressing, scaled-up version of my home office.

      • LtWorf 3 hours ago

        My office has some nice perks!

        1. It's extremely cold and dark! I must wear extra clothes when going inside and I get depressed at wasting a day of nice weather in what looks like a WW1 bunker.

        2. Terrible accessibility for disabled people! (such as myself)

        3. Filthy toilets!

        4. Internet is slower than at home!

        5. Half the team lives somewhere else so all meetings are on teams anyway!

        6. They couldn't afford a decent headset, so I get a headache after 5 minutes, but I don't have a laptop so I can't move to a meeting room.

        HR really can't understand why, after all these great perks, I insist on wanting to work from home. I am such an illogical person!

        • throw738338 an hour ago

          A friend works at an office that allows dogs. Her workplace is one big dog toilet! She is expected to clean it (she is not a toilet cleaner). She got sexually assaulted when her boss shoved his dog into her crotch!

          There were some hospitalisations from work-related injuries... Regular bullying, threats of violence...

          Lovely office culture!

  • maelito a day ago

    I wonder if this could help Photon, the open-source ElasticSearch/OpenSearch-based search engine for OSM data.

    It's a mini-revolution in the OSM world, where most apps have a bad search experience because typos aren't handled.

    https://github.com/komoot/photon

  • pm90 21 hours ago

    Slightly meta, but I find it's a good sign that we're back to designing and blogging about in-house data storage systems/query engines again. There was an explosion of these in the 2010s, which seemed to slow down/refocus on AI recently.

    • 0xbadcafebee 11 hours ago

      It slowed down not because of AI, but because it turned out to be mostly pointless: highly specialized stacks that could usually be matched in performance by tweaking an existing system or scaling a different way.

      In-house storage/query systems that are not themselves the product being sold are NIH syndrome from a company with too many engineering resources.

    • 8n4vidtmkvmk 21 hours ago

      Is it good? What's left to innovate on in this space? I don't really want experimental data stores. Give me something rock solid.

      • cfors 20 hours ago

        I don't disagree that rock solid is a good choice, but there is a ton of innovation necessary for data stores.

        Especially in the context of embedding search, which this article is also trying to do. We need databases that can efficiently store/query high-dimensional embeddings and handle the nuances of real-world applications, such as filtered-ANN. There is a ton of innovation happening in this space, and it's crucial to powering the next-generation architectures of just about every company out there. At this point, data stores are becoming a bottleneck for serving embedding search, and I cannot overstate how important advancements here are for enabling these solutions. This is why there is an explosion of vector databases right now.
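        To make "filtered-ANN" concrete: the hard part is combining a metadata predicate with nearest-neighbor ranking. A minimal exact (non-approximate) Python sketch of the pre-filtering strategy, which any real vector index has to beat or match; all names here are illustrative, not any particular database's API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def filtered_search(query, docs, predicate, k=10):
    """Exact filtered search: apply the metadata predicate first
    ("pre-filtering"), then rank the survivors by similarity.
    ANN indexes (HNSW, IVF) approximate the ranking step; making
    them respect arbitrary filters without cratering recall or
    latency is the open problem the comment alludes to."""
    candidates = [d for d in docs if predicate(d["meta"])]
    candidates.sort(key=lambda d: cosine(query, d["vec"]), reverse=True)
    return candidates[:k]
```

        Brute force like this is fine for thousands of vectors; the innovation is needed once you have billions and the filter is different on every query.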

        This article is a great example of where the actual data-providers are not providing the solutions companies need right now, and there is so much room for improvement in this space.

        • whakim 5 hours ago

          I do not think data stores are a bottleneck for serving embedding search. I think the raft of new-fangled vector db services (or pgvector or whatever) can be a bottleneck because they are mostly optimized around the long tail of pretty small data. Real internet-scale search systems like ES or Vespa won’t struggle with serving embedding search assuming you have the necessary scale and time/money to invest in them.

      • weego 20 hours ago

        Agreed. The only caveat to that being a global rule: 'At scale in a particular niche, even an excellent generalist platform might not be good enough.'

        But then the follow-on question is: "Am I really suffering the same problems that a niche already-scaled business is suffering?"

        A question that is relevant to all decision making. I'm looking at you, people who use the entire React ecosystem to deploy a blog page.

  • softwaredoug 21 hours ago

    It’s interesting as someone in the search space how many companies are aiming to “replace Elasticsearch”

    • j_kao 20 hours ago

      Author here! We were really motivated to turn a "distributed system" problem into a "monolithic system" one from an operations perspective, and felt this was achievable with current hardware, which is why we went with in-process, embedded storage engines like RocksDB and Tantivy.

      Memory-mapping lets us get pretty far, even with global coverage. We are always able to add more RAM, especially since we're running in the cloud.

      Backfills and data updates are also trivial and can be performed in an "immutable" way without having to reason about what's currently in ES/Mongo; we just re-index everything with the same binary on a separate node and ship the final assets to S3.
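      A toy illustration of that serving pattern: build an immutable index offline, then memory-map it read-only so the OS page cache, not the application, decides what stays in RAM. The fixed-width record format here is invented for the example; the post's real assets are RocksDB/Tantivy files.

```python
import mmap
import os
import struct
import tempfile

# Offline "indexer": write sorted fixed-width (key, value) records.
records = sorted([(3, 30), (1, 10), (2, 20)])
path = os.path.join(tempfile.mkdtemp(), "index.bin")
with open(path, "wb") as f:
    for k, v in records:
        f.write(struct.pack("<II", k, v))  # two little-endian u32s

def lookup(mm, key, n):
    """Binary search over n fixed-width records in a mapped file."""
    lo, hi = 0, n - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        k, v = struct.unpack_from("<II", mm, mid * 8)
        if k == key:
            return v
        if k < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return None

# Serving side: map the immutable asset read-only. Adding RAM
# directly helps because more of the mapping stays resident.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

value = lookup(mm, 2, len(records))
```

      Because the asset never mutates in place, "backfill" is just building a new file and swapping the mapping, which is the property the comment is describing.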

    • mikeocool 21 hours ago

      In my experience, the care and feeding that goes into an Elasticsearch cluster often feels substantially higher than what the primary data store needs, which has always struck me as a little odd (particularly when the primary data store is an RDBMS).

      I'd be very happy to use simpler, more bulletproof solutions with a subset of ES's features for different use cases.

      • dewey 20 hours ago

        To add another data point: after working with ES in production for the past 10 years, I have to say that ES has never given us any headaches. We've had issues with ScyllaDB, Redis, etc., but ES just chugs along and works.

        The one issue I remember: on ES 5 we once had a problem early on where it regularly went down. It turned out that some _very long_ input was being passed into the search by a scraper, which killed the cluster.

        • heipei 17 hours ago

          I agree, and I don't get where the claims that ES is hard to operate come from. Yeah, if you allow arbitrary aggregations that exceed the heap space, or expensive queries that effectively iterate over everything, you're gonna have a bad time. But apart from that, as long as you understand your data model, your searches, and how your data is indexed, ES is absolutely rock-solid and scales and performs like a beast. We run a 35-node cluster with ~240TB of disk, 4.5TB of RAM, and about 100TB of documents, and are able to serve hundreds of queries. The whole thing does not require any maintenance apart from replacing nodes that failed from unrelated causes (hardware, hosting). Version upgrades are smooth as well.

          The only bigger issue we had was when we initially added 10 nodes to double the cluster's capacity. Performance tanked as a result, and it took us about half a day to figure out that the new nodes were using mdraid (Linux software RAID0), and as a result the block devices had a really high default read-ahead value (8192) compared to the existing nodes, which resulted in heavy read amplification. The ES manual specifically documents this, but since we hadn't run into the issue ourselves it took us a while to realise what was at fault.

        • lisbbb 12 hours ago

          The thing I like about ES: when the business comes around and adds new requirements out of nowhere, the answer is always "Yup, we can do it!" Unlike other tools, such as Cassandra, that force a data design from the get-go and make it expensive to change later on.

        • itpragmatik 20 hours ago

          How many clusters, how many indexes, and how many documents per index? Do you use self-hosted ES or AWS-managed OpenSearch?

          • dewey 20 hours ago

            12 nodes, 200 million documents / node, very high number of searches and indexing operations. Self-hosted ES on GCP managed Kubernetes.

          • binarymax 19 hours ago

            Lots of other options here if you don't like managing it yourself. You can use Elastic Cloud, Bonsai.io, and others.

            • lisbbb 12 hours ago

              A lot of places can't put their data just anywhere.

              • chatmasta 7 hours ago

                And they can pay the vendors for "bring your own cloud" or similar. If data sovereignty is important to them, then they can probably afford it. And if cost is an issue, then they wouldn't be looking at hosted solutions in the first place.

              • dewey 4 hours ago

                They manage it in your GCP project, so you can also make use of your commitments etc.

        • everfrustrated 20 hours ago

          How big is the team that looks after it?

          • dewey 20 hours ago

            Nobody is actively looking after it. Good alerting + monitoring, and if there's an alert, like a node going down because of some Kubernetes node shuffling, or a version upgrade that has to be performed, one of our few infra people will handle it.

            It's really not something that needs much attention in my experience.

      • lisbbb 12 hours ago

        I'm interested in this detail because a few years back I was involved in a major big-data project at a health insurance company. I cooked up a workable solution involving Elasticsearch, only to be shot down--it was political; they had to do it with Kafka, full stop. The problem was that Kafka wasn't very mature at the time, and it wasn't a good solution for the problem regardless. So our ES version got shelved.

      • nchmy 15 hours ago

        Check out manticoresearch - it's older than Lucene (which elasticsearch is built on), faster and simpler.

      • unsuitable 19 hours ago

        In my experience Elasticsearch lacks fundamental tooling, like a CLI that copies data between nodes.

  • trimbo 21 hours ago

    This article is lacking detail. For example: how is the data sharded, how much time passes between indexing and serving, how does it handle node failure, and other distributed-systems questions? How does the latency compare? Etc.

  • 0xbadcafebee 11 hours ago

    RocksDB is a fork of LevelDB, and LevelDB is well known for data corruption and other bugs. Both are "run at production scale", but at least back when I worked on stuff that used LevelDB, nobody talked publicly about all the toil spent cleaning up and repairing it to keep the services built on it running.

    Whenever you see an advertisement like this (these posts are ads for the companies publishing them), they will not be telling you the full truth of their new stack, like the downsides or how serious they can be (if they've even discovered them yet). It's the same for tech talks by people from "big name companies". They are selling you a narrative.

    • Jweb_Guru 11 hours ago

      RocksDB diverged from LevelDB a long time ago at this point and has had extensive work done on it by both industry and academia. It's not a toy database like LevelDB was. I can't speak to the problems they're supposedly hiding in their stack, but they are unlikely to come from RocksDB.

    • KAdot 10 hours ago

      This is not my experience. I've been running RocksDB for 4 years on thousands of machines, each storing terabytes of data, and I haven't seen a single correctness issue caused by RocksDB.

  • tracker1 18 hours ago

    Nice... it's cool to see how different companies are putting together best fit solutions. I'm also glad that they at least started out with off the shelf apps instead of jumping to something like a bespoke solution early on.

    Quickwit[1] looks interesting, found via the Tantivy reference. Kind of like ES w/ Lucene.

    1. https://github.com/quickwit-oss/quickwit

  • feverzsj an hour ago

    Sounds like all they need is Postgres or just SQLite.

  • 9cb14c1ec0 16 hours ago

    Clicked because of Elasticsearch, then wondered why I hadn't known of radar.com before. Just the autocomplete at a reasonable price that I need.

  • sophia01 a day ago

    They're not open sourcing it though?

    • j_kao 20 hours ago

      It's a bit difficult at the moment, given we have a lot of proprietary data and a lot of the logic follows it. I'm hoping we can get it to a state where it can index and serve OSM data, but that is going to take some time.

      That being said, we are currently working on open-sourcing our Google S2 Rust bindings. S2 is a geo-hashing library that makes it very easy to write a reverse geocoder, even from a point-in-polygon or polygon-intersection perspective.
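      As a rough intuition for the cell-id approach (real S2 projects points onto cube faces and orders cells along a Hilbert curve; this flat-grid toy only mirrors the idea), reverse geocoding becomes a hash lookup once polygons have been covered by cells at index time:

```python
def cell_id(lat, lng, level):
    """Toy hierarchical cell id on a flat lat/lng grid. One point maps
    to exactly one cell per level; doubling the level quarters the cell
    area, like descending the S2 hierarchy."""
    n = 1 << level  # cells per axis at this level
    x = min(int((lng + 180.0) / 360.0 * n), n - 1)
    y = min(int((lat + 90.0) / 180.0 * n), n - 1)
    return (level, x, y)

def reverse_geocode(lat, lng, table, level):
    """Point -> cell -> place. The cell -> place table is built once at
    index time by covering each polygon with cells, so the online
    lookup is an O(1) hash probe instead of point-in-polygon math."""
    return table.get(cell_id(lat, lng, level))
```

      The cases the comment mentions (point-in-polygon, polygon intersection) reduce to set operations over these cell coverings, which is what makes the library pleasant to build a geocoder on.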

      • mips_avatar 5 hours ago

        Could you write a Photon replacement if you had that? I would love to spend less per month running Photon for my project.

    • pbowyer a day ago

      Doesn't sound like it, but it's a nice writeup of the tools they stitched together. For someone to copy and open source... hopefully :)

      • ellenhp 18 hours ago

        There are a few pieces of this that rely on proprietary data, especially the fastText training step, so that's a dead end unfortunately (would love to be proven wrong). I'd consider subbing in a small BERT model with a classifier head for something FOSS without access to tons of user data, but then you lose the ability to serve high QPS.

        • mips_avatar 5 hours ago

          I guess not having that would only break forward geocoding from an address?

      • cicloid 21 hours ago

        Tempted, especially to switch to H3 instead of S2… I prototyped a similar solution a couple of weeks ago, so I could probably do a second pass.

        • ellenhp 19 hours ago

          What's wrong with S2? H3 is so much more complex for very little gain from what I can tell.

  • darqis 12 hours ago

    Searching for HorizonDB, I find a Python project on GitHub.

    I'm guessing it's closed source *aas only?

  • nekitamo 6 hours ago

    I've used RocksDB a lot in the past and am very satisfied with it. It was helpful when building a large write-heavy index where most of the data had to be compressed on disk.

    I'm wondering if anyone here has experience with LMDB and can comment on how they compare?

    https://www.symas.com/mdb

    I'm looking at it next for a project which has to cache and serve relatively small static data, and write and look up millions of individual points per minute.

  • jothirams 21 hours ago

    Is HorizonDB publicly available for us to try as well?

  • mexxixan 20 hours ago

    Would love to know how they scaled it. Also, what happens when you lose the machine and the local DB? I imagine there are backups, but they should have mentioned it. And even with backups, how do you ensure zero data loss?

  • lisbbb 12 hours ago

    I can see ditching Mongo, but what's bad about ElasticSearch? Too expensive in some way?

    Isn't RocksDB just the db engine for Kafka?

  • tinyhouse 3 hours ago

    fastText? Last time I checked, it wasn't even maintained.

  • reactordev 21 hours ago

    I mean, anything could replace elasticsearch, but can it actually?

    It sounds like they had the wrong architecture to start with and they built a database to handle it. Kudos. Most would have just thrown cache at it or fine tuned a readonly postgis database for the geoip lookups.

    Without benchmarks, these are just bold claims we have no way to verify.

  • dboreham 11 hours ago

    These are not the same kinds of things.

  • kosolam 20 hours ago

    Side note 1: ES can also be embedded in your app (on the JVM).

    Note 2: I've actually used RocksDB to solve many use cases, and it's quite powerful and very performant. If you take anything from this post, take this: it's open source and a very solid building block.

    Note 3: I would like to test-drive Quickwit as an ES replacement. Haven't got the time yet.
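    The "embedded, in-process" pattern note 2 refers to looks like this; stdlib `dbm` stands in for RocksDB here since RocksDB bindings vary by language, but the shape is the same: open a path, get/put bytes, no server.

```python
import dbm
import os
import tempfile

# The store is a library inside your process, not a network service:
# no cluster to operate, and reads are local function calls.
path = os.path.join(tempfile.mkdtemp(), "kv")
db = dbm.open(path, "c")  # "c": create if missing

db[b"user:42"] = b'{"name": "ada"}'
db[b"user:43"] = b'{"name": "lin"}'

value = db[b"user:42"]  # plain byte-oriented point lookup
db.close()
```

    RocksDB adds the things that matter at scale on top of this shape: ordered iteration, column families, compaction tuning, and compression, which is why it keeps showing up as a building block.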

    • vips7L 10 hours ago

      I really enjoy embedding things in the JVM. I run a Discord bot with a few thousand users on embedded H2. Recently I've been looking at embedding Keycloak (or something similar) for some other apps.

    • j_kao 19 hours ago

      1 - If we were sticking with the JVM, I do wonder whether Lucene would have been the right choice.

      2 - It's a great tool with a lot of tuneability and support!

      3 - We've been using it for K8s logs and OTEL (with Jaeger). Seems good so far, though I do wonder how the future of this will play out with the $DDOG acquisition.