Replacing a cache service with a database

(avi.im)

63 points | by avinassh 15 hours ago

60 comments

  • miggy 12 hours ago

    We had a critical service that often got overwhelmed, not by one client app but by different apps over time. One week it was app A, the next week app B, each with its own buggy code suddenly spamming the service.

    The quick fix suggested was caching, since a lot of requests were for the same query. But after debating, we went with rate limiting instead. Our reasoning: caching would just hide the bad behavior and keep the broken clients alive, only for them to cause failures in other downstream systems later. By rate limiting, we stopped abusive patterns across all apps and forced bugs to surface. In fact, we discovered multiple issues in different apps this way.

    Takeaway: caching is good, but it is not a replacement for fixing buggy code or misuse. Sometimes the better fix is to protect the service and let the bugs show up where they belong.
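
    For illustration, a minimal sketch of the kind of per-client rate limiting described above (a token bucket; the capacity and refill numbers are made up, and a real deployment would need shared state across instances):

        import time
        from collections import defaultdict

        class TokenBucket:
            """Per-client token bucket: refuse requests once a client burns its budget."""

            def __init__(self, capacity=100.0, refill_per_sec=10.0):
                self.capacity = capacity              # max burst size (illustrative)
                self.refill_per_sec = refill_per_sec  # sustained rate (illustrative)
                self.tokens = defaultdict(lambda: capacity)
                self.last_seen = defaultdict(time.monotonic)

            def allow(self, client_id):
                now = time.monotonic()
                elapsed = now - self.last_seen[client_id]
                self.last_seen[client_id] = now
                self.tokens[client_id] = min(
                    self.capacity, self.tokens[client_id] + elapsed * self.refill_per_sec
                )
                if self.tokens[client_id] >= 1:
                    self.tokens[client_id] -= 1
                    return True
                return False  # caller answers 429, so the buggy client is surfaced, not hidden

        limiter = TokenBucket()
        if not limiter.allow("app-a"):
            print("429 Too Many Requests")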

    • Alex_L_Wood 8 hours ago

      It's funny how I encountered a problem that went exactly the opposite way! We initially introduced a rate limiter that was adequate at the time, but with the product scaling up it stopped being adequate, and any failures with 429 were either ignored or closed as client bugs. Only after some time did we realize that the rate of requests scaled up approximately with the rate of product growth. A quick fix was to simply remove the limiter, but after a couple of times when the DB decided to take a nap after being overwhelmed, we added a caching layer.

      Just goes to show that there is no silver bullet - context, experience, and a good amount of gut feeling are paramount.

      • spyspy 6 hours ago

        Something that was drilled into me early in my career was that you cannot expect your cache to be up 100% of the time. The logical extension of that is your main DB needs to be able to handle 100% of your traffic at a moment’s notice. Not only has this kind of thinking saved my ass on several occasions, but it’s also actually kept my code much cleaner. I don’t want to say rate limiters and circuit breakers are the mark of bad engineering, butttt they’re usually just good engineering deferred.

    • andersmurphy 11 hours ago

      I guess CPUs are pretty buggy with all their caches. If only the hardware people could fix their buggy systems.

      In all seriousness, sometimes a cache is what you need. Inline caching is a classic example.

      • WillDaSilva 8 hours ago

        There are times when a cache is appropriate, but I often find that it's more appropriate for the cache to be on the side of whoever is making all the requests. This isn't applicable when that is e.g. millions of different clients all making their own requests, but rather when we're talking about one internal service putting heavy load on another one.

        The team with the demanding service can add a cache that's appropriate for their needs, and will be motivated to do so in order to avoid hitting the rate limit (or reduce costs, which should be attributed to them).

        • spyspy 6 hours ago

          You cannot trust your clients. Period. It doesn’t matter if they’re internal or external. If you design (and test!) with this assumption in mind, you’ll never have a bad day. I’ve really never understood why teams and companies have taken this defensive stance that their service is being “abused” despite having nothing even resembling an SLA. It seemed pretty inexcusable to not have a horizontally scalable service back in 2010 when I first started interning at tech companies, and I’m really confused why this is still an issue today.

          • WillDaSilva 6 hours ago

            I fully agree. The rate limits are how you control the behaviour of the clients. My suggestion was to leave caching to the clients, which they may want to do in order to avoid hitting the rate limit.

          • pixl97 an hour ago

            >why teams and companies have taken this defensive stance that their service is being “abused” despite having nothing even resembling an SLA.

            I mean, because bad code on a fast client system can cause a load higher than all other users put together. This is why half the internet is behind something like Cloudflare these days. Limiting, blocking, and banning have to be baked in.

    • spyspy 7 hours ago

      You can never trust clients to behave. If your goal is to reduce infra cost, sure, rate limiting is an acceptable answer. But is it really that hard to throw on a cache and provision your service to be horizontally scalable?

      • miggy 5 hours ago

        Scaling matters, but why pay for abusive clients or bots? Adding a cache is easy; the hard parts are invalidation, sync, and the thundering herd. Use it if the product needs it, not as a band-aid.
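
        For the thundering-herd part, a common mitigation is a single-flight fill: only one caller recomputes a missing entry while the rest wait for it. A rough in-process sketch (the load_from_db callable and the TTL are placeholders):

            import threading
            import time

            _cache = {}   # key -> (value, expires_at)
            _locks = {}   # key -> lock guarding recomputation of that key
            _guard = threading.Lock()

            def get(key, load_from_db, ttl=60):
                entry = _cache.get(key)
                if entry and entry[1] > time.monotonic():
                    return entry[0]                      # fresh hit
                with _guard:
                    lock = _locks.setdefault(key, threading.Lock())
                with lock:                               # only one thread refills per key
                    entry = _cache.get(key)
                    if entry and entry[1] > time.monotonic():
                        return entry[0]                  # another thread already refilled it
                    value = load_from_db(key)            # the expensive query runs once
                    _cache[key] = (value, time.monotonic() + ttl)
                    return value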

  • chamomeal 8 hours ago

    Hey OP, you may have seen this already, but in case you didn’t see my other comment, you should definitely check out this talk by Martin Kleppmann.

    https://youtu.be/fU9hR3kiOK0?si=t9IhfPtCsSyszscf

    It covers Apache Samza, which I didn’t totally grasp, but it seems similar to what you’re talking about here.

    He talks about how, if you use an event stream as your source of truth instead of a database and you have a sufficiently powerful stream processor, you can define views on that data by consuming stream events.

    The end result is kind of like an auto-updating cache with no invalidation issues or race conditions. Need a new view on the data? Just define it and run the entire event stream through it. Once the stream is processed, that view of the data is perpetually accurate and up-to-date.

    I’m not a database guy and most of this stuff is over my head, but I loved this talk and I think you should check it out! It’s the first thing I thought of when I read your post.

    • ajcp 6 hours ago

      Thank you for sharing. I thoroughly enjoyed the talk, and I'm likewise not a "database guy".

  • zeras 13 hours ago

    I think a fundamental mistake I see many developers make is using caching to try to solve problems rather than to improve efficiency.

    It's the equivalent of adding more RAM to fix poor memory management or adding more CPUs/servers to compensate for resource heavy and slow requests and complex queries.

    If your application requires caching to function effectively, then you have a core issue that needs to be resolved. If you don't address that issue, caching will eventually become the problem as your application grows more complex and active.

    • chamomeal 13 hours ago

      Idk I think caching is a crucial part of many well-designed systems. There’s a lot of very cache-able data out there. If invalidation events are well defined or the data is fine being stale (week/month level dashboards, for example), that’s a fantastic reason to use a cache. I’d much rather just stuff those values in a cache than figure out any other more complicated solution.

      I also just think it’s a necessary evil of big systems. Sometimes you need derived data. You can even think about databases as a kind of cache: the “real” data is the stream of every event that ever updated data in the database! (Yes, this is stretching the meaning of cache lol)

      However I agree that caching is often an easy bandaid for a bad architecture.

      This talk on Apache Samza completely changed how I think about caching and derived data in general: https://youtu.be/fU9hR3kiOK0?si=t9IhfPtCsSyszscf

      And this interview has some interesting insights on the problems that caching faces at super large scale systems (twitter specifically): https://softwareengineeringdaily.com/2023/01/12/caching-at-t...

      • hinkley 11 hours ago

        There are a lot of things necessary to be a successful human but doing them without doing the fundamentals just makes you a monkey in a suit.

        Caching belongs at the end of a long development arc. And it will be the end whether you want it to or not. Adding caching is the beginning of the end of large architectural improvements, because caches jam up the analysis and testing infrastructure. Everything about improving or adding features to the code slows down, eventually to a crawl.

      • zeras 9 hours ago

        Caching is definitely a useful and even a key component in producing efficient, high-performance applications and services.

        I think the mistake is not using caching, but rather using it too soon in the development process.

        There are times when caching is a requirement because there is simply no way to provide efficient performance without it, but I think too many times developers jump straight to caching without thinking because it solves potential problems for them before they happen.

        The real problem comes later, though, at scale, when caching can no longer compensate for the development inefficiencies.

        Now the developers have to start rewriting core code, which takes time to complete and test thoroughly, and/or the engineers have to figure out a way to throw more resources at the problem.

    • hinkley 12 hours ago

      > It's the equivalent of adding more RAM to fix poor memory management

      No, it’s ten times worse than that. Adding RAM doesn’t make the task of fixing the memory management problems intrinsically harder. It just makes the problem bigger when you do fix it.

      Adding caching to your app makes all of the tools used for detecting and categorizing performance issues much harder to use. We already have too many developers and “engineers” who balk at learning more than the basics of using these tools. Caching is like stirring up sediment in a submarine cave. Now only the most disciplined can still function and often just barely.

      When you don’t have caches, data has to flow along the call tree. So if you need a user’s data in three places, that data either flows to those three or you have to look it up three times, which can introduce concurrency issues if the user metadata changes in the middle of a request. But because it’s inefficient there is clear incentive to fix the data propagation issues. Fixing those issues will make testing easier because now the data is passed in instead of having to mock the lookup code.

      Then you introduce caching. Now the incentive is mostly gone, since you will only improve cold start performance. And now there is a perverse incentive to never propagate the data again. You start moving backward. Soon there are eight places in the code that use that data, because looking it up was “free” and they are all detached from each other. And now you can’t even turn off the cache, and cache traffic doesn’t tell you what your costs are.

      And because the lookup is “free” the user lookup code disappears from your perf data and flame graphs. Only a madman like me will still tackle such a mess, and even I have difficulty finding the motivation.

      For these reasons I say with great confidence and no small authority: adding caching to your app is the last major performance improvement most teams will ever see. So if you reach for it prematurely, you’re stuck with what you’ve got. Now a more astute competitor can deliver a product that is faster, cheaper, or both, and eats your lunch, and your team will swear there is nothing they can do about it because the app is already as fast as they can make it, and here are the statistics that “prove” it.

      Friends don’t let friends put caches on immature apps.

      • lemmsjid 11 hours ago

        I’d say a useful way of thinking about caching is through the lens of the CAP theorem. You are facing a situation where compute requirements exceed the bounds of a single process. There are a variety of things you can do here, all with consequences for the Consistency aspect of your data. Caching and horizontal scaling are two such strategies, so look to vertical scaling or efficiencies in data modeling first.

        I like your comment btw. I’d add Observability to CAP to incorporate what you’re saying.

    • mannyv 5 hours ago

      If your database is slow because it's on spinning disks, then a cache will speed up access.

      That's not a fundamental mistake, and there's very little you can do about that from an efficiency point of view.

      It's easy to forget that there was a world without SSDs, high speed pipes, etc - but it actually did exist. And that wasn't so long ago either.

      And of course sometimes putting data nearer to the user actually makes sense...like the Netflix movie boxes inside various POPs or CDNs. Bandwidth and latency are actual factors for many applications.

      That said, most applications probably should investigate adding indexes to their databases (or noSQL databases) instead of adding a cache layer.
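
      As a rough sketch of that last point (table, column, and index names are invented; the idea is just to read the plan before reaching for a cache):

          import psycopg  # assumes psycopg 3 and a reachable Postgres; names are made up

          with psycopg.connect("dbname=app", autocommit=True) as conn, conn.cursor() as cur:
              cur.execute(
                  "EXPLAIN ANALYZE "
                  "SELECT * FROM orders WHERE customer_id = 42 "
                  "ORDER BY created_at DESC LIMIT 20"
              )
              print("\n".join(row[0] for row in cur.fetchall()))  # look for a seq scan here

              # A composite index matching the filter and sort often removes the
              # slow path entirely, which is cheaper to own than a cache layer.
              cur.execute(
                  "CREATE INDEX IF NOT EXISTS orders_customer_recent "
                  "ON orders (customer_id, created_at DESC)"
              )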

    • cortesoft 11 hours ago

      > If your application requires caching to function effectively then you have a core issue that needs to be resolved, and if you don't address that issue then caching will become the problem eventually as your application grows more complex and active.

      I don’t think this is always true. Sometimes your app simply has data that takes a lot of computation to generate but doesn’t need to be generated often. Any way you solve this can be described as a ‘cache’, even if you are just storing calculations in your main database. That doesn’t mean your application has a fundamental design flaw; it could mean your use case has a fundamental cache requirement.

    • jiggawatts 7 hours ago

      Not to mention latency! Caching does nothing to fix the latency of “misses”, which means any app that uses a caching layer to paper over a bad design will forever have a terrible P99 (or even P90) latency.

      “But, but, when I reload the page now it’s fast! I fixed it!”
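
      A quick back-of-the-envelope version of that point (hit rate and latencies are invented numbers):

          import random

          hit_rate, hit_ms, miss_ms = 0.95, 5, 400   # illustrative numbers only
          samples = sorted(
              hit_ms if random.random() < hit_rate else miss_ms for _ in range(100_000)
          )
          p50 = samples[len(samples) // 2]
          p99 = samples[int(len(samples) * 0.99)]
          # p50 ~5 ms looks great; p99 ~400 ms, because more than 1% of requests miss.
          print(f"p50={p50}ms p99={p99}ms")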

  • simonw 14 hours ago

    A friend of mine once argued that adding a cache to a system is almost always an indication that you have an architectural problem further down the stack, and you should try to address that instead.

    The more software development experience I gain the more I agree with him on that!

    • hinkley 11 hours ago

      When all else fails, use caches. If all else hasn’t failed, it will once you use caches.

    • jedberg 11 hours ago

      If you have no cache, and your first thought is "this needs a cache", you're probably right. Chances are you need to optimize a query or storage pattern. But you're thinking like an engineer. It may be true that there is a "more correct" engineering solution, but adding a cache might be the most expedient solution.

      But after you'd done all the optimizations, there is still a use case for caches. The main one being that a cache holds a hot set of data. Databases are getting better at this, and with AI in everything, latency of queries is getting swamped by waiting for the LLM, but I still see caches being important for decades to come.

    • barrkel 13 hours ago

      Caches suck because invalidation needs to be sprinkled all over the place in what is often an abstraction-violating way.

      Then there's memoization, often a hack for an algorithm problem.

      I once "solved" a huge performance problem with a couple of caches. The stain of it lies on my conscience. It was really an admission of defeat: I gave up on reorganizing the logic to eliminate the need for the cache. I know that the invalidation logic will have caused bugs for years. I'm sure an engineer will curse my name for as long as that code lives.

    • jmull 14 hours ago

      That's true in my experience.

      Caches have perfectly valid uses, but they are so often used in fundamentally poor ways, especially with databases.

    • DrBazza 13 hours ago

      I'd argue the database falls into that category.

      The two questions no one seems to ask are 'do I even need a database?', and 'where do I need my database?'

      There are alternate data storage 'patterns' that aren't databases. Though ultimately some sort of (Structured) query language gets invented to query them.

    • IgorPartola 13 hours ago

      If you think of it as a cache, yes. If you think of it as another data layer then no.

      For example, let’s say that every web page your CMS produces is created using a computationally expensive compilation. But the final product is more or less static and only gets updated every so often. You can basically have your compilation process pull the data from your source of truth, such as your RDBMS, and then store the final page (or large fragments of it) in something like MongoDB. In other words, the cache replacement happens at generation time and not on demand. This means there is always a cached version available (though possibly slightly stale), and it is always served out of a very fast data store without expensive computation. I prefer this style of caching to on-demand caching because it means you avoid cache invalidation issues AND the thundering herd problem.

      Of course this doesn’t work for every workflow, but it can get you quite far. And yes, this example can also be sort of solved with a static site generator, but look beyond that at things like document fragments, etc. This works very well for dynamic content where the read-to-write ratio is high.
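
      A bare-bones sketch of that generation-time flow (the render step and the document store here are stand-ins for whatever you actually use):

          def render_page(article):
              # stand-in for the computationally expensive compilation step
              return f"<html><h1>{article['title']}</h1>{article['body']}</html>"

          def on_article_saved(article, page_store):
              # Write path: recompute once and overwrite the stored copy, so reads
              # never trigger the expensive path and there is no invalidation logic.
              page_store[f"page:{article['id']}"] = render_page(article)

          def serve_page(article_id, page_store):
              # Always served from the precomputed (possibly slightly stale) copy.
              return page_store[f"page:{article_id}"]

          store = {}  # stand-in for MongoDB/Redis/any fast document store
          on_article_saved({"id": 1, "title": "Hello", "body": "..."}, store)
          print(serve_page(1, store))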

      • hinkley 11 hours ago

        No.

        It’s not a data layer, it’s global shared state. Global shared state always has consequences. Sometimes the consequences are worth the trouble. But it is trouble.

        If you think about Source of Truth and System of Record, a cache is neither of those, and sits between them. There are a lot of problems you can fix instead by improving the SoT or SoR situation in that area of the code.

        • IgorPartola 10 hours ago

          Hard disagree. Having used the architecture I described in large practical deployments it works way better than what you are making it out to be. But I don’t know the domain you work in and your constraints so it is possible that for you it would not work.

        • convolvatron 11 hours ago

          in particular, the database already _has_ a cache. usually it's on the other side of the evaluation, at the block layer. which means that you have to pay a cost to get to it (the network protocol, and the evaluation).

          if you use materialized views, that surfaces exactly what you want in a cache, except here the view's consistency with the underlying data is maintained. that's hugely important.

          that leaves us with the protocol. prepared statements might help. now we really should be about the same as the bump-on-the-wire cache. that doesn't get us the same performance as an in-process cache, but we didn't have to sacrifice any consistency or add any additional operational overhead to get it.
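
          a rough postgres-flavored sketch of that route (names and the connection string are invented; note that in stock postgres the materialized view needs an explicit refresh, while IVM systems keep it current automatically):

              # assumes psycopg 3; autocommit because REFRESH ... CONCURRENTLY
              # cannot run inside a transaction block
              import psycopg

              with psycopg.connect("dbname=app", autocommit=True) as conn, conn.cursor() as cur:
                  cur.execute("""
                      CREATE MATERIALIZED VIEW IF NOT EXISTS hot_user_summary AS
                      SELECT u.id, u.name, count(o.id) AS order_count
                      FROM users u LEFT JOIN orders o ON o.user_id = u.id
                      GROUP BY u.id, u.name
                  """)
                  cur.execute(
                      "CREATE UNIQUE INDEX IF NOT EXISTS hot_user_summary_id "
                      "ON hot_user_summary (id)"
                  )
                  # refresh on whatever cadence fits; CONCURRENTLY keeps readers unblocked
                  cur.execute("REFRESH MATERIALIZED VIEW CONCURRENTLY hot_user_summary")
                  # a prepared statement trims the parse/plan cost of the repeated lookup,
                  # leaving mostly the network round trip
                  cur.execute(
                      "PREPARE get_summary (bigint) AS "
                      "SELECT * FROM hot_user_summary WHERE id = $1"
                  )
                  cur.execute("EXECUTE get_summary (42)")
                  print(cur.fetchone())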

      • lemmsjid 12 hours ago

        Quite agree, this is how I explain it to people. When you think of cache as another derived dataset then you start to realize that the issues caches bring to architectures are often the result of not having an agreement between the business and engineering on acceptable data consistency tolerances. For example, outside the world of caching, if you email users a report, and the data is embedded in the email, then you are accepting that the user will see a snapshot of data at a particular time. In many cases this is fine, even preferred. Sometimes not, and instead you link the user to a realtime dashboard instead.

        Pretty much every view the user sees of data should include an understanding of how consistent that data is with the source of truth. Issues with caching (besides basic bugs) often arise when a performance problem comes up and people slap in a cache without renegotiating how the end user should expect the data to look relative to its upstream state.

        • hinkley 11 hours ago

          The cache is an incomplete dataset by definition. It’s not a data set, it’s a cache of a data set. You can never ensure you get a clean read of the system state from the cache because it’s never in sync and has gaps.

          • IgorPartola 10 hours ago

            What about materialized views? CPU cache? Only the Sith deal in absolutes :)

            • hinkley 9 hours ago

              A CPU cache means that the same value read twice will return the same value, with some exceptions for NUMA and multiple threads. But two reads of a cache make no such guarantees.

              There is a vast number of undiagnosed race conditions in modern code caused by cache eviction in the middle of 'transactions' under high system load.

      • chamomeal 13 hours ago

        I already typed a longer comment elsewhere that I don’t feel like reiterating but I agree with you. Caching is a natural outcome of not having infinite time and memory for running programs. Sometimes it’s a bandaid over bad design, but often it’s a responsible decision to take load off of other important systems

      • cpursley 13 hours ago

        Lost me at DumpsterFireDB as cache. But if the goal is to create an even worse architecture that's even harder to maintain, go for it.

        • IgorPartola 12 hours ago

          Sorry you lack the imagination to substitute your preferred data store into what I wrote. Hope it gets easier.

          • cpursley 11 hours ago

            I'll never have enough imagination to believe mongo is a good solution. Postgres has jsonb, vector type; redis is a fine-enough cache. Why use a known junk "database" when there are superior solutions and truly open source?

            • IgorPartola 10 hours ago

              I didn’t say you have to use it. I said you could. Or any other data store that fits your use case. I used a MongoDB instance back in 2012 in a serious production environment in this exact way and it worked flawlessly, while Postgres was what gave us trouble (Postgres has since added features that would have made those issues disappear, but back then it didn’t have built-in replication, for example).

              But again this is not an endorsement of MongoDB. I wouldn’t use it today but I did use it successfully and that company and tech stack sold for quite a bit of money and the software still runs, though I’m not sure on what stack. Again, if you are stuck on this one part of my comment… can’t help you.

    • jitl 14 hours ago

      Yeah my architecture problem is that Postgres RDS EBS storage is slow as dog. Sure our data won’t go poof if we lose an instance but it’s so slow.

      (It’s not really my architecture problem. My architecture problem is that we store pages as grains of sand in a db instead of in a bucket, and that we allow user defined schemas)

    • tootie 9 hours ago

      Most of the time I use caching it's to cut down on network round trips. If I'm fetching data on every end-user request for something that only updates daily or weekly, caching it is a no-brainer. Edge caching for content sites is also a no-brainer. Caching something computationally expensive may be fishy, but it may also be useful. Even if you are just papering over some inefficient process, that's not necessarily a sin. Sometimes you have to be pragmatic.

    • AtheistOfFail 13 hours ago

      I disagree. For large search pages where you're building payloads from multiple records that don't change often, it can be beneficial to use a cache. Your cache lets the most common results be fetched less often and returned faster.

  • tengbretson 14 hours ago

    Maybe these distinctions are useful to people in some situations, but to me this reads like wondering whether we can replace houses with buildings.

    • jayd16 13 hours ago

      More like they're stocking the fridge and wondering what living next to the market is like.

  • eatonphil 13 hours ago

    Many of these points are not compelling to me when 1) you can filter both rows and columns (in Postgres logical replication, anyway [0]) and 2) SQL views exist.

    [0] https://www.postgresql.org/docs/current/logical-replication-...
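
    Roughly what that row/column filtering looks like on the publisher side (names are invented; row filters and column lists need Postgres 15+):

        import psycopg  # assumes psycopg 3 and sufficient privileges; names are invented

        with psycopg.connect("dbname=app", autocommit=True) as conn, conn.cursor() as cur:
            cur.execute("""
                CREATE PUBLICATION hot_items_pub
                FOR TABLE items (id, name, price)   -- column list: only these columns replicate
                WHERE (popularity > 1000)           -- row filter: only the hot subset
            """)
            # On the subscriber side you would then run something like:
            #   CREATE SUBSCRIPTION hot_items_sub
            #     CONNECTION 'host=primary dbname=app'
            #     PUBLICATION hot_items_pub;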

    • avinassh 12 hours ago

      Is it possible to create a filter that can work over a complex join operation?

      That's what IVM systems like Noria can do. With application + cache, the application stores the final result in the cache. So, with these new IVM systems, you get that precomputed data directly from the database.

      Views in Postgres are not incrementally maintained, right? So every small delta would require a refresh of the entire view.

  • stevoski 7 hours ago

    Something missing from the article:

    For the type of cache usage described in the article, cache lookups are almost always O(1). This is because a cache value is retrieved for a specific key.

    Whereas db queries are often more complicated and therefore take longer. Yes, plenty of db queries are fetching a row by a key, and therefore fast. But many queries use a join and a somewhat complicated WHERE clause.

  • hoppp 15 hours ago

    The cache service is a database of sorts that usually stores key-value pairs.

    The difference is in persistence, scaling, and read/write permissions.

    • barrkel 13 hours ago

      No, what makes a cache a cache is invalidation. A cache is stale data. It's a latent out of date calculation. It's misinformation that risks surviving until it lies to the user.

      • jedberg 10 hours ago

        This is true but a lot of the trouble in invalidation can be avoided by using smarter cache keys.

        For example, on reddit, fully rendered comments are cached, so that the renderer doesn't have to redo its work. But the cache key includes the date of the last edit on the comment, which is already known when requesting the value from the cache. In this way, you never have to invalidate that key, because editing the comment makes a new key. The old one will just get evicted eventually.
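
        A tiny sketch of that pattern (assuming a Redis-style client with get/set and that the last-edit time comes back with the comment row; names are illustrative):

            def comment_cache_key(comment_id, edited_at):
                # The last-edit time is part of the key, so an edit produces a new key
                # instead of requiring invalidation; old entries simply age out.
                return f"rendered-comment:{comment_id}:{edited_at}"

            def get_rendered_comment(comment, cache, render):
                key = comment_cache_key(comment["id"], comment["edited_at"])
                html = cache.get(key)
                if html is None:
                    html = render(comment)   # the expensive markdown -> HTML render
                    cache.set(key, html)
                return html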

    • Supermancho 13 hours ago

      i.e. a cache is a database; the difference is features and usage.

      • hinkley 11 hours ago

        A database is usually a union of all of the questions that can be asked about a topic. A cache by definition is a subset of that. Subsets are not the sets. And if you treat them as if they are, which 90% of people do, you’re gonna have a bad time.

  • xixixao 14 hours ago

    This is a good deep dive into the complexity around caching: https://stack.convex.dev/caching-in

    Having caching by default (like in Convex) is a really neat simplification to app development.

  • jamesblonde 12 hours ago

    Some of these questions are informed by the Redis/DynamoDB or Postgres/MySQL world the author seems to inhabit.

    Why would you want to do this? "I don’t know of any database built to handle hundreds of thousands of read replicas constantly pulling data."

    If you want an open-source database with Redis latencies to handle millions of concurrent reads, you can use RonDB (disclaimer, I work on it).

    "Since I’m only interested in a subset of the data, setting up a full read replica feels like overkill. It would be great to have a read replica with just partial data. It would be great to have a read replica with just partial data."

    This is very unclear. Redis returns complete rows because it does not support pushdown projections or ordered indexes. RonDB supports these and distribution-aware partition-pruned index scans (start the transaction on the node/partition that contains the rows found via the index).

    Reference:

    https://www.rondb.com/post/the-process-to-reach-100m-key-loo...

  • mannyv 8 hours ago

    Instead of Redis etc. you could get away with static files served via a CDN.

    Again, you should test. But the main reason, imo, for Redis is connections and speed, not just speed.

  • gethly 12 hours ago

    Event-sourcing is a powerful tool that helps with exactly this. Why spin up a cache server when you can spin up another read DB instance for the same price and get unlimited capabilities...

  • jayd16 13 hours ago

    So I guess this guy wants Firestore (or the OSS equivalent)?

  • cbsmith 15 hours ago

    So close to getting push driven architecture...

  • phoronixrly 14 hours ago

    Rails also has a take on this: https://github.com/rails/solid_cache