Achieving lower latencies with S3 object storage

(spiraldb.com)

30 points | by znpy 2 days ago

23 comments

  • anorwell 2 days ago

    The article posts a table of latency distributions, but the latencies are simulated under the assumption that they are lognormal. I would be interested to read a follow-up comparing the simulation to actual measurements.

    The assumption that latencies are lognormal is a useful approximation but not really true. In reality you will see a lot of multi-modality (e.g. cache hits vs misses, internal timeouts). Requests for the same key can have correlated latency.
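
    A quick, self-contained sketch of that point (every number here is invented for illustration): fit a single lognormal to a bimodal hit/miss mixture and the middle of the distribution can match while the tail diverges.

      import math, random, statistics

      random.seed(42)
      N = 100_000

      # Hypothetical "real" latencies: 80% cache hits (~10ms median),
      # 20% misses (~80ms median).
      real = [random.lognormvariate(math.log(10), 0.3) if random.random() < 0.8
              else random.lognormvariate(math.log(80), 0.4)
              for _ in range(N)]

      # Fit one lognormal to the same samples via log-moments.
      logs = [math.log(x) for x in real]
      mu, sigma = statistics.fmean(logs), statistics.stdev(logs)
      model = [random.lognormvariate(mu, sigma) for _ in range(N)]

      def pct(xs, q):
          return sorted(xs)[int(q * len(xs))]

      for q in (0.50, 0.90, 0.99):
          print(f"p{int(q * 100)}: real={pct(real, q):6.1f}ms"
                f"  lognormal={pct(model, q):6.1f}ms")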

  • jen20 2 days ago

    The very first sentence of this article contains an error:

    > Over the past 19 years (S3 was launched on March 14th 2006, as the first public AWS service), object storage has become the gold standard for storing large amounts of data in the cloud.

    While it’s true that S3 is the gold standard, it was not the first AWS service, which was in fact SQS in 2004.

  • n_u 2 days ago

    What I’ve always been curious about is if you can help the S3 query optimizer* in any way to use specialized optimizations. For example if you indicate the data is immutable[1] does the lack of a write path allow further optimization under the hood? Replicas could in theory serve requests without coordination.

    *I’m using “query optimizer” rather broadly here. I know S3 isn’t a DBMS.

    [1] https://aws.amazon.com/blogs/storage/protecting-data-with-am...

  • UltraSane 2 days ago

    It is kind of crazy how much work is done to mitigate the very high latency of S3 when we have NVMe SSDs with access latencies of microseconds.

    • addisonj 2 days ago

      Yeah, engineering high-scale distributed data systems on top of the cloud providers is a very weird thing at times.

      But the reality is that as large enterprises move to the cloud while still needing lots of different data systems, it is really hard not to play the cloud game. Buying bare metal and using Direct Connect with AWS seems like a reasonable solution... But it will add years to your timeline to sell to any large company.

      So instead, you work within the constraints the CSPs have. In AWS, that means guaranteeing durability across zones, and at scale that means either huge cross-AZ network costs or offloading it to S3.

      You would think this massive cloud would remove constraints, and in some ways that is true, but in others you are even more constrained because you don't directly own any of it and are at the whims of the unit costs of 30 AWS teams.

      But it is also kind of fun

      • UltraSane 2 days ago

        If cross-AZ bandwidth were more reasonably priced, it would enable a lot of design options, like running something like MinIO on nothing but directly attached NVMe instance store volumes.

  • jmull 2 days ago

    > Roughly speaking, the latency of systems like object storage tend to have a lognormal distribution

    I would dig into that. This might (or might not) be something you can do something about more directly.

    That's not really an "organic" pattern, so I'd guess some retry/routing/robustness mechanism is not working the way it should. And, it might (or might not) be one you have control over and can fix.

    To dig in, I might look at what's going on at the packet/ack level.
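
    Before reaching for packet captures, a quicker first pass is to collect real per-request timings yourself. A sketch assuming boto3 and a bucket/key you control (both names hypothetical), splitting headers-received time from full-body time:

      import time
      import boto3

      BUCKET, KEY = "my-bucket", "my-key"  # hypothetical names
      s3 = boto3.client("s3")

      ttfb, total = [], []
      for _ in range(100):
          t0 = time.perf_counter()
          resp = s3.get_object(Bucket=BUCKET, Key=KEY)  # returns once headers arrive
          t1 = time.perf_counter()
          resp["Body"].read()  # stream the full payload
          t2 = time.perf_counter()
          ttfb.append(t1 - t0)
          total.append(t2 - t0)

      for name, xs in (("ttfb ", ttfb), ("total", total)):
          xs.sort()
          print(name, "p50=%.0fms p99=%.0fms" % (xs[49] * 1e3, xs[98] * 1e3))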

    • nkmnz 2 days ago

      I don't know what you mean by the word "organic", but I think lognormal distributions are very common and intuitive: whenever the true generative mechanism is “lots of tiny, independent percentage effects piling up,” you’ll see a log‑normal pattern.
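
      A toy demonstration of that mechanism (the numbers are arbitrary): take the product of many small, independent percentage effects; the raw product comes out right-skewed while its log is symmetric, which is the lognormal signature.

        import math, random

        random.seed(0)

        def pct(xs, q):
            return sorted(xs)[int(q * len(xs))]

        # Each sample is the product of 50 small, independent
        # multiplicative effects of a few percent each.
        samples = []
        for _ in range(50_000):
            x = 1.0
            for _ in range(50):
                x *= 1.0 + random.uniform(-0.05, 0.05)
            samples.append(x)

        logs = [math.log(x) for x in samples]

        # Raw product: right-skewed (p90 sits farther from p50 than p10 does).
        print("raw: p10=%.3f p50=%.3f p90=%.3f"
              % (pct(samples, .1), pct(samples, .5), pct(samples, .9)))
        # Its log: symmetric, i.e. approximately normal.
        print("log: p10=%+.3f p50=%+.3f p90=%+.3f"
              % (pct(logs, .1), pct(logs, .5), pct(logs, .9)))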

      • jmull 2 days ago

        You can think of a network generally as a bunch of uniform nodes with uniform connections each with a random chance of failure, as a useful first approximation.

        But that’s not what they really are.

        If you’re optimizing or troubleshooting it’s usually better to look at what’s actually happening. Certainly before implementing a fix. You really want to understand what you’re fixing, or you’re kind of doing a rain dance.

    • pyfon 2 days ago

      How do you do that for an abstract service like S3? I see how you could do that for your own machines.

  • tossandthrow 2 days ago

    The hedging strategies all seem to assume that latency for an object is an independent variable.

    However, I would assume there's dependence?

    E.g. if a node holding a copy of the object is down and traffic needs to be re-routed to a slower node, the latency will stay high regardless of how many requests I send?

    (I am genuinely curious whether this is the case)

    • addisonj 2 days ago

      S3's scale is quite massive, with each object spread across a large number of nodes via erasure coding.

      So while you could get unlucky and be routed to the same bad node / bad rack, the reality is that it is quite unlikely.

      And while the testing here is simulated, this is a technique that is used with success.

      Source: working on these sorts of systems
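
      For intuition, a toy illustration of the erasure-coding idea (a single XOR parity shard; real systems typically use stronger Reed-Solomon-style codes with several parity shards, and this is not S3's actual scheme): any one lost shard can be rebuilt from the others, so a read never depends on one particular node.

        from functools import reduce

        def xor(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def encode(data: bytes, k: int) -> list[bytes]:
            # Split into k equal data shards plus one XOR parity shard.
            shard_len = -(-len(data) // k)  # ceiling division
            padded = data.ljust(shard_len * k, b"\0")
            shards = [padded[i * shard_len:(i + 1) * shard_len] for i in range(k)]
            return shards + [reduce(xor, shards)]

        def recover(shards: list[bytes], lost: int) -> bytes:
            # XOR of all surviving shards reconstructs the missing one.
            return reduce(xor, [s for i, s in enumerate(shards) if i != lost])

        shards = encode(b"hello object storage", k=4)
        assert recover(shards, lost=2) == shards[2]  # lost shard rebuilt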

    • n_u 2 days ago

      It’s not addressed directly, but I do think the article implies you hope your request latencies are not correlated. It provides a strategy for helping to achieve that:

      > Try different endpoints. Depending on your setup, you may be able to hit different servers serving the same data. The less infrastructure they share with each other, the more likely it is that their latency won’t correlate.
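
      For concreteness, a minimal asyncio sketch of that kind of hedging (fetch_from is a stand-in for a real S3 GET, and the 100ms hedge deadline is an invented number): send a backup request to a second endpoint only if the first is slow, then take whichever answers first.

        import asyncio
        import random

        async def fetch_from(endpoint: str, key: str) -> str:
            # Stand-in for a real GET; simulate lognormal-ish latency (~50ms median).
            await asyncio.sleep(random.lognormvariate(-3.0, 1.0))
            return f"{key} via {endpoint}"

        async def hedged_get(key: str, endpoints: list[str],
                             hedge_after: float = 0.100) -> str:
            tasks = [asyncio.create_task(fetch_from(endpoints[0], key))]
            # Wait briefly for the primary; hedge to a second endpoint if it's slow.
            done, _ = await asyncio.wait(tasks, timeout=hedge_after)
            if not done:
                tasks.append(asyncio.create_task(fetch_from(endpoints[1], key)))
                done, _ = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
            winner = done.pop()
            for t in tasks:  # cancel the slower request to avoid wasted work
                if t is not winner:
                    t.cancel()
            return winner.result()

        print(asyncio.run(hedged_get("my-key", ["endpoint-a", "endpoint-b"])))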

  • up2isomorphism 2 days ago

    S3 is a bad choice if you need low latency to begin with.

    • mannyv 2 days ago

      They have both SSD- and platter-based storage now, so that's not a true statement anymore.

      • up2isomorphism 2 days ago

        The problem of S3 latency was never about HDD or SSD to begin with.

        This is a big problem with the so-called modern “data pipeline”: public cloud providers will say anything and a lot of people will believe it.

    • sgarland 2 days ago

      Network-based storage is a bad choice if you need low latency, period. You’re not going to beat data locality.

  • jmpman 2 days ago

    Lots of areas left for exploration.