The article posts a table of latency distributions, but the latencies are simulated based on the assumption that latencies are lognormal. I would be interested to read an article comparing the simulation to actual measurements.
The assumption that latencies are lognormal is a useful approximation but not really true. In reality you will see a lot of multi-modality (e.g. cache hits vs misses, internal timeouts). Requests for the same key can have correlated latency.
I think the distribution he uses is pretty close to the paper he links "Exploiting Cloud Object Storage for High-Performance Analytics" https://www.durner.dev/app/media/papers/anyblob-vldb23.pdf
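To make the multi-modality point concrete, here is a minimal sketch (assuming numpy; the mode weights and latency parameters are invented purely for illustration, not measured from S3) comparing the tail of a single lognormal against a cache-hit / cache-miss / timeout mixture:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Pure lognormal: median ~30 ms, moderate spread (parameters are illustrative).
lognormal = rng.lognormal(mean=np.log(30), sigma=0.5, size=n)

# Multi-modal mixture: 80% cache hits (~5 ms), 19% misses (~60 ms),
# 1% internal timeouts/retries (~1 s). Weights are made up for illustration.
modes = rng.choice(3, size=n, p=[0.80, 0.19, 0.01])
mixture = np.where(
    modes == 0, rng.lognormal(np.log(5), 0.3, n),
    np.where(modes == 1, rng.lognormal(np.log(60), 0.4, n),
             rng.lognormal(np.log(1000), 0.2, n)))

for name, sample in [("lognormal", lognormal), ("mixture", mixture)]:
    p50, p99, p999 = np.percentile(sample, [50, 99, 99.9])
    print(f"{name:10s} p50={p50:7.1f}ms  p99={p99:7.1f}ms  p99.9={p999:7.1f}ms")
```

The two can look similar around the median while the mixture's far tail is dominated by the rare timeout mode, which is exactly the kind of thing a single fitted lognormal hides.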
The very first sentence of this article contains an error:
> Over the past 19 years (S3 was launched on March 14th 2006, as the first public AWS service), object storage has become the gold standard for storing large amounts of data in the cloud.
While it’s true that S3 is the gold standard, it was not the first AWS service, which was in fact SQS in 2004.
author here - I took that quote from this[1] blog post by an AWS VP/distinguished engineer; the use of "public service" might have a loosely defined meaning in this context.
[1] https://www.allthingsdistributed.com/2025/03/in-s3-simplicit...
Interesting source - looks like it means “GA” service, rather than “public” per se. The SQS beta was also available to the public.
I thought S3 was first as well.
This is the source Wikipedia uses: https://web.archive.org/web/20041217191947/http://aws.typepa...
What I’ve always been curious about is if you can help the S3 query optimizer* in any way to use specialized optimizations. For example if you indicate the data is immutable[1] does the lack of a write path allow further optimization under the hood? Replicas could in theory serve requests without coordination.
*I’m using “query optimizer” rather broadly here. I know S3 isn’t a DBMS.
[1] https://aws.amazon.com/blogs/storage/protecting-data-with-am...
It is kind of crazy how much work is done to mitigate the very high latency of S3 when we have NVMe SSDs with access latencies of microseconds.
Yeah, engineering high-scale distributed data systems on top of the cloud providers is a very weird thing at times.
But the reality is that as large enterprises move to the cloud but still need lots of different data systems, it is really hard not to play the cloud game. Buying bare metal and Direct Connect with AWS seems like a reasonable solution... but it will add years to your timeline to sell to any large company.
So instead, you work within the constraints the CSPs have. In AWS, that means guaranteeing durability across zones, and at scale, that means either huge cross-AZ network costs or offloading it to S3.
You would think this massive cloud would remove constraints, and in some ways that is true, but in others you are even more constrained, because you don't directly own any of it and are at the whims of the unit costs of 30 AWS teams.
But it is also kind of fun.
If cross-AZ bandwidth were more reasonably priced, it would enable a lot of design options, like running something like MinIO on nothing but directly attached NVMe instance store volumes.
> Roughly speaking, the latency of systems like object storage tend to have a lognormal distribution
I would dig into that. This might (or might not) be something you can do something about more directly.
That's not really an "organic" pattern, so I'd guess some retry/routing/robustness mechanism is not working the way it should. And, it might (or might not) be one you have control over and can fix.
To dig in, I might look at what's going on at the packet/ack level.
I don't know what you mean by the word "organic", but I think lognormal distributions are very common and intuitive: whenever the true generative mechanism is “lots of tiny, independent percentage effects piling up,” you’ll see a log‑normal pattern.
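As a quick illustration of that mechanism (a toy sketch, assuming numpy; the base time and factor ranges are arbitrary): if each request's latency is a base time multiplied by many small, independent slowdown factors, then log(latency) is a sum of many small terms, and the central limit theorem pushes it toward a normal shape, i.e. the latency itself toward lognormal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Latency = base time * many small, independent slowdown factors
# (queueing, GC pauses, network jitter, ...). All numbers are illustrative.
base_ms = 10.0
factors = rng.uniform(0.95, 1.15, size=(100_000, 30))  # 30 tiny percentage effects
latency = base_ms * factors.prod(axis=1)

# log(latency) is a sum of 30 small terms, so it should look roughly normal.
log_lat = np.log(latency)
skew = float(((log_lat - log_lat.mean()) ** 3).mean() / log_lat.std() ** 3)
print("skew of log(latency):", round(skew, 3))          # close to 0 => ~normal
print("p50/p99 (ms):", np.round(np.percentile(latency, [50, 99]), 1))
```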
You can think of a network generally as a bunch of uniform nodes with uniform connections each with a random chance of failure, as a useful first approximation.
But that’s not what they really are.
If you’re optimizing or troubleshooting it’s usually better to look at what’s actually happening. Certainly before implementing a fix. You really want to understand what you’re fixing, or you’re kind of doing a rain dance.
How do you do that for an abstract service like S3? I see how you could do that for your own machines.
The hedging strategies all seem to assume that latency for an object is an independent variable.
However, I would assume there is dependency?
E.g. if a node holding a copy of the object is down and traffic needs to be re-routed to a slower node, then regardless of how many requests I send, the latency will still be high?
(I am genuinely curious if this is the case)
S3's scale is quite massive, with each object spread across a large number of nodes via erasure coding.
So while you could get unlucky and be routed to the same bad node / bad rack, the reality is that it is quite unlikely.
And while the testing here is simulated, this is a technique that is used with success.
Source: working on these sorts of systems
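A rough way to see why independence matters here (a toy simulation, assuming numpy; the lognormal parameters are illustrative, not real S3 measurements): a hedged request takes the faster of two attempts, which flattens the tail when the attempts are independent and buys nothing when they are fully correlated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def lognormal_latency(size):
    # Illustrative parameters only; not measured S3 numbers.
    return rng.lognormal(mean=np.log(30), sigma=0.8, size=size)

single = lognormal_latency(n)

# Hedged request, independent backends: fire a second attempt and take
# the faster of the two (ignoring the hedging delay for simplicity).
independent = np.minimum(lognormal_latency(n), lognormal_latency(n))

# Hedged request, fully correlated backends (e.g. both copies behind the
# same slow node): the second attempt is just as slow, so hedging buys nothing.
correlated = np.minimum(single, single)

for name, sample in [("single", single),
                     ("hedged, independent", independent),
                     ("hedged, correlated", correlated)]:
    print(f"{name:22s} p99 = {np.percentile(sample, 99):6.1f} ms")
```

In the independent case the hedged p99 is roughly the single-request p90, which is the whole point of the technique; in the correlated case it is unchanged.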
It’s not addressed directly, but I do think the article implies you hope your request latencies are not correlated. It provides a strategy for helping to achieve that:
> Try different endpoints. Depending on your setup, you may be able to hit different servers serving the same data. The less infrastructure they share with each other, the more likely it is that their latency won’t correlate.
S3 is a bad choice if you need low latency to begin with.
They have both SSD- and platter-based storage now, so that's not a true statement anymore.
The problem of S3 latency was never about HDD vs. SSD to begin with.
This is a big problem with the so-called modern "data pipeline": public cloud providers will say anything and a lot of people will believe it.
No, sorry.
Network-based storage is a bad choice if you need low latency, period. You’re not going to beat data locality.
Lots of areas left for exploration.