15 comments

  • 6keZbCECT2uB an hour ago

    I like the project: taking it from refresh-induced tail latency to racing threads assigned to addresses that are de-correlated by memory channel. Connecting this to a lookup table that is broadcast across memory channels so the lookup paths can race makes for a nice narrative, but framing this as reducing tail latency confused me, because I was expecting a join where a single reader gets the faster of the two racers.

    From a narrative standpoint, I agree it makes more sense to focus on a duplicated lookup table and fastest-wins; from an engineering standpoint, though, framing it in terms of channel de-correlated reads has more possibilities. For example, if you need to evaluate multiple ML models in parallel to get a result, then by intentionally partitioning your models by channel you could ensure that a given model reads only fast data or only slow data. ML models might not be that interesting, though, since they are good candidates for being resident in L3.
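    For concreteness, a minimal sketch of the fastest-wins race being described, assuming two identical replicas of the table; this is a toy illustration, and the hard part (placing each replica on a different memory channel) is not shown:

    ```cpp
    #include <atomic>
    #include <cstdint>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Hedged lookup: each replica would ideally live on a different memory
    // channel (placement is assumed, not implemented here). Two threads read
    // the same index from different copies; the first to finish wins.
    uint64_t hedged_read(const std::vector<uint64_t>& a,
                         const std::vector<uint64_t>& b,
                         size_t idx) {
        std::atomic<bool> done{false};
        std::atomic<uint64_t> result{0};
        auto race = [&](const std::vector<uint64_t>& v) {
            uint64_t x = v[idx];  // the (possibly refresh-stalled) DRAM access
            bool expected = false;
            if (done.compare_exchange_strong(expected, true))
                result.store(x, std::memory_order_release);
        };
        std::thread t1([&] { race(a); });
        std::thread t2([&] { race(b); });
        t1.join();
        t2.join();
        return result.load(std::memory_order_acquire);
    }

    int main() {
        std::vector<uint64_t> a(1024), b(1024);
        for (size_t i = 0; i < a.size(); ++i)
            a[i] = b[i] = i * 3;  // replicas must hold identical data
        printf("%llu\n", (unsigned long long)hedged_read(a, b, 7));
        return 0;
    }
    ```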

  • ysleepy 2 hours ago

    Loved the details about how memory access actually maps addresses to channels, ranks, banks, and so on; this is rarely discussed.

    Not sure how this works for larger data structures, but my first thought was that this should be implemented in microcode or as a dedicated instruction.

    Most computation is not that jitter-sensitive; human perception doesn't operate at the nano- to microsecond scale. But it could be a cool gadget for things like dtrace or interrupt handlers.

  • TeapotNotKettle an hour ago

    Very interesting work.

    But practically speaking, in a real application, isn’t any performance benefit going to be lost to the reduced cache hit rate caused by the larger working set? Or are the reads of all but one of the replicas uncached?

    Apologies if I am missing something.

    • 6keZbCECT2uB an hour ago

      Once your cache hit ratio for some data structure drops below 0.1%, I'd rather have 75% less tail latency even if it reduces the hit rate further.

  • inetknght an hour ago

    @lauriewired, I think the most interesting thing that I learned from this is that memory refresh causes read/write stalls. For some reason I thought it was completely asynchronous.

    But otherwise, nice work tying all the concepts together. You might want to get some better model trains though.

  • addaon 2 hours ago

    This addresses the “short long tail” (known, bounded variance due to the multiple physical operations underlying a single logical memory op), but for hard real-time applications the “long long tail” of correctable-ECC-error-and-scrub may be the critical case.

  • jagged-chisel 2 hours ago

    My understanding is that this makes a trade-off: using more space to get shorter access times. Do I have that right?

    OT: Tail Slayer. Not Tails Layer. My brain took longer to parse that than I’d have wanted.

    • thfuran an hour ago

      Yeah, it improves mean (but not median) access time by using more memory.
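
      A toy simulation (the latency numbers are made up, not from the project) illustrates why taking the min of two independent reads cuts the tail while barely moving the median:

      ```cpp
      #include <algorithm>
      #include <cstdio>
      #include <random>
      #include <vector>

      // Toy model: a read usually costs 100 "cycles", but 2% of the time it
      // lands behind a refresh and costs 400. Hedging races two independent
      // copies and takes whichever finishes first (the min).
      int main() {
          std::mt19937 rng(42);
          std::uniform_real_distribution<double> u(0.0, 1.0);
          auto draw = [&] { return u(rng) < 0.02 ? 400.0 : 100.0; };

          const int N = 100000;
          std::vector<double> single(N), hedged(N);
          for (int i = 0; i < N; ++i) {
              single[i] = draw();
              hedged[i] = std::min(draw(), draw());  // fastest replica wins
          }

          auto pct = [](std::vector<double> v, double p) {
              std::sort(v.begin(), v.end());
              return v[(size_t)(p * (v.size() - 1))];
          };
          printf("single: p50=%.0f p99=%.0f\n", pct(single, 0.50), pct(single, 0.99));
          printf("hedged: p50=%.0f p99=%.0f\n", pct(hedged, 0.50), pct(hedged, 0.99));
          return 0;
      }
      ```

      Both strategies have a p50 of 100, but the single read's p99 sits at the 400-cycle stall, while the hedged read only stalls when both copies are unlucky at once (probability 0.02² = 0.04%).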

  • jeffbee 3 hours ago

    Neither this readme nor this header seems to discuss the trade-off in any way, which is that you're paying with median latency, by the same factor, to buy lower tail latency. Nobody thinks of a load as taking 800 cycles, but that is the baseline load latency here.

    Also, having sacrificed my own mental health to watch the disgustingly self-promoting hour-long video that announces this small git commit, I can confidently say that "Graviton doesn't have any performance counters" is one of the wrongest things I've heard in a long time.

    Overall, I give it an F.

    Anyway if you want to hide memory refresh latency, IBM zEnterprise is your platform. It completely hides refresh latency by steering loads to the non-refreshing bank, and it only costs half the space, not up to 92% of your space like this technique.

    • lauriewired 2 hours ago

      Nope, there isn’t a tradeoff; median latency isn’t affected. I don’t think you understand the code. The p50 is identical between a single read and the hedged strategy.

      The clflush is there because the technique targets data that will miss the cache anyway. If your working set fits in L1, you don’t need this.

      Also, AWS Graviton instances absolutely do not expose per-channel memory-controller PMU counters. That’s why you have to use timing-based channel discovery.

      The IBM z-system is neat! But my technique works on commodity hardware in userspace, and you can sacrifice only half the space if you accept 2-way instead of 8+-way hedging. It’s entirely up to you how many channel copies you use.

      Your reply was quite rude, but I hope this is informative.
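
      A back-of-the-envelope on the space overhead discussed above, assuming k-way hedging keeps k full copies of the data, so (k − 1)/k of the footprint is duplicates:

      ```cpp
      #include <cstdio>

      int main() {
          // With k channel copies, only 1 of k holds unique data,
          // so the "sacrificed" fraction is (k - 1) / k.
          for (int k : {2, 8, 12}) {
              double waste = 100.0 * (k - 1) / k;
              printf("%2d-way hedging: %.0f%% of the space holds duplicates\n", k, waste);
          }
          return 0;
      }
      ```

      That matches the figures in the thread: 2-way costs half the space, and around 12-way you're up near 92%.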

      • hedgehog 2 hours ago

        I was just trying to reconcile his reply with the charts. Have you tested how this scales down to smaller systems, such as on the management side of a network switch?

      • jeffbee 2 hours ago

        I won't be tone-policed by a person who is clearly trying to mislead and confuse people. I leave it to the other HNers to read your benchmark code and see for themselves that it is an exercise in absurdity: a workaround for its own library that doesn't measure anything except that, with N threads, by the laws of probability, reading timestamps as fast as possible and cramming them into a vector yields lower measurements at higher N.

        • zidders 38 minutes ago

          You were rude. Be nice or don't post.

    • PunchyHamster 2 hours ago

      The video was about how rowhammer works; the lib was a byproduct.