Can we know whether a profiler is accurate?

(stefan-marr.de)

62 points | by todsacerdoti 13 hours ago

15 comments

  • hinkley 7 hours ago

    Let me save you fifteen minutes, or the rest of your life: They aren’t.

    Profilers alter the behavior of the system. Nothing has high enough clock resolution or fidelity to make them accurate. Intel tried to solve this by building profiling into the processor, and that only helped slightly.

    Big swaths of my career, and the resulting wins, started with the question,

    “What if the profiler is wrong?”

    One of the first things I noticed is that no profilers make a big deal out of invocation count, which is a huge source of information for pushing past the tall tent poles and hotspots into productive improvement. I have seen one exception to this, but that tool became defunct sometime around 2005 and nobody has copied it since.
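
    For what it's worth, deterministic profilers do usually record raw call counts even if they don't headline them; a minimal sketch with Python's stdlib cProfile, sorting by call count instead of time (the workload and helper functions here are made up):

        import cProfile
        import pstats

        def helper(x):
            return x * x

        def workload():
            # Hypothetical hot path: each call is cheap, but the
            # invocation count is the real story.
            return sum(helper(i) for i in range(100_000))

        cProfile.run("workload()", "out.prof")
        stats = pstats.Stats("out.prof")
        # Sort by call count rather than time, so high-frequency functions
        # surface even when no single call is expensive.
        stats.sort_stats(pstats.SortKey.CALLS).print_stats(5)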

    Because of CPU caches, branch prediction, and amortized activities in languages or libraries (memory defrag, GC, flushing), many things get tagged by the profiler as expensive that are really being scapegoated: they get stuck paying someone else’s bill. They exist at the threshold where actions can no longer be deferred and have to be paid for now.
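
    A toy illustration of that bill-paying effect, using nothing beyond Python's stdlib (the spike threshold is arbitrary): list appends are amortized O(1), but a profiler charges the occasional resize entirely to whichever call happens to trigger it:

        import time

        xs = []
        spikes = []
        for i in range(1_000_000):
            t0 = time.perf_counter_ns()
            xs.append(i)  # amortized O(1): almost every append is cheap
            dt = time.perf_counter_ns() - t0
            if dt > 5_000:  # the rare appends that paid for a resize
                spikes.append((i, dt))

        # The resize was "caused" by every prior append filling the buffer,
        # but the whole cost lands on the one call that crossed the threshold.
        print(spikes[:10])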

    So what you’re really looking for in the tools is everything that looks weird. And that often involves ignoring the fancy visualization and staring at the numbers. Which are wrong. “Reading the tea leaves” as they say.

    • SerCe 6 hours ago

      > Let me save you fifteen minutes, or the rest of your life: They aren’t.

      Knowing that no profiler is perfectly accurate isn't, by itself, a very useful piece of information. However, knowing which types of profilers are inaccurate, and in which cases, is very useful, and that is exactly what this article is about. Well worth 15 minutes.

      > And that often involves ignoring the fancy visualization and staring at the numbers.

      Visualisations are incredibly important. I've debugged a large number [1] of performance issues and production incidents using async-profiler to produce Brendan Gregg's flame graphs [2]. Sure, the same data could be presented as numbers, but what I really care about most of the time when I take a CPU profile from a production instance is which part of the system was consuming most of the CPU cycles. (A sketch of the folded-stack format those graphs consume follows the links below.)

      [1]: https://x.com/SerCeMan/status/1305783089608548354

      [2]: https://www.brendangregg.com/flamegraphs.html
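
      For the curious, the folded-stack format flamegraph.pl consumes is simple enough to sketch; a toy example (the sample stacks are invented):

          from collections import Counter

          # flamegraph.pl consumes "folded" stacks: one
          # "frame;frame;frame count" line per unique stack.
          samples = [
              ("main", "handle_request", "parse_json"),
              ("main", "handle_request", "parse_json"),
              ("main", "handle_request", "render"),
              ("main", "gc"),
          ]
          folded = Counter(";".join(stack) for stack in samples)
          for stack, n in folded.most_common():
              print(f"{stack} {n}")
          # Piping this output through flamegraph.pl renders the SVG.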

    • pjc50 4 hours ago

      > no profilers make a big deal out of invocation count

      This is where we get into sampling vs. tracing profilers. Tracing is even more disruptive to the runtime, but gives you more useful information. It can point you at places where your O-notation is not what you expected it to be, a common cause of things which grind to a halt after great performance on small examples.
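
      A minimal sketch of the tracing side in Python (the dedupe and contains functions are invented stand-ins for an accidental quadratic); sys.setprofile fires on every call, so the overhead is real, but the counts are exact:

          import sys
          from collections import Counter

          calls = Counter()

          def hook(frame, event, arg):
              # Tracing: invoked on every function call, unlike a sampler.
              if event == "call":
                  calls[frame.f_code.co_name] += 1

          def contains(haystack, needle):
              return needle in haystack  # O(n) scan of a list

          def dedupe(items):
              out = []
              for x in items:              # looks linear...
                  if not contains(out, x):
                      out.append(x)        # ...but is quadratic overall
              return out

          sys.setprofile(hook)
          dedupe(list(range(2_000)))
          sys.setprofile(None)

          # A sampler mostly reports "time in dedupe"; the trace shows
          # contains() ran 2,000 times, exposing the unexpected O-notation.
          print(calls.most_common(3))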

      It gets even worse in distributed systems, which is partly why microservice-oriented things "scale" at the expense of a lot more hardware than you'd expect.

      It's definitely a specialized discipline, whole-system optimization, and I wish I got to do it more often.

    • geokon 5 hours ago

      I'm pretty sure performance counters count accurately. They're a bit finicky to use, but they don't alter CPU execution.

      Last I had to deal with them, which was eons ago, higher-end CPUs like Xeons had more counters, and more useful ones.

      I'm sure there are plenty of situations where they're insufficient, but it's absurd to paint the situation as completely hopeless.

      • mrjay42 4 hours ago

        Last time I checked, Intel's MSRs (https://en.wikipedia.org/wiki/Model-specific_register) are what allow Intel PCM (https://github.com/intel/pcm) to work, and they are indeed used to profile, or "measure performance" (sorry if my vocabulary is not the most accurate). When I last read the Intel PCM code, it still relied on hardcoded values for each CPU which are as close to reality as possible but are still an estimation.

        That doesn't mean you get wrong measurements; it means there's a level of inaccuracy that has to be accepted.

        BTW, I am aware that Intel PCM is not a profiler and more of a measurement tool; however, you CAN use it to 'profile' your program and see how it behaves in terms of compute and memory utilization (with deep analysis of cache behavior: hits, misses, etc.).

    • jstanley 2 hours ago

      If you think it's difficult to optimise performance with the numbers the profiler gives you, try doing it without them!

    • whatever1 4 hours ago

      Heisenberg principle but for programming

  • comex 12 hours ago

    Another option is to use the "processor trace" functionality available in Intel and Apple CPUs. This can give you a history of every single instruction executed and timing information every few instructions, with very little observer effect. Probably way more accurate than the approach in the paper, though you need the right kind of CPU and you have to deal with a huge amount of data being collected.

    • hinkley 7 hours ago

      Those definitely make them less wrong, but still leave you hanging because most functions have side effects and those are exceedingly difficult to trace.

      The function that triggers GC is typically not the function that made the mess.

      The function that stalls on L2 cache misses often did not cause the miss.

      Just using the profiler can easily leave 2-3x performance on the table, and in some cases 4x. And in a world where autoscaling exists and computers run on batteries, that’s a substantial delta.

      And the fact is that, with few exceptions, nobody after 2008 really knows me as the optimization guy, because I don’t glorify it. I’m the super-clean-code guy. If you want fast gibberish, one of those guys can come after me for another 2x, if you or I don’t shoo them away. Now you’re creeping into order-of-magnitude territory. And all after the profiler stopped feeding you easy answers.

    • scottgg 10 hours ago

      Do you have a source for “with very little observer effect”? I don’t know any better; it just seems like a big assumption that the CPU can emit all this extra stuff without behaving differently.

      • PennRobotics 2 hours ago

        Trace data are sent through a large/fast port (PCIe or a 60-pin connector) and captured by fast dedicated hardware at something like 10 GB per second. The trace data are usually compressed and often only need to indicate whether a branch is taken or not taken (TNT packets on x86; Arm has ETM, but the trace path is similar enough), with a little timing, exception/interrupt, and address overhead. The bottleneck is streaming and storing trace data from a hardware debugger (its internal buffer usually holds under half a second at max throughput), although on Intel processors you can further filter by application via CR3 matching. (Regarding the last five years of Apple: I'm not sure you'll find any info on Apple's debuggers and modifications to the Arm architecture. Ever.)

        If you encounter a slowdown using RTIT or IPT (the old and new names for Intel's hardware trace), it's usually a single-digit percentage. (The sources here are Intel's vague documentation claims plus anecdotes: Magic Trace, Hagen Paul Pfeifer, Andi Kleen, Prelude Research.)

        Decoding happens later and is significantly slower, and this is where the article's focus, JIT compilation, might be problematic for hardware trace (instruction data might change or disappear, and mapping machine-code output back to each Java-level instruction can be tricky).

      • achierius 10 hours ago

        It's not an assumption; it's based on claims made by CPU manufacturers. It's possible to get it down to within 1-2% overhead.

        Intuitively this works because the hardware can just spend some extra area to stream the info off on the side of the datapath; it doesn't need to be in the critical path.

  • satisfice 11 hours ago

    In the early nineties I was the test manager for the Borland Profiler. I didn’t supervise the tester of the profiler closely enough, and discovered only when customers complained that its results were off by a quarter second on every single measurement reported.

    It turned out that the tester had not been looking closely at the output, other than to verify that it consisted of numbers. He didn’t have any ideas about how to test it, so he opted for mere aesthetics.

    This is one of many incidents that convinced me to look closely and carefully at the work of testers I depend upon. Testing is so easy to fake.
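
    A sanity check of the sort that would have caught that offset; a sketch using Python's cProfile against a sleep of known duration (the tolerance is arbitrary):

        import cProfile
        import pstats
        import time

        def known_work():
            time.sleep(0.5)  # ground truth: should measure ~0.5 s

        prof = cProfile.Profile()
        prof.enable()
        known_work()
        prof.disable()

        stats = pstats.Stats(prof)
        # total_tt is the profiler's total measured time; a constant
        # quarter-second offset per measurement would blow this tolerance.
        assert abs(stats.total_tt - 0.5) < 0.05, stats.total_tt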

    • dboreham 11 hours ago

      In my experience a very large proportion of all automated testing is like this if you go poking into what it does.

      • satisfice 9 hours ago

        My experience is the same.