22 comments

  • JoelJacobson 2 months ago

    Here is another interesting BOLT article, this one on PostgreSQL optimization:

    https://vondra.me/posts/playing-with-bolt-and-postgres/

    "results are unexpectedly good, in some cases up to 40%"

    • pfdietz 2 months ago

      That's amazing.

  • stephc_int13 2 months ago

    Instruction cache and TLB thrashing is an often-overlooked consequence of code bloat, and sometimes of overly aggressive micro-benchmark-driven optimization.

    Reorganizing the binary is an interesting approach to minimizing the cost, but I think any performance-oriented developer should keep in mind that most projects rarely depend on a single hot loop; they depend on many systems working together and competing for space in the cache(s).

    For that reason, I generally use -Os instead of -O2 or -O3 in my projects, and try to keep code bloat to a minimum. A rough sketch of what I mean is below.
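
    A hypothetical sketch of keeping cold code out of the hot path's cache footprint, using GCC/Clang attributes as an example (BOLT and PGO derive the same kind of layout automatically from profile data):

      /* Hypothetical example: keep a rarely taken error path out of the
         hot function's instruction-cache footprint. With GCC/Clang, a
         cold function is typically emitted into .text.unlikely. */
      #include <stdio.h>
      #include <stdlib.h>

      __attribute__((cold, noinline))
      static void die(const char *msg)
      {
          fprintf(stderr, "fatal: %s\n", msg);   /* cold path */
          exit(1);
      }

      __attribute__((hot))
      static long sum(const int *v, size_t n)
      {
          if (v == NULL)
              die("null input");                 /* rarely taken */
          long total = 0;
          for (size_t i = 0; i < n; i++)         /* hot loop stays compact */
              total += v[i];
          return total;
      }

      int main(void)
      {
          int v[4] = {1, 2, 3, 4};
          printf("%ld\n", sum(v, sizeof v / sizeof v[0]));
          return 0;
      }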

  • BSDobelix 2 months ago

    One can try it out with CachyOS/Arch:

    https://cachyos.org/blog/2411-kernel-autofdo/

  • OnlyMortal 2 months ago

    Back in the day on the Mac, the order of source files in your project would determine locality in the binary.

    If memory serves, this was with MPW C or maybe CodeWarrior.

    You could see the jump (jmp) instructions use short jumps rather than long ones.

    • rurban 2 months ago

      This is still relevant. I had big success writing an order optimizer for perl5.

    • fsflyer 2 months ago

      The Metrowerks profiler and linker worked together to optimize locality in the binary; the focus was on PowerPC code. The linker could generate the static call tree, but the profiler could generate a dynamic call tree of what was actually called. The goal was to separate the cold portions of the call tree into parts of the executable that never got paged in.

      I worked on the Profiler, and I seem to remember that Microsoft was one of the developers that put a lot of effort into using this to optimize the Office suite on the Mac. I remember that the release of Word that used it was snappier.

    • teo_zero 2 months ago

      Not only jumps. The Motorola 68000 has a relative addressing mode in which any sufficiently near address can be expressed as PC+offset. The offset is 16 bits, covering a local range of ±32 kB, with the additional benefit of being position-independent, a valuable feature for systems without virtual memory.

      Having learned to program on the Amiga before Intel-based PCs, I was shocked when I realized that the latter lacked that basic feature, and that position-independent executables had to go through run-time relocation!
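
      A hypothetical sketch of the same base+offset idea at the C level: storing offsets instead of absolute pointers, so the table contains no addresses that need fixing up when the image is loaded at a different base.

        /* Hypothetical example: an offset-based string table.
           blob + offs[i] plays the role of PC + offset, so the table
           itself needs no load-time relocation. */
        #include <stdio.h>
        #include <stdint.h>

        static const char blob[] = "red\0green\0blue";
        static const uint8_t offs[] = { 0, 4, 10 };   /* offsets into blob */

        int main(void)
        {
            for (size_t i = 0; i < sizeof offs; i++)
                puts(blob + offs[i]);   /* base + offset addressing */
            return 0;
        }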

    • Iwan-Zotow 2 months ago

      Same in MS-DOS: you had far and near pointer modifiers.

  • kardos 2 months ago

    Does it work with Intel Fortran-compiled code?

    • kijiki 2 months ago

      As long as you relink with relocations preserved in the final ELF binary, it should.

  • yxhuvud 2 months ago

    So am I blind or does it not mention the results? Was the result a faster kernel? How big was the difference?

    • jeffbee 2 months ago

      In the actual conference presentation they mention ~2% efficiency gains in a few internal storage systems.

  • vsskanth 2 months ago

    Anyone know of a Windows equivalent to BOLT?

    • neerajsi 2 months ago

      Microsoft had internal tooling very similar to BOLT almost 20 years ago. Most of those optimizations were moved into the compiler in LTCG mode with PGO.

    • Cieric 2 months ago

      Some Google searching brought up this: https://learn.microsoft.com/en-us/cpp/build/profile-guided-o... I'm only reading over it now, but I'm going to test it out a bit when I can.

      • dwattttt 2 months ago

        PGO describes using extra data to guide optimisations, but it doesn't define what those optimisations are.

        Reading the link, there are several that sound like they match what BOLT is applying (Basic Block Optimization, Function Layout, Conditional Branch Optimization, and Dead Code Separation).