Here is another interesting BOLT article, this one on PostgreSQL optimization:
https://vondra.me/posts/playing-with-bolt-and-postgres/
"results are unexpectedly good, in some cases up to 40%"
That's amazing.
Instruction cache and TLB thrashing is an often overlooked consequence of code bloat, and sometimes of overly aggressive micro-benchmark-driven optimization.
Reorganizing the binary is an interesting approach to minimizing that cost, but I think any performance-oriented developer should keep in mind that most projects rarely depend on a single hot loop; they depend on many systems working together and competing for space in the cache(s).
For that reason I generally use -Os instead of -O2 or -O3 in my projects, while trying to keep code bloat to a minimum.
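To make that concrete, here's a minimal sketch of the kind of thing I mean (my own illustration, assuming GCC or Clang, not anything from the article): marking rarely-taken error paths as cold lets the compiler move them away from the hot instruction stream (typically into .text.unlikely), which complements -Os for i-cache locality.

    /* Sketch only: GCC/Clang attributes, not from the article. */
    #include <stdio.h>
    #include <stdlib.h>

    /* Cold, out-of-line error path: kept off the hot loop's cache lines. */
    __attribute__((cold, noinline))
    static void fail(const char *msg)
    {
        fprintf(stderr, "fatal: %s\n", msg);
        exit(EXIT_FAILURE);
    }

    /* Hot path: the loop body stays small and contiguous. */
    static long sum_positive(const int *data, int n)
    {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            if (__builtin_expect(data[i] < 0, 0))  /* hint: almost never true */
                fail("negative value");
            sum += data[i];
        }
        return sum;
    }

    int main(void)
    {
        int data[] = {1, 2, 3, 4};
        printf("%ld\n", sum_positive(data, 4));
        return 0;
    }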
One can try it out with CachyOS/Arch:
https://cachyos.org/blog/2411-kernel-autofdo/
Note: that's AutoFDO + Propeller. This article is about BOLT.
>>BOLT has also recently added support for the kernel.
I wanted to see what CachyOS is about: https://www.phoronix.com/review/cachyos-linux-perf/5
It came in second place to ClearLinux, which is not bad.
Back in the day on the Mac, the order of source files in your project would determine locality in the binary.
If memory serves, this was with MPW C or maybe CodeWarrior.
You could see the jump (jmp) instructions use short jumps rather than long ones.
This is still relevant. I had big success writing an order optimizer for perl5.
The Metrowerks profiler and linker worked together to optimize locality in the binary; the focus was on PowerPC code. The linker could generate the static call tree, while the profiler could generate a dynamic call tree of what was actually called. The goal was to separate out the cold portions of the call tree into parts of the executable that didn't get paged in.
I worked on the Profiler and I seem to remember that Microsoft was one of the developers that put a bunch of effort into using this to optimize the Office suite on Mac. I remember the release of Word that used it was snappier.
Not only jumps. The Motorola 68000 has a relative addressing mode where any sufficiently near address can be expressed as PC+offset. The offset is 16 bits, thus covering a local range of ±32 kB, with the additional benefit of being position-independent, a valuable feature for systems without virtual memory.
Having learned to program on the Amiga before Intel-based PCs, I was shocked when I realized that the latter were missing that basic feature and that position-independent executables had to go through run-time relocation!
Same in MS-DOS: you have far and near pointer modifiers.
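For anyone who never used them, a tiny sketch of what those modifiers looked like with 16-bit DOS compilers such as Turbo C (compiler extensions, not standard C, so this is a historical illustration only):

    /* 16-bit real-mode DOS, compiler-specific keywords (Borland/Microsoft C):
     * near = 16-bit offset within the current segment,
     * far  = 32-bit segment:offset pair that can reach the whole 1 MB space. */
    char near *scratch;                            /* offset-only pointer         */
    char far  *video = (char far *)0xB8000000L;    /* text-mode VRAM at B800:0000 */

    void put_char_topleft(char c)
    {
        video[0] = c;       /* character cell              */
        video[1] = 0x07;    /* attribute: grey on black    */
    }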
Does it work with Intel Fortran-compiled code?
As long as you relink with relocations preserved in the final ELF binary, it should.
Thank you!
So am I blind or does it not mention the results? Was the result a faster kernel? How big was the difference?
In the actual conference presentation they mention ~2% efficiency gains in a few internal storage systems.
Anyone know of a Windows equivalent to BOLT?
Microsoft had internal tooling very similar to BOLT almost 20 years ago. Most of those optimizations were moved into the compiler in LTCG mode with PGO.
Some Google searching brought up this: https://learn.microsoft.com/en-us/cpp/build/profile-guided-o... I'm only reading over it now, but I'm going to test it out a bit when I can.
PGO describes using extra data to guide optimisations, but it doesn't define what those optimisations are.
Reading the link, there are several that sound like they match what BOLT is applying (Basic Block Optimization, Function Layout, Conditional Branch Optimization, and Dead Code Separation).