I'm actually surprised that gcc doesn't do this! If there's one thing compilers do well, it's pattern-matching on code and replacing it with more efficient forms; just try pasting things from Hacker's Delight and watch it canonicalise them to the equivalent, fastest machine code.
This particular case isn't really due to pattern matching -- it's a result of a generic optimization that evaluates the exit value of an add recurrence using binomial coefficients (even if the recurrence is non-affine). This means it will work even if the contents of the loop get more exotic (e.g. if you perform the sum over x * x * x * x * x instead of x).
More similar optimizations: https://matklad.github.io/2025/12/09/do-not-optimize-away.ht...
That one is called scalar evolution; LLVM abbreviates it as SCEV. The implementation is relatively complicated.
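To make the "more exotic" case mentioned above concrete, here is a minimal sketch (my own code and naming, not from the article); per that comment, scalar evolution can still evaluate the exit value of the recurrence in closed form, so clang at -O2 should be able to remove the loop entirely (easy to check on Compiler Explorer).

    #include <cstdint>

    // Non-affine case: summing x^5 instead of x. The running total is a
    // degree-6 polynomial in the trip count, which SCEV can still express
    // as an add recurrence and evaluate at the loop's exit.
    std::uint64_t sum_of_fifth_powers(std::uint64_t n) {
        std::uint64_t total = 0;
        for (std::uint64_t x = 0; x < n; ++x) {
            total += x * x * x * x * x;
        }
        return total;
    }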
This exact content was posted a few months ago. Is this AI or just a copy paste job?
Those are just basic and essential optimizations, nothing too surprising here.
The sum of integers is actually a question I ask developers in interviews (works well from juniors to seniors), with the extra problem of what happens if we were to use floating-point instead of integers.
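For context, here is a minimal sketch of the kind of loop such a question is presumably about (my own code and naming, not the exact wording used in the interviews):

    #include <cstdint>

    // Naive O(n) sum of the integers 0..n-1.
    // The question: what does an optimizing compiler do with this?
    // clang at -O2 is known to replace the loop with the closed form
    // n*(n-1)/2, i.e. O(1) straight-line code; per this thread, gcc
    // apparently does not.
    std::uint64_t sum_below(std::uint64_t n) {
        std::uint64_t total = 0;
        for (std::uint64_t i = 0; i < n; ++i) {
            total += i;
        }
        return total;
    }

    // The follow-up part of the question: what changes if the
    // accumulator is a double instead of an integer?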
To those who don't know about compiler optimisation, the replacement with a closed form is rather surprising, I'd say, especially if someone with Matt Godbolt's experience, of all people, says it is surprising.
Also, this series is targeted at more of a beginner audience for compilers, so it's likely to be surprising to that audience, even if not to you.
For Matt, the creator of Compiler Explorer, those are surprises.
For you, they are essentials.
You and the juniors you hire must have a deeper knowledge than him.
You don't have to be an expert in compiler design to make godbolt in fairness, although he does know a lot.
I spend a lot of time looking at generated assembly and there are some more impressive ones.
As I said, you must have a deeper knowledge than him.
It would be great if you shared it with the world like Matt does instead of being smug about it.
Why would he need this information? It's not pertinent to running this service.
To answer the second part of the question: there is no closed-form solution. Since floating-point math is not associative, there's no O(1) optimization that preserves the exact output of the O(n) loop.
Technically there is a closed-form solution as long as the answer is less than 2^24 for a float32 or 2^53 for a float64, since below those limits all integers can be represented exactly by a floating-point number, and integer addition with floating-point numbers is identical as long as the result stays below those caps. I doubt a compiler would catch that one, but it technically could do the optimisation and get the exact same bits in the answer. If the result were initialised to a non-integer number, of course, this would no longer be true.
A very good point! I didn’t think of that.
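To put some numbers on the representability point above, here is a small sketch (my own example, assuming IEEE-754 float32): integer accumulation in a float stays bit-exact while the running total is at most 2^24, and stalls immediately past it.

    #include <cstdint>
    #include <cstdio>

    int main() {
        // Every integer up to 2^24 is exactly representable in float32,
        // so repeatedly adding 1.0f tracks the integer count exactly.
        float total = 0.0f;
        for (std::uint32_t i = 0; i < (1u << 24); ++i) {
            total += 1.0f;
        }
        std::printf("%.1f (expected %u)\n", total, 1u << 24); // 16777216.0, exact

        // One step further, 2^24 + 1 is not representable: the addition
        // rounds back down and the running total gets stuck.
        std::printf("%.1f\n", 16777216.0f + 1.0f);            // still 16777216.0
    }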
I'm curious what exactly you ask here. I consider myself to be a decent engineer (for practical purposes), but without a CS degree, and I likely would not have passed that question.
I know compilers can do some crazy optimizations, but I wouldn't have guessed they'd transform something from O(n) to O(1). Having said that, I still don't feel this has much relevance to my actual job for the most part. Such performance knowledge seems to be so abstracted away from actual programming by database systems, or managed offerings like Spark and Snowflake, that unless you intend to work on those systems this knowledge isn't that useful (being aware that it happens can be, though, for sure).
He thinks it makes him look clever, or more likely subtly wants people to think "wow, this guy thinks something is obvious when Matt Godbolt found it surprising".
This kind of question is entirely useless in an interview. It's just a random bit of trivia that a potential hire either happens to have come across or happens to remember from math class.
What type of positions are you interviewing for? Software development is a big tent and I don't think this would be pertinent in a web dev interview, for example.
Nothing is surprising once you know the answer. It takes some mental gymnastics to put yourself in someone else's shoes, before they discovered it, and realise it's not so "basic".
https://xkcd.com/1053/
It’s neat. I wonder if anyone has attempted detecting a graph coloring problem and replacing it with a constant.
If you now have a function where you call this one with an integer literal, you will end up with a fully inlined integer answer!
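A hedged sketch of that point (my own code and function names, not from the article): once the loop has a closed form, inlining plus constant folding can reduce a call with a literal argument to a single constant.

    #include <cstdint>

    // The same kind of summation loop discussed in the article.
    static std::uint32_t sum_below(std::uint32_t n) {
        std::uint32_t total = 0;
        for (std::uint32_t i = 0; i < n; ++i) total += i;
        return total;
    }

    std::uint32_t answer() {
        // With optimizations on, a compiler that does the closed-form rewrite
        // can inline sum_below and fold the whole call to the constant 4950
        // (0 + 1 + ... + 99), e.g. `mov eax, 4950; ret` on x86-64.
        return sum_below(100);
    }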