Int a = 5; a = a++ + ++a; a =? (2011)

(gynvael.coldwind.pl)

88 points | by e-topy 2 days ago

140 comments

  • susam 21 hours ago

    The code in the post seems very similar to the one in my own post from 2010: https://susam.net/sequence-points.html

      int a = 5;
      a += a++ + a++;
    
    I do remember that this particular code snippet (with a = 5, even) used to be popular as an interview question. I found such questions quite annoying because most interviewers who posed them seemed to believe that whatever output they saw with their compiler version was the correct answer. If you tried explaining that the code has undefined behaviour, the reactions generally ranged from mild disagreement to serious confusion. Most of them neither cared about nor understood 'undefined behaviour' or 'sequence points'.

    I remember one particular interviewer who, after I explained that this was undefined behaviour and why, listened patiently to me and then explained to me that the correct answer was 17, because the two post-increments leave the variable as 6, so adding 6 twice to the original 5 gives 17.

    I am very glad these types of interview questions have become less prevalent these days. They have, right? Right?

    • kentm 5 hours ago

      IMO, the only reasonable answer if asked this in an interview is “I would not write code where I have to know the answer to this question”

      These sorts of things are neat trivia to learn about things like sequence points but 99.9% of the time if it matters in your codebase you're writing something unmaintainable.

      • tzs 3 hours ago

        > IMO, The only reasonable answer if asked this in an interview is “I would not write code where I have to know the answer to this question”

        That's half of a reasonable answer. The other half is "but I do know the answer so if I see it when reviewing or working on someone else's code I can flag it or rewrite it, and explain to them why it is bad".

        • amelius 3 hours ago

          You might still make a mistake, even if you think you know the answer. It's much better to instrument the code to figure it out, or write a short test program.

          • bestouff 3 hours ago

            It's Undefined Behavior. So you can instrument all you want, the answer will still be wrong. You'll capture what your particular compiler does under some particular conditions (opt flags, surrounding code, etc.) but that will not be representative of what can happen in the general case (hint : anything can happen with UB).

            • tengwar2 2 hours ago

              Not nasal demons in this case (https://groups.google.com/g/comp.std.c/c/ycpVKxTZkgw/m/S2hHd...): thaumasiotes shows that we can expect a numeric answer.

              • _kst_ an hour ago

                I don't see the name "thaumasiotes" at that link, nor do I see anything relevant to the code in the title.

                The behavior of "int a = 5; a = a++ + ++a;" is undefined. There is no guarantee of a numeric result, because there is no guarantee of anything.

                • susam an hour ago

                  I believe they were referring to thaumasiotes's thread here: https://news.ycombinator.com/item?id=48141294

                  I think the objection thaumasiotes has raised there is valid and I have made an attempt to answer it as well in the same thread.

            • amelius 3 hours ago

              It doesn't matter if the answer is wrong. You run the test program and then replace the code by the answer. This basically weeds out the UB.

              • yongjik 2 hours ago

                But since it is a UB, there's no guarantee that your test program produces the same result as the same code running on production, even if you have the same compiler.

                • amelius 2 hours ago

                  That's very unlikely, and in the worst case you've reduced a difficult bug into an easier to understand bug.

            • thaumasiotes 3 hours ago

              > It's Undefined Behavior.

              Susam's post doesn't make this clear. The quotes from K&R say that the modifications to the variable may take place in any order, but they don't directly say that doing this is Undefined Behavior, which would make it permissible to do anything, including e.g. interpreting the increments as decrements.

              The C99 standard is quoted saying this:

              >> Between the previous and next sequence point an object shall have its stored value modified at most once by the evaluation of an expression.

              It's possible that something else in the standard defines noncompliance with this clause as Undefined Behavior. But that's not the most intuitive interpretation; what this seems to say, to me, is that the line of code `a = a++ + ++a` should fail to compile, because it's not in compliance with a requirement of the language. Compilers that produce any result at all are suffering from a bug.

              (It seems more likely that the actual intent is to specify that, given the line of code `b = a++ + ++a`, with a initially equal to 5, the compiler is required to ensure that the value stored at the address of a is never equal to 6 - that it begins at 5, and at some indefinite point it becomes 7, but that there is no intermediate stage between them. But I find the 'compiler failure on attempt to put multiple modifications between two sequence points' interpretation preferable.)

              • vilhelm_s an hour ago

                The "shall" in the standard means it's undefined behavior. This is explained in the "Conformance" section,

                > 2. If a ‘‘shall’’ or ‘‘shall not’’requirement that appears outside of a constraint is violated, the behavior is undefined. Undefined behavior is otherwise indicated in this International Standard by the words ‘‘undefined behavior’’ or by the omission of any explicit definition of behavior. There is no difference in emphasis among these three; they all describe ‘‘behavior that is undefined’’.

                Compilers will not refuse to compile the code, indeed the blog post we are all commenting on reports the results from a bunch of different compilers. Historically the reason the C standard specified a lot of undefined behavior is that the actually existing C compilers at the time compiled the code but disagreed about the output.

              • Dylan16807 2 hours ago

                Compilers are not able to prevent you from violating must/shall in the general case. So they're not held to that bar. Unless the standard says not to compile it, it's not a compiler bug.

                Also, imagine a situation where the line of code actually lists three different variables, but all three of them are passed in by address. It quickly becomes impossible for the compiler to know you violated the spec by reusing the same variable. And even optimizations that make sense here could corrupt the value pretty badly and possibly lead to worse errors.

                • thaumasiotes 2 hours ago

                  > Also, imagine a situation where the line of code actually lists three different variables, but all three of them are passed in by address. It quickly becomes impossible for the compiler to know you violated the spec by reusing the same variable.

                  OK. What is the value of a spec to which compliance is impossible?

                  • aidenn0 an hour ago

                    The compiler does comply to the spec. It's the program that fails to comply with the spec. It's definitely possible to write programs that have no undefined behavior.

                  • Dylan16807 2 hours ago

                    Welcome to C.

                    But more seriously it's the job of the program to not do undefined things.

              • susam an hour ago

                I searched K&R to see if there is any language that implies a += a++ + a++ to be undefined. I couldn't find anything. I found the following excerpt which is closest to what I claim, in spirit. But still, it does not explicitly spell out that an object must not be modified more than once between sequence points. From § A.7 Expressions:

                > The precedence and associativity of operators is fully specified, but the order of evaluation of expressions is, with certain exceptions, undefined, even if the subexpressions involve side effects. That is, unless the definition of the operator guarantees that its operands are evaluated in a particular order, the implementation is free to evaluate operands in any order, or even to interleave their evaluation. However, each operator combines the values produced by its operands in a way compatible with the parsing of the expression in which it appears. This rule revokes the previous freedom to reorder expressions with operators that are mathematically commutative and associative, but can fail to be computationally associative. The change affects only floating-point computations near the limits of their accuracy, and situations where overflow is possible.

                So I think the text in K&R serves, at best, as a warning against writing such code. The C99 draft has more relevant language. From § 4. Conformance:

                > If a "shall" or "shall not" requirement that appears outside of a constraint is violated, the behavior is undefined. Undefined behavior is otherwise indicated in this International Standard by the words "undefined behavior" or by the omission of any explicit definition of behavior. There is no difference in emphasis among these three; they all describe "behavior that is undefined".

                This along with the § 6.5 excerpt already mentioned in my post implies a += a++ + a++ to be undefined. When I get some more time later, I'll make an update to my post to include the § 4. Conformance language too for completeness.

                Thank you for the nice comment!

        • IshKebab 3 hours ago

          No it isn't. You don't need to know the answer to know that it is bad code. The very fact that it isn't clear shows that.

          • tialaramex 19 minutes ago

            Right, the feedback I'd expect in a code review interview is something like "This is unclear or wrong, write what you actually meant".

            That's the feedback I would want, and it's the feedback I give to my colleagues in reviews. Actually I tend to be too verbose, so you might get a full paragraph explaining what the ISO document says and that you shouldn't assume it does whatever it is your compiler says.

            My actual feelings for this specific case are that the language is defective, but if we're wedded to a defective language then the reviews need to call out such usage.

    • LPisGood 5 hours ago

      In some sense, and without the interviewer knowing, that is actually a great scenario for an interview.

      If you can convince someone in a position of authority that they’re wrong about something technical without upsetting them then you’re probably a good culture fit and someone who can raise the average effectiveness of your team.

      • rcxdude 5 hours ago

        Or, also, in the reverse direction, if the interviewer is wrong about it and can't be convinced otherwise, it's probably not a great place to work.

      • bluGill 5 hours ago

        I know I did recommend someone after the interview because I looked it up and they were right. Great person to work with. Though I fully understand why most would hesitate.

      • wat10000 4 hours ago

        The best interview questions spawn discussions. This is a pretty good one for that. We could dive into what makes it UB, why a particular compiler might do it a certain way, what results we'd likely see from other compilers, and why the standard might say that this sort of thing is UB.

        "What does this produce?" and expecting an answer of "17" is a bad question even if UB didn't mean the expected answer is wrong.

        • LPisGood 4 hours ago

          I don’t work a ton with C, but I wonder how C programmers keep track of what behavior is and is not defined. It seems like there are many possible edge cases.

          • wat10000 4 hours ago

            We get by on a combination of matching patterns (any pointer cast gets a lot of scrutiny, for example), compiler warnings, tools like UBSan, debugging when things go wrong, and sheer dumb luck.

            Having an understanding of how the code gets transformed into machine code helps. For this case, there's the basic idea that `a++` will boil down to three basic conceptual operations: fetch, add, and store, and those can be potentially interleaved with other parts of the statement. In something like `a++ + ++b` the interleaving doesn't affect the outcome no matter how it's done. In `a++ + ++a` the interleaving can affect the outcome, and that's your sign that something might be wrong.

            Any memory safety issue in C code had to involve UB at some point. And you can see how prevalent those are, and deduce how not-particularly-great we are at keeping track of UB.

            • MaxBarraclough 2 hours ago

              > Having an understanding of how the code gets transformed into machine code helps

              I'm not sure about that. Knowing assembly is not a substitute for knowing how the language is defined. Sometimes C/C++ programmers with some assembly knowledge reason themselves into thinking that what they're asking of the language must have well-defined behaviour, when in fact it's undefined behaviour. It doesn't really matter whether interleaving order can change the output. (++i)++ is, apparently [0], undefined behaviour in C but has well defined behaviour in C++.

              [0] https://stackoverflow.com/a/58841107

              • wat10000 7 minutes ago

                I don't mean assembly in this case, but something more like the compiler's view of the code. a++ can be broken down into more primitive operations, and might actually be, depending on how the compiler is implemented. The fact that the ordering of those more primitive operations with respect to other operations isn't very tightly constrained is something you'd just have to know about the language, I suppose.

          • IshKebab 3 hours ago

            They don't really. In fact there are many things that are technically UB but are so common that compilers can't really treat them as UB. E.g. type punning via unions.

            • IcyWindows 11 minutes ago

              Yeah, undefined behavior just means not defined in the specification.

              I would argue that most languages only have one compiler so it doesn't matter what is in the specification.

            • el_pollo_diablo 2 hours ago

              Type punning via unions is not UB in C in general, but it is in C++ IIRC.

              I write "in general" because, as with other forms of memory reinterpretation (memcpy or copy through a character type), evaluating a trap representation triggers UB.

    • vcdk 2 hours ago

      Well... tried it on macOS using vanilla gcc, the results surprised me:

        $ /bin/cat x.c; gcc -w -o x x.c; ./x
        #include <stdio.h>
        
        int main()
        {
            int a = 5;
            a += a++ + a++;
            printf("a = %d\n", a);
        }
        a = 18
      
      Not what I expected. This must be how it works:

      - The first a++ expression results in 5, after which a = 6
      - The second a++ expression results in 6, after which a = 7
      - Only then the LHS a is evaluated for the addition-assignment, so we get: a = 7 + 5 + 6 = 18

      • rmu09 2 hours ago

        the original question has a=, you have a+=

        • Izkata an hour ago

          They're using the version from the top comment, not the post. It also switches the pre-increment to post-increment.

    • chasd00 3 hours ago

      Genuinely curious, so this is undefined behavior and depends on the compiler. I get that. Java, and other languages, can do these same operations but their compilers produce bytecode that runs on a virtual machine (JVM) compiled to machine code just-in-time. Would this same code in Java possibly yield different results based on the platform the JVM was running on because of the platform specific JIT compiler? Maybe that's part of the origin of the phrase "write once, test everywhere".

      • Karliss 2 hours ago

        The UB comes from how the C++ standard defines expression sequencing, which is not relevant for Java. Languages other than C++ typically define such details more strictly, so there is no UB or even a concept of UB. JIT compilers don't change this: any non-toy JIT will generate native instructions directly or through an intermediate representation (instead of generating C++ text and passing it through a regular C++ compiler), both of which have much stricter semantics than what C++ guarantees.

      • dmoy 3 hours ago

        > Would this same code in Java possibly yield different results based on the platform the JVM was running on because of the platform specific JIT compiler?

        No, and it's also well defined in languages like C#.

        If we're talking about this specific example at least. No sequence point issues like that in Java.

      • Crespyl 2 hours ago

        It's been quite a while, but IIRC, in Java these statements actually do have a defined behavior.

        The ++x is a "pre-increment", meaning the value of the variable is incremented prior to evaluating the expression, while the "post-increment" "x++" is the other way around: the expression evaluates to x, then x is incremented afterwards.

        All expressions are left-to-right.

        • tredre3 2 hours ago

          That behavior is inherited from C. The pre/post increment behavior is actually the same in every language that uses these operators. The operator precedence is also usually the same.

          The reason the question is tricky is because those operators change the value of a as the full expression is progressively executed.

          It's not immediately clear to me what the answer in Java would be.

          Just take a++ + ++a for example:

          If the value of `a` is hoisted by the JVM then it could be 5++ + ++5, so 5 + 6.

          But if it's executed left to right and `a` is looked up every time, then it becomes 5++ + ++6, so 5 + 7.

      • froh 2 hours ago

        I'd be badly surprised if the JVM JIT went through C, so if this monstrosity is well defined in Java, it's "well defined once, well defined everywhere".

        but still, if it were, it was and remained, as gp points out, bad practice...

    • mike_hock 5 hours ago

      Do you want a job at a place where someone who doesn't understand UB makes the hiring decisions?

      • grahamburger 3 hours ago

        Sometimes, even in tech, you just need a job.

      • ketzu 3 hours ago

        I think your options are very limited if you look for places that have people that truly understand UB, even less so the hiring people.

    • tete 3 hours ago

      > I found such questions quite annoying because most interviewers who posed them seemed to believe that whatever output they saw with their compiler version was the correct answer.

      Besides that, for most programmers the job has nothing to do with knowing the outcome, because hopefully they'd never write something like it, or would clean it up. And IF they found it, they'd hopefully test it - given that it appears to be compiler dependent anyway.

    • jcalvinowens 2 hours ago

      Both major compilers yell at you for this nowadays... it's pretty unforgivable IMHO for somebody to be asking it as an exam or interview question if the right answer isn't "undefined":

          <source>:5:10: warning: multiple unsequenced modifications to 'a' [-Wunsequenced]
              5 |     a = a++ + ++a;
                |         
      
      
          <source>:5:7: warning: operation on 'a' may be undefined [-Wsequence-point]
              5 |     a = a++ + ++a;
                |     ~~^~~~~~~~~~~
    • everyone an hour ago

      The interviewer asking stuff like that is a good sign to leave immediately.

    • p0w3n3d 4 hours ago

      How many tennis balls can fit in a bus?

    • thaumasiotes 3 hours ago

      > I am very glad these types of interview questions have become less prevalent these days. They have, right? Right?

      Are you referring to the type of interview questions where the question is ill-defined and no one should know the answer, or the type where the question is reasonable and well-defined, but the interviewer doesn't know the answer?

      I had a phone screen with Google once where they asked how to determine the length of a stretch of contiguous 1s within an infinite array of 0s. I suggested that, given the starting index i, you can check the index i+2 and then repeatedly square it until you find yourself among the zeroes, after which you can do binary search to find the transition from ones to zeroes.

      The interviewer objected that this will grow the candidate end index too quickly, and the correct thing to do is to check index i+1 and then successively double it until you find the zeroes. We moved on.

      I passed that phone screen. But I still resent it, because I checked the math later and "successive squaring followed by binary search" and "successive doubling followed by binary search" take exactly the same amount of time.

    • SilasX 4 hours ago

      Heh, one time when I got this style of question[1] (but for JavaScript), I took a glance at it and said "Um ... you really shouldn't write code like that." The interviewer replied, "Oh. Yeah. Fair point." And then went on to another question.

      [1] By which I mean predicting the behavior of error-prone code that requires good knowledge of all the quirks of the language to correctly answer.

    • colechristensen 5 hours ago

      >I am very glad these types of interview questions have become less prevalent these days. They have, right? Right?

      I just refuse to do interviews like that any more.

  • Aardwolf 5 hours ago

    What's the reason that C didn't define the order of this?

    The horrible undefined behavior of signed integer overflow at least can be explained by the fact that multiple CPU architectures handling it differently existed (though the fact that C 'injects' its ill-defined signed integers even when you're using unsigned ones, by returning a signed int when left shifting a uint16_t by a uint16_t for example, is not as forgivable imho)

    But this here is something that could be completely defined at the language level, there's nothing CPU dependent here, they could have simply stated in the language specification that e.g. the order of execution of statements is from left to right (and/or other rules like post increment happens after the full statement is finished for example, my point is not whether the rule I type here is complete enough or not but that the language designers could have made it completely defined).

    • jcranmer 4 hours ago

      The short answer is because C was designed to give leeway to really dumb compilers on really diverse hardware.

      This isn't quite the same case, but it's a good illustration of the effect: on gcc, if you have an expression f(a(), b()), the order that a and b get evaluated is [1] dependent on the architecture and calling-convention of f. If the calling convention wants you to push arguments from right to left, then b is evaluated first; otherwise, a is evaluated first. If you evaluate arguments in the right order, then after calling the function, you can immediately push the argument on the stack; in the wrong order, the result is now a live variable that needs to be carried over another function call, which is a couple more instructions. I don't have a specific example for increment/decrement instructions, but considering extremely register-poor machines and hardware instruction support for increment/decrement addressing modes, it's not hard to imagine that there are similar cases where forcing the compiler to insert the increment at the 'wrong' point is similarly expensive.

      Now, with modern compilers using cross-architecture IRs as their main avenue of optimization, the benefit from this kind of flexibility is very limited, especially since the penalties on modern architectures for the 'wrong' order of things can be reduced to nothing with a bit more cleverness. But compiler developers tend to be loath to change observable behavior, and the standards committee unwilling to mandate that compiler developers have to modify their code, so the fact that some compilers have chosen to implement it in different manners means it's going to remain that way essentially forever. If you were making a new language from scratch, you could easily mandate a particular order of evaluation, and I imagine that every new language in the past several decades has in fact done that.

      [1] Or at least was 20 years ago, when I was asked to look into this. GCC may have changed since then.

      • compiler-guy 3 hours ago

        gcc used to do this back in the day. Parameter expressions left to right on x86, and right to left on Sparc. I spent a week modifying a bunch of source code, removing expressions with side effects from parameter lists, into my own temporary variables, so that they would all evaluate in the same order.

      • wat10000 4 hours ago

        I'd say it's more like C was designed from really dumb compilers on really diverse hardware. The standard, at least the early versions of it, was more to codify what was out there than to declare what was correct. For most things like this in the standard, you can point to two pre-standardization compilers that did it differently.

        • AnimalMuppet 3 hours ago

          Kind of both? There were pre-standard compilers, but when they created the standard, they tried to make it so that one could write really dumb compilers and still fulfill the standard.

    • fweimer 4 hours ago

      Sethi-Ullman register allocation reorders subexpression evaluation to achieve efficient register allocation: https://dl.acm.org/doi/10.1145/321607.321620

      With modern register allocators and larger register sets, code generation impact from following source evaluation is of course lower than it used to be. Some CPUs can even involve stack slots in register renaming: https://www.agner.org/forum/viewtopic.php?t=41

      On the other hand, even modern Scheme leaves evaluation order undefined. It's not just a C issue.

    • marcosdumay 5 hours ago

      Applying the increment or decrement operators over the same variable more than once on the same line should be a compile-time error.

      Anyway, yes, this one example has an obvious order it should be applied. But still, something like it shouldn't be allowed.

      • nayuki 4 hours ago

        > Applying the increment or decrement operators over the same variable more than once on the same line should be a compile-time error

        That would be nice, but don't forget the more general case of pointers and aliasing:

            int a = 5;
            int *pa = &a;
            printf("%d", (a++ + ++*pa));
        
        The compiler cannot statically catch every possible instance of a statement where a variable is updated more than once.
        • marcosdumay 4 hours ago

          Well, aliased updates are undefined behavior already.

          • not_a_bijection 3 hours ago

            Not in C, unless at least one of the pointers were marked `restrict`.

      • saghm 5 hours ago

        Honestly, having increment in expressions rather than a statement feels like more of a footgun than a benefit. Expressions shouldn't mutate things.

        • cesaref 4 hours ago

          I think the history of this is that these operations were common with assembly programmers, so when C came along, these were included in the language to allow these developers to feel they weren't leaving lots of performance behind.

          Look at the addressing modes for the PDP-11 in https://en.wikipedia.org/wiki/PDP-11_architecture and you'll see you can write (R0)+ to read the contents of the location pointed to by R0, and then increment R0 afterwards (so a post increment).

          Back in the day, compilers were simple and optimisations weren't that common, so folding two statements into one and working out that there were no dependencies would have been tough with single pass compilers.

          You could argue that without such instructions, C wouldn't have been embraced quite so enthusiastically for systems programming, and the world would have looked rather different.

          • monocasa 2 hours ago

            Additionally, those indirect memory instructions ended up disappearing because they complicated virtual memory implementations. It was a pain in the ass to describe the multiple places in memory an instruction could be accessing and which actually faulted to a fault handler, not to mention having to roll back all that state on more complex designs.

          • IshKebab 3 hours ago

            I worked on a more recent custom AI ISA that had that too. Pretty neat; I'm surprised it's not more common. I guess it doesn't matter so much now that memory is so much slower than ALU ops.

        • zarzavat 4 hours ago

          Python recently went the other way and added an assignment expression. I actually wish more languages would go further and add statement expressions instead of having to imitate them with IIFEs.

          C just wouldn't be C without things like a[i++]

          • saghm 3 hours ago

            If the past few weeks of CVEs indicate anything, it's that C being C maybe isn't a good thing...

        • marcosdumay 4 hours ago

          Those things are for pointer golf and writing your entire logic inside the if statement.

          Both are favorite idioms of C developers. And they are ok if done correctly, clearer than the alternative. They are also unnecessary in modern languages, so those shouldn't copy it (yeah, Python specifically).

        • kibwen 5 hours ago

          In any language where the practice of iteration isn't achieved via C-style for-loops, having an operator devoted to increment just doesn't make sense (let alone four operators, for each of pre/post-increment/decrement). This is one of those backwards things that just needs to be chucked in the bin for any language developed post-2010.

          • fc417fc802 4 hours ago

            When used well it makes for compact readable code. I don't see what it has to do with for loops or operators specifically. For example you can do the same in scheme while iterating by means of tail recursion.

            • kibwen 4 hours ago

              > I don't see what it has to do with for loops or operators specifically.

              The reason that these operators pull their weight in C is because iteration over arrays is achieved by manual incrementation (usually via the leading clauses of the for-loop) followed by direct indexing. Languages with a first-class notion of iteration don't directly index in this way, which eliminates not only the vast majority of array indexing operations in codebases but also the need to manually futz with the inductive loop variable. Case in point, Rust doesn't have `++` in any form, and it doesn't miss it, because Rust has first-class iteration; on the relatively rare occasion where you do want to increment, you can do `+=1`, which doesn't have the footguns of `++` due to assignment being a statement rather than an expression, while leading to a simpler language by leveraging the existing `+=` syntax rather than needing a whole new set of operators.

              • fc417fc802 3 hours ago

                For loops are hardly the only use case, and built-in iteration constructs frequently fall short. For example, any mildly complex loop that involves pointer juggling can benefit.

                > which doesn't have the footguns of `++` due to assignment being a statement rather than expression,

                So then I implement the local equivalent of inc( v ) and ... same issue, right? Plus with rust macros is there any technical reason you can't trivially implement ++ for yourself? That's the case for most lisps that I touched on earlier.

                • marcosdumay 27 minutes ago

                  In Rust you hide all kinds of error-prone iterations behind the "iterator" interface. Both the "for(int x=0;..." and the "while(list[i++])" are implemented in the standard library.

                  People tend to use FP abstractions for the "x[i++] = f(y[j++])" though, not iteration.

                • kibwen 2 hours ago

                  > For example any mildly complex loop that involves pointer juggling can benefit.

                  I'd say that when you're writing a mildly complex loop that involves pointer juggling, one should prefer to be defensive and explicit rather than cleverly trying to compress everything into one-liners.

                  > So then I implement the local equivalent of inc( v ) and ... same issue, right?

                  This isn't done in Rust because there's no benefit. It's rare to find an occasion where it's necessary to do something tricky enough to forego using iterators, and when working with raw pointers Rust code just plain doesn't use basic addition for pointer arithmetic; instead it has a variety of pointer arithmetic methods for being explicit about the desired semantics (e.g. ptr::add, ptr::offset, ptr::wrapping_add, etc).

                  > Plus with rust macros is there any technical reason you can't trivially implement ++ for yourself?

                  There's not, but people might look at you sideways. Here, I implemented it for you: https://play.rust-lang.org/?version=stable&mode=debug&editio... . It expands to nested blocks with internal assignments, which results in a well-defined semantics following the defined order of evaluation in Rust.

          • dhosek 3 hours ago

            I always hate C-style for-loops because even though I learned C over 40 years ago, I can never remember whether the increment comes before the test or the test comes after the increment. Fortunately, modern IDEs let me continue to be ignorant on those occasions when they’re necessary (usually because I need the index for some reason).

        • throw_await 5 hours ago

          We wouldn't have pearls like while (*dst++ = *src++);

    • AlotOfReading 5 hours ago

      It's valuable for compilers to be able to choose the instruction scheduling order. Standards authors try not to unnecessarily bind implementors. If post increment happened after the full statement is finished, then the original value has to be maintained until the next sequence point. Maybe the compiler will be smart enough to elide that, maybe not, but it's a lot more difficult to fix those kinds of edge cases than to say sequencing is undefined.

      • Aardwolf 5 hours ago

        But this is not valuable if it produces different numerical results, and I think that will always happen if ++ is executed at different times. There's no point in a compiler optimizing pointless code that can silently give different results elsewhere.

        • Karliss an hour ago

          The same rule which makes the evaluation order of a++ + ++a unsequenced also applies to (x+y+z+a+b+c) where x,y,z could be any expression (in a sane case on separate variables and without mix of pre/post increments). Breaking questionable code and introducing UB where reordering changes result is just a side effect of this.

          Just switching between left-to-right and right-to-left wouldn't be that useful, but the rule also permits interleaving the subexpression evaluation. Grouping memory fetches/writes, and taking into account how many execution units and registers of different kinds a CPU has, can have some performance benefits.

          For example if you have something like `++a[0] + ++a[1] + ++a[2] + ++a[3]` instead of evaluating each increment one by one both GCC and Clang will vectorize it loading all 4 values from memory using single simd instruction, incrementing and then writing result back to memory. And if you add fifth one (but not 8) which needs to be handled using regular instruction, that will be done after the first 4. If standard defined that left subexpression of addition is fully evaluated before the right expression that wouldn't be allowed.

        • AlotOfReading 5 hours ago

          Your compiler does many optimizations that break numerical reproducibility, especially in floats. I reviewed a PR the other day that wrote X = A*B + (C*D) + E;

          And when I checked 3 different compilers, each of them chose a different way to use FMAs.

          Even with integer math, you can get different numerical results via UB (e.g. expressions with signed overflow one way and not another).

      • xigoi 5 hours ago

        It would only make a difference in cases that are currently UB, so there is no program valid under current C that would be pessimized by this change.

        • AlotOfReading 5 hours ago

          It's a language feature that was in K&R, and the rules around sequencing were introduced in C89. There were good reasons to believe it would pessimize code in the following decades. Dennis Ritchie himself pointed out that Thompson probably added the operators because compilers of the time were able to generate better code that way.

    • tardedmeme 5 hours ago

      The C standard doesn't define things where two or more historical compilers disagreed and there wasn't an obviously correct way. This is defined behavior (left to right, assignment last) in Java, which is a different language.

    • rcxdude 5 hours ago

      Probably because when C was standardised there were already multiple implementations, and this was an area where implementations differed but it wasn't viewed as important enough to bring them in line with one approach.

    • bluGill 5 hours ago

      The only other reasonable option is to make such garbage a compile-time error. There is no reasonable definition of what code like that should do, and if you write it in the real world you need to find a better job fit. I'd normally say McDonald's is hiring, but they don't want people like that either.

    • TacticalCoder 4 hours ago

      > What's the reason that C didn't define the order of this?

      I didn't open TFA but my first thought was "Is this even defined?".

      It kinda makes sense that such fucktardedness could be left undefined.

    • binaryturtle 3 hours ago

      It's defined. And called "operator precedence", both post/pre-increment have a higher precedence than the single "+".

      At least according to this: https://en.wikipedia.org/wiki/Operators_in_C_and_C%2B%2B#Exp...

      I think the main confusion here comes from the fact that "a" is just a value, not a pointer, where it would matter whether the value/address the pointer points at is accessed before or after the increment of the pointer's own 'value'.

      Anyway… my C skills are rusty. Maybe I get it wrong. :) In any case I always would use brackets to avoid any ambiguity in constructs like this.

      • pavon 3 hours ago

        Nope. Order of evaluation and operator precedence are completely unrelated. They should have been defined to be the same, but instead order of evaluation was left undefined. So if you write ++a + a++, operator precedence means this will be interpreted as (++a) + (a++), not, say, ++(a + a)++, but it is up to the compiler whether to execute ++a or a++ first, rather than executing them left to right.

        • binaryturtle 2 hours ago

          Sometimes it helps to test. Which I just did. :-)

          Actually the compiler (at least clang) warns about this:

              $ gcc -W -Wall test.c -o test
              test.c:8:7: warning: multiple unsequenced modifications to 'a' [-Wunsequenced]
                      a = a++ + ++a;
                           ^    ~~
              1 warning generated.
          
          The undefined behaviour stems from the fact that "a" is modified multiple times between "sequence points" (so it's irrelevant to the actual problem whether this happens with ++, --, pre- or post-increment, or in which order). We can only modify the variable safely once between sequence points without entering bizarro world.

          A construct like this certainly can be confusing.

  • sangeeth96 5 hours ago

    I had to fight through school and university in India with my teachers who believed these were legit questions to ask in written exams. Can't 100% blame them since almost all standard-issue textbooks had them and claimed they'd give predictable output. I thought the same until I noticed the weirdness when running them across different compilers and after I read about UB, sequence points and similar quirks in books that are not total garbage.

    Luckily, I ended up with smug smiles in all those cases after showing them the output from different compilers.

  • Boxxed 5 hours ago

    > The interesting thing here is the Undefined Behavior (UB), well... actually two UBs, thanks to which there are three possible correct answers: 11, 12 and 13.

    No, if you invoke undefined behavior any result at all is possible.

    • gynvael 3 hours ago

      Hey! Author here :)

      So let me start by saying that that blog post was written 15 years ago and I don't even remember the details of it and what I've written there. But I have a hot take on the topic you've touched on!

      From a programmer perspective, you are absolutely right. The behaviour is undefined, end of discussion. A programmer should never rely on what they observe as the effective behaviour of an UB. A programmer must avoid creating situations in code that could result in the execution flow venturing into the areas of UB. And - per C and C++ standards - results of UB can be anything (insert the old joke about UB formatting one's disk being a formally correct behaviour).

      However, I'm a security researcher, and from the security point of view - especially on the offensive side - we need to know and understand the effective behaviours of UBs. This is because basically all "low-level" vulnerabilities in C/C++ are formally effects of UBs. As such, for the security crowd, it still makes sense to investigate, understand, and discuss the actual observed effects of UBs, especially why a compiler does this, what are the real-world actual variants of generated code (if any) for a given UB for this and other compilers, how can this be abused and exploited, and so on.

      My point being - there are two sides to this coin.

      • _kst_ an hour ago

        Agreed.

        As a programmer, the solution to "int a = 5; a = a++ + ++a;" is to decide what result you want, write code that will produce that result, and probably pass options to the compiler that tell it to detect this kind of problem and print a warning. (On my system, the result happens to be 12; if that's what I want, I'll write "int a = 12;".)

        But if you have an existing program that includes that code, it can be useful to look into the actual behavior (for all the compilers that might be used to compile the code, with all possible options, on all possible target systems). Fixing the code should be part of that process, but you might still have running systems with the old bad code, and you need to understand the risks.

        But producing some numeric result is not the only possible behavior, even in real life. Compilers can assume that the code being compiled does not have undefined behavior, and generate code based on that assumption. The results can be surprising.

        As for formatting your disk, that's not just a theoretical risk. If a program has enough privileges that it can format your disk deliberately, it's possible that it could do so accidentally due to undefined behavior (for example, if a function pointer is corrupted).

    • bombcar 5 hours ago

      I feel we need another category - unspecified behavior. I think everyone would agree the compiler should put out ONE of those answers and that nasal demons would be out of spec.

      The problem is that it’s not specified which should be picked, but all pick something.

      • fc417fc802 5 hours ago

        I agree what you say seems reasonable at a glance. But (IIUC) the issue is that for optimization we want the compiler to assume that UB doesn't happen in order to constrain the possible code paths. So if it goes some distance down a possible execution branch and discovers UB it can trim the subtree. At that point "anything can happen" becomes an (approximate) reality.

        The obvious counterpoint in this particular instance is that there's no good reason not to make such an awful expression a compile time error.

        I also personally think that evaluation order should be strictly defined. I'm unclear whether the current arrangement ever offers noticeable benefits, but it is abundantly clear that it makes the language more difficult to reason about.

        • IshKebab 3 hours ago

          As I understand it UB was not really intended to be for optimisation. It was so that C could compile on wildly different architectures that existed at the time.

          Today we don't have nearly the variety of architectures, so in theory C doesn't need nearly as much UB (like more modern languages).

          Although there is one modern case where C's "anything goes" attitude has actually helped: CHERI works pretty well with C/C++ even though pointers are double the size they normally are, because doing so many things with pointers is UB (I assume because of segmented memory). CHERI is a slightly awkward target for Rust because Rust makes more assumptions about pointers - specifically that pointers and addresses are the same size.

          • bombcar 13 minutes ago

            Which is a form of optimization: if you don't require something that may be incredibly difficult on a given CPU, it makes portability easier.

            The reality is these are all edge conditions rarely encountered.

      • Boxxed 2 hours ago

        That already exists, and it is in fact called unspecified behavior. Order of function argument evaluation is unspecified, for instance.

      • compiler-guy 5 hours ago

        The C and C++ standards include "Implementation defined behavior", which means that a conforming implementation can do whatever it wants, as long as it specifically documents and sticks to that behavior.

        This doesn't really help portability all that much.

        • Karliss 3 hours ago

          That's a different category. The standard defines and uses all three: "undefined", "implementation-defined", and "unspecified" behavior. The difference between the last two is that for unspecified behavior the compiler isn't required to document the exact behavior. Unlike UB, triggering it doesn't automatically summon nasal demons, and the range of possible behaviors is usually described by the standard.

  • compiler-guy 5 hours ago

    With undefined behavior, a conforming compiler can do anything it wants at all, including generating a program that segfaults or something else.

    But what often happens in practice is that "Bill's Fly-By-Night-C-Compiler-originally-written-in-the-mid-nineties" implemented it in some specific way (probably by accident) and maintains it as a (probably informal) extension. And almost certainly has users who depend on it, and can't migrate for a myriad of reasons. Anyway, it's hard to sell an upgrade when users can't just drop the new compiler in and go.

    At the language level, it is undefined-behavior, and any code that relies on it is buggy at the language level, and non-portable.

    Defining it would make those compilers non-conforming, instead of merely reliant on their own definition of something the standard leaves undefined.

    Probably the best way forward is to make this an error, instead of defining it in some way. That way you don't get silent changes in behavior.

    Undefined behavior allows that to happen at the language level, but good implementations at least try not to break user code without warning.

    Modern compilers with things like UBSan make changing the result of undefined behavior much less of an issue. But most UB is also "no diagnostic required", so users don't even know they have it in their code without the modern tools.

    • suprjami 3 hours ago

      > including generating a program that segfaults or something else.

      UB = run nethack or Emacs:

      https://feross.org/gcc-ownage/

      We should have kept this behaviour. It would make UB a lot more unpalatable and easier to find.

  • Timwi 4 hours ago

    The statement is valid C#, which has left-to-right execution order and no undefined behavior. The answer is 5 + 7 = 12.

    • chasil 3 hours ago

      Awk also says it's 12.

        awk 'BEGIN{a=5; a = a++ + ++a; print a}'
        12
  • magicalhippo 2 days ago

    Perhaps I'm just naive and/or have forgotten too much C, not that I knew that much, but I'm a bit perplexed as to why this is UB.

    It seems like something that should trigger a "we should specify this" reaction when adding these operators, and there is at least one reasonable way to define it which is fairly trivial and easily implementable.

    • gynvael 3 hours ago

      Yeah, like left-to-right as in JS for example.

  • HelloNurse 8 hours ago

    The final value of a is that if you write this you are fired. It's worse than a racist joke.

  • danbruc 3 hours ago

    My expectation was none of the four presented. Evaluate left to right, a is five, post-increment, pre-increment, a is seven, 5 + 7 = 12. For right to left I would expect pre-increment, a is six, a is still six, post-increment, a is 7, overwrite with 6 + 6 = 12.

  • p0w3n3d 4 hours ago

    In my CS lectures, the algorithms professor used this pseudo-language when writing an algorithm on a whiteboard:

      I <- I++
    
    The next hour, another professor was giving a lecture on C++ programming. I asked him: what would happen if we compiled

      i = i++
    
    
    He went into some deep elaboration on it, but concluded that only an idiot would write like this...

    • mywittyname 3 hours ago

      Out of curiosity, I checked if gcc would optimize i = i++ out, and it does!

      • _kst_ 43 minutes ago

        What can be optimized out depends on the context.

        If you write:

            int i = 0;
            i = i++;
        
        and never use the value of i, the declaration and assignment are likely to be optimized out. (The behavior of the assignment is undefined, so this is a valid choice).

        If you print the value of i, the compiler can still optimize away the computation, but is perhaps less likely to do so.

        The solution, of course, is not to write code like that. Decide what you want to do, and write code that does that. "i = i++" will never be the answer to "how do I do this?", and wouldn't be even if the behavior were well defined. If you want i to be 1, write "int i = 1;".

  • Someone 5 hours ago

    > The interesting thing here is the Undefined Behavior (UB), well... actually two UBs, thanks to which there are three possible correct answers: 11, 12 and 13.

    There’s UB, so any answer is possible, isn’t it?

  • yason 5 hours ago

    The only point you can conclude from these discussions, especially in an interview, is that it doesn't matter what the answer happens to be on $CC and $ARCH; you wouldn't want anyone to write stuff like that in the first place.

    Failing to recognize the dangers would be an instant fail; knowing that something reeks of undefined behaviour, or even potential UB, is enough: you just write out explicitly what you want and skip the mind games.

  • taylodl 24 minutes ago

    The trait of an experienced C developer is to avoid creating expressions such as this.

  • tombert 4 hours ago

    I have always hated this crap; the fact that I'm not 100% sure of the result indicates that maybe the ++ operator (pre or postfix) is something that should be avoided?

    I don't do a lot of C anymore, but even when I did, I always would do increments on separate lines, and I would do a +=1, or just a = a + 1. I never noticed a performance degradation, and I also don't think my code was harder to read. In fact I think it was easier since I think the semantics were less ambiguous.

    • amavect 3 hours ago

      I also started doing this. I feel that "b = expr(a); a++;" expresses what I mean better than "b = expr(a++)": store expr(a) in b, then store a+1 in a. Any good compiler will optimize the same.

      After separating a++ onto its own line, replacing a++ with a+=1 or a=a+1 comes down to personal taste in syntax sugar. I vote for a+=1.

      • tombert 2 hours ago

        Yeah exactly, especially for newer people.

        I wouldn't be surprised if someone read `b = expr(a++)` to indicate that `a` is incremented, and then passed into `expr`, especially considering that it is within parentheses. The fact that it does it after passing it in is weird, and not obvious, at least not in my opinion. In my mind, there's no reason not to do what you suggested, or do the increment of `a` on the line before if you want the prefix.

  • vishnugupta 5 hours ago

    I am, thankfully, out of this craziness now, but it was fun solving a ton of such puzzles from Yashavant Kanetkar's books while preparing for campus hiring interviews back in 2000. "Test Your C Skills" in particular. Fun times.

    https://www.scribd.com/document/235004757/Test-Your-C-Skills...

  • deepsun 2 hours ago

    Why do you need Java four times in the tests? They are all the same.

    The main, I would say defining, feature of Java is that it has no undefined behavior. Aka "write once, run everywhere".

  • adverbly 3 hours ago

    /sarcastic

    This is how to keep simpletons out of your code base. Every numeric constant is defined in terms of a different lang quiz. Works well in JS as well of course.

      const DEFAULT_SELECTION = true + true
      const BASE_PRICE = 4 * parseInt(0.0000001)
      const BILLING_DAY_OF_MONTH = a++ + ++a
  • pplonski86 5 hours ago

    I love such puzzles! I used to use a lot of ternary operators in C++, but one day a friend of mine told me that I shouldn't nest ternary operators too much because the code gets too complicated to read. He understood the code perfectly; he was just worried about younger programmers. Since then I have started using longer versions of code instead of smart shortcuts, to improve readability.

  • Tomte 2 days ago

    Some C++ quiz with ++a and a++? It‘s always about sequence points, or better the lack of sequence points.

    It‘s the standard technical C++ blog post everybody seems to write.

  • dhosek 3 hours ago

    I don’t have gcc available so I can’t test it, but I wonder what it does with

         int a = 5;
         int b = a++;
    
    if it gives b==5 in this circumstance (which I would say is the correct value), then it seems that giving 13 for a++ + ++a is a bug in the compiler. I kind of feel like giving 6 as an answer would also be a bug in the compiler since postfix-++ should return the old value and then increment.
    • compiler-guy 3 hours ago

      The trick here is that the original expression contains undefined behavior. Your example does not.

  • nnm 4 hours ago

    ++ should be banned, just like goto

  • comrade1234 5 hours ago

    The a=13 was most surprising to me but in retrospect obvious and amusing.

  • syngrog66 4 hours ago

    The smart nerd will know precisely how to decode that line's results.

    The wise nerd will not allow lines like it in their codebase, in the first place and, having seen one, will refactor it (probably involving more lines or parentheses) to make it more clear and easier to maintain.

    The latter approach scales better in the long run.

  • phendrenad2 5 hours ago

    Tried it on https://www.onlinegdb.com/online_c_compiler Returns 12. If I were designing C, it would return 13. But then again, I'm an assembly programmer.

    • topspin 5 hours ago

      Godbolt: clang and gcc compilers give 12. msvc compilers yield 13.

          #include <stdio.h>
      
          int main() {
              int a = 5;
              a = a++ + ++a;
              printf("%d\n", a);
              return 0;
          }
      
      x64 msvc v19.50 VS18.2 output:

          example.c
          ASM generation compiler returned: 0
          example.c
          Execution build compiler returned: 0
          Program returned: 0
          13
      
      x86-64 gcc 16.1 output:

          ASM generation compiler returned: 0
          Execution build compiler returned: 0
          Program returned: 0
          12
      
      armv8-a clang 22.1.0 output:

          <source>:5:10: warning: multiple unsequenced modifications to 'a' [-Wunsequenced]
              5 |     a = a++ + ++a;
                |          ^    ~~
          1 warning generated.
          ASM generation compiler returned: 0
          <source>:5:10: warning: multiple unsequenced modifications to 'a' [-Wunsequenced]
              5 |     a = a++ + ++a;
                |          ^    ~~
          1 warning generated.
          Execution build compiler returned: 0
          Program returned: 0
          12
  • summarybot 5 hours ago

    > If you would like to test your compiler (posting back the results in the comments is really appreciated, especially from strange/uncommon compilers and other languages which support pre- / post- increment ....

    Uh, 85% of them show the wrong result so 85% of them clearly do not support pre and post increment.

  • ecshafer 5 hours ago

    hmm surprising. I assumed it would be 12 since 5+5+1+1 doesn't really matter what order you do it in. But I suppose this really is undefined behavior.

  • onlyrealcuzzo 4 hours ago

    Please tell me the answer is somehow 42!

    int a = 5; a = (++a * a++) + --a; a = ?

  • nDRDY 2 days ago

    Oh god. How long before yet another UB-based question ends up in technical coding interviews?

    • nananana9 5 hours ago

      The nice thing about these is that all answers are correct.

      • jbxntuehineoh 3 hours ago

        the correct answer is that the program will launch nethack, duh

      • gynvael 3 hours ago

        Haha this comment is spot on :)

  • nothrowaways 5 hours ago

    12