I made a real-time C/C++/Rust build visualizer

(danielchasehooper.com)

399 points | by dhooper a day ago

85 comments

  • Night_Thastus a day ago

    I am extremely interested in this.

    I am stuck in an environment with CMake, GCC and Unix Make (no clang, no ninja) and getting detailed information about WHY the build is taking so long is nearly impossible.

    It's also a bit of a messy build, with steps like copying a bunch of files from the source into the build folder. Multiple languages (C, C++, Fortran, Python), custom CMake steps, etc.

    If this tool can handle that kind of mess, I'll be very interested to see what I can learn.

    • hagendaasalpine 15 hours ago

      Tsoding wrote https://github.com/tsoding/nob.h, a single-header C library for cross-platform builds whose only requirement is cc. GDB profiling tools can then be used to look at your build steps. It's a neat idea. I suspect this is not an option, but Nix is a great build tool if you are dealing with multiple languages.

      • jppittma 12 hours ago

        Btw, he has a YouTube channel and streams. I recommend it if you’re seeking imposter syndrome.

    • unddoch a day ago

      I wrote a little GCC plugin for compile time tracing/profiling, if that's something you're interested in: https://github.com/royjacobson/externis

      • Night_Thastus 6 hours ago

        I just went ahead and tried it out :)

        I can get it to work for some sub-sets of our project, but for quite a bit of it I get the following error:

        cc1: error: cannot load plugin /opt/rh/gcc-toolset-13/root/usr/lib/gcc/x86_64-redhat-linux/13/plugin/externis.so: /opt/rh/gcc-toolset-13/root/usr/lib/gcc/x86_64-redhat-linux/13/plugin/externis.so: undefined symbol: _Z14decl_as_stringP9tree_nodei

        I suspect this is because these are C or Fortran sub-projects. I'm looking for a clean way to tell CMake to apply externis to only the C++ subprojects, if possible. I'll see what I can come up with.

        I'd also like to know, if multiple GCC commands end up pointing to the same trace.json, especially in a parallel build, will externis automagically ensure that it doesn't step over itself?

      • Night_Thastus 4 hours ago

        I figured out a way to set it at a top level, so it only happens with C++ files:

          target_compile_options(${NAME} PUBLIC
            $<$<COMPILE_LANGUAGE:CXX>:-fplugin=externis -fplugin-arg-externis-trace-dir=(where I want to put traces)>
          )

        But as I suspected, it is not a single trace file. It's thousands of trace files. Is there some way to collate all the data into one larger picture of how the build progressed?
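
        The best I've come up with so far is a quick merge script - a rough sketch, assuming externis writes standard Chrome trace-event JSON (either a bare event array or a {"traceEvents": [...]} object); the "traces" directory and output name are placeholders. Note this only puts the files side by side in one view (each TU gets its own pid row); it doesn't reconstruct the real build timeline, since each per-TU trace starts at its own time zero:

          #!/usr/bin/env python3
          # Collate per-TU trace JSON files into one file that
          # chrome://tracing or https://perfetto.dev/ can open.
          import json, pathlib

          merged = []
          for i, path in enumerate(sorted(pathlib.Path("traces").glob("*.json"))):
              data = json.loads(path.read_text())
              events = data["traceEvents"] if isinstance(data, dict) else data
              for ev in events:
                  ev["pid"] = i  # one row per translation unit in the viewer
                  merged.append(ev)

          pathlib.Path("combined.json").write_text(json.dumps({"traceEvents": merged}))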

    • phaedrus a day ago

      When I was trying to improve compile time for my game engine, I ended up using compiled size as a proxy measure. Although it is an imperfect correlation, the fact that compiled size is deterministic across build runs and even across builds on different machines makes it easier to work with than wall clock time.

      • mlsu a day ago

        Wait, this is not intuitive at all for me.

        If the compiler is working harder wouldn't that result in a more compact binary? Maybe I'm thinking too much from an embedded software POV.

        I suppose the compiler does eventually do IO, but IO isn't really the constraint most of the time, right?

        • staticfloat a day ago

          While you can make the compiler run longer to squeeze the binary size down, it has a baseline set of passes that it runs over the IR of the program being compiled. These passes generally take time proportional to the length of the input IR, so a larger program takes longer to compile. Most passes aren't throwing away huge amounts of instructions (dead code elimination is a notable exception, but even there the analysis to figure out which pieces of dead code can be eliminated still operates on the input IR). So it's not a perfect proxy, but in general, if the output of your compiler is 2MB of code, it probably took longer to process the input and spit out that 2MB than if the output had been 200KB.

        • johannes1234321 a day ago

          Of course there are cases where a huge template structure with complex instantiation and constexpr code compiles down to a single constant, but for most code I would assume code size, compile time, and binary size are roughly proportional.

    • fransje26 12 hours ago

      > I am stuck in an environment with CMake, GCC and Unix Make (no clang, no ninja) and getting detailed information about WHY the build is taking so long is nearly impossible.

      I have a similar problem, with a tangential question that I think about from time to time without really having the time to investigate it further, unfortunately.

      I notice sometimes that CMake recompiles files that shouldn't have been affected by the code changes made previously. Like recompiling some independent objects after only slight changes to a .cpp file without any interface changes.

      So I often wonder if CMake is making some files more interdependent than they really are, leading to longer compile times.

    • 1718627440 a day ago

      Can you set CC='time gcc'?
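
      Or wrap the compiler in a script that logs per-file times somewhere greppable. A rough sketch (the log path and source-extension list are made up; CMAKE_C_COMPILER_LAUNCHER is the stock CMake hook for prefixing compiler invocations, the same one ccache uses):

        #!/usr/bin/env python3
        # timed_cc.py: run the real compiler command, log how long it took.
        # Use e.g.: cmake -DCMAKE_C_COMPILER_LAUNCHER=/path/to/timed_cc.py ...
        import subprocess, sys, time

        start = time.monotonic()
        result = subprocess.run(sys.argv[1:])  # argv[1:] is the real compiler command
        elapsed = time.monotonic() - start

        # Log the arguments that look like source files, one line per compile.
        sources = [a for a in sys.argv[1:] if a.endswith((".c", ".cc", ".cpp", ".f90"))]
        with open("/tmp/compile_times.log", "a") as log:
            log.write(f"{elapsed:.2f}\t{' '.join(sources) or sys.argv[1]}\n")

        sys.exit(result.returncode)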

    • wakawaka28 a day ago

      It's not "nearly impossible" but actually built in: https://cmake.org/cmake/help/latest/manual/cmake.1.html#cmdo... For the actual compile time you can easily insert a wrapper script. To be honest I haven't done that in over 4 years, but it has been done by many and it is easy.

      There may be times when CMake itself is the bottleneck, but it's almost certainly an issue with your dependencies and so on. CMake has many features to help you speed up your compile and link times, too. But it would take a series of blog posts to describe how you should go about it.

      • Night_Thastus 6 hours ago

        Does that just profile CMake's time to configure and generate, or the underlying compilation of each file as well? Configuration and generation take only seconds when done from scratch - the build is more like 20 minutes.

        Just trying to add that argument with 3.26.5 on Rocky Linux 9, I get 'Unknown argument --profiling-format=google-trace'.

        Not sure why, as cmake --help clearly states it should be there...

          --profiling-format=<fmt>     = Output data for profiling CMake scripts.
                                         Supported formats: google-trace
          --profiling-output=<file>    = Select an output path for the profiling data
                                         enabled through --profiling-format.
        • tom_ 5 hours ago

          You might need to specify --profiling-output= as well. I get an error from cmake 3.31 if only the format is provided: CMake Error: --profiling-format specified but no --profiling-output!

          Anyway, it looks like it only profiles the configure/generate steps. Not much use on Linux, but on Windows/macOS, perhaps. Due to the lack of any standard package manager, it's a good idea to build every dependency from source on those OSs, and the time can mount up.

          My project is not that large, but it takes 1 minute to configure from scratch on Windows, and 10 minutes (!) on macOS.

          • Night_Thastus 5 hours ago

            I specified both. In any case, yeah, cmake time isn't useful in my case.

            Bizarre that you see 10 minutes on macOS. Something's definitely busted there. It's not even that bad for me on Windows, and that's saying something.

            • tom_ 2 hours ago

              For the record: the cmake profiling data did help a bit. On macOS it seems that the configure stage does 200+ try_compiles, largely due to SDL2, and each one somehow takes ~2.3 seconds. So that's 460 seconds right there. And regarding the total time, it's actually more like 480 seconds. (I must have misremembered! Or perhaps my laptop is measurably faster when its integrated GPU isn't driving 2 external displays.)

              On Windows: 50 try_compiles, about half and half SDL2 and libuv, and each one takes ~1 second.

              I don't think either of these try_compile turnaround times is acceptable (what is it doing?! I bet it's like 50 ms on Linux) but the total figure does now feel a bit less mysterious.

    • pklausler a day ago

      strace might help, if you have it.

  • Mawr a day ago

    Suggestion to the blog author - put:

    > Here it is recording the build of a macOS app:

    > <gif>

    At the top of the page, it should be right under the header.

    You made a thing, so show the thing. You can waffle on about it later. Just show the thing.

    • dhooper a day ago

      Good suggestion. Updated.

      • hdjrudni 20 hours ago

        Good job. This caught my eye. I don't think the visual even needs much of an explanation, I can see what it's doing.

  • entelechy a day ago

    Love it! We did something similar using strace/dtruss back in 2018 with https://buildinfer.loopperfect.com/ and were generating graphs and BUCK files on the back of that.

    Whilst we regrettably never got around to packaging it as a proper product, we found it immensely valuable in our consulting work for pinpointing issues and aiding conversions to BUCK/Bazel. We used graphviz, https://perfetto.dev/ and a couple of other tools to visualise things.

    Recently we circled back to this, but with a broader use case in mind.

    There are some inherent technical challenges with this approach & domain:

    - syscall logs can get huge, especially when saved to disk. Our strace logs would get over 100GB for some projects (LLVM was around ~50GB)

    - some projects also use HTTPS and inter-process communication, and that needs to be properly handled too. (We even had a customer that was retrieving code from a Firebird database via Perl as part of the compilation step!)

    - It's runtime analysis - you might need to repeat the analysis for each configuration.

    • flakes 16 hours ago

      Curious, what were you using for syscall logging? LD_PRELOAD tricks, or eBPF filtering?

  • bgirard a day ago

    That's really cool. Fascinating to think about all the problems that get missed due to poor or missing visualizations like this.

    I did a lot of work to improve the Mozilla build system a decade ago and would have loved this tool back then. I wish they'd said what problem they found.

    • dhooper a day ago

      (OP here) Thanks!

      My call with the Mozilla engineer was cut short, so we didn't have time to go into detail about what he found. I want to look into it myself.

      • bvisness a day ago

        Hello, I am the engineer in question. I am not actually super familiar with the details of the build system, but from what I saw, the main issues were:

        - Lots of constant-time slowness at the beginning and end of the build

        - Dubious parallelism, especially with unified builds

        - Cargo being Cargo

        Overall it mostly looks like a soup of `make` calls with no particular rhyme or reason. It's a far cry from the ninja example the OP showed in his post.

        • epage 11 hours ago

          What `cargo being cargo` problems are you having?

  • bdash 20 hours ago

    I've had success using https://github.com/nico/ninjatracing along with Clang's `-ftime-trace` to visualize the build performance of a C++ project using CMake. https://github.com/aras-p/ClangBuildAnalyzer helps further break down what the compiler is spending its time on.
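
    For a quick look without extra tooling, the per-object JSON that -ftime-trace writes is also easy to poke at directly. A rough sketch (it assumes the Chrome trace-event layout Clang emits, where per-header costs show up as events named "Source" with the path in args.detail and dur in microseconds; the "build" directory is a placeholder):

      #!/usr/bin/env python3
      # Print the most expensive "Source" (header) events across all
      # -ftime-trace JSON files under the build directory.
      import json, pathlib

      costs = []
      for path in pathlib.Path("build").rglob("*.json"):
          try:
              trace = json.loads(path.read_text())
          except ValueError:
              continue  # not JSON, skip
          if not isinstance(trace, dict):
              continue  # e.g. compile_commands.json, skip
          for ev in trace.get("traceEvents", []):
              if ev.get("name") == "Source":
                  costs.append((ev.get("dur", 0), ev.get("args", {}).get("detail", "?")))

      for dur, detail in sorted(costs, reverse=True)[:20]:
          print(f"{dur / 1e6:8.2f}s  {detail}")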

  • tom_ a day ago

    If you use the Visual C++ compiler on Windows, vcperf is worth a look: https://github.com/microsoft/vcperf - it comes with VS2022, or you can build it from GitHub.

    I've used it with projects generated by UBT and CMake. I can't remember if it provides any info that'd let you assess the quality of build parallelism, but it does have some compiler front end info which is pretty straightforward to read. Particularly expensive headers (whether inherently so, or just because they're included a lot) are easy to find.

    • muststopmyths 9 hours ago

      Also Incredibuild. The free version is probably good enough to visualize your build and see any bottlenecks.

  • torarnv 15 hours ago

    Awesome!! Are you planning to open source this? I’ve been working on something similar and would love to join forces!

  • boris a day ago

    > It also has 6 seconds of inactivity before starting any useful work. For comparison, ninja takes 0.4 seconds to start compiling the 2,468,083 line llvm project. Ninja is not a 100% fair comparison to other tools, because it benefits from some “baked in” build logic by the tool that created the ninja file, but I think it’s a reasonable “speed of light” performance benchmark for build systems.

    This is an important observation that is often overlooked. What’s more, changes to the information on which this “baked in” build logic is based are not tracked very precisely.

    How close can we get to this “speed of light” without such “baking in”? I ran a little benchmark (not 100% accurate for various reasons but good enough as a general indication) which builds the same project (Xerces-C++) both with ninja as configured by CMake and with build2, which doesn’t require a separate step and does configuration management as part of the build (and with precise change tracking). Ninja builds this project from scratch in 3.23s while build2 builds it in 3.54s. If we omit some of the steps done by CMake (like generating config.h) by not cleaning the corresponding files, then the time goes down to 3.28s. For reference, the CMake step takes 4.83s. So a fully from-scratch CMake+ninja build actually takes 8s, which is what you would normally pay if you were using this project as a dependency.

    • remexre a day ago

      > What’s more, the changes to the information on which this “baked in” build logic is based is not tracked very precisely.

      kbuild handles this on top of Make by having each target depend on a dummy file that gets updated when e.g. the CFLAGS change. It also treats Make a lot more like Ninja (e.g. avoiding putting the entire build graph into every Make process) -- I'd be interested to see how it compares.

  • saagarjha 18 hours ago

    I've done something similar by running Instruments during the build, which not only tells me which processes are running when but also what they're doing. Unfortunately Instruments gets upset if your build takes a long time, and it doesn't really allow filtering by process tree, but it helped ship several major wins for our builds when I was working on Twitter's iOS codebase. Alas, trying to do this these days will not work because Instruments' "All Processes" tracing has been broken for a while (FB14533747).

  • aanet a day ago

    This is fabulous!!

    Is there a version available for macOS today? I'd love to give it a whirl... for Rust, C++ / Swift and other stuff.

    Thanks!

    • dhooper a day ago

      I'll be sending out the macOS version to another wave of beta users after I fix an outstanding issue. If you sign up (at the bottom of the article) and mention this comment, I can make sure you're in that wave.

      • aanet a day ago

        Thanks. Signed up

    • Night_Thastus a day ago

      It looks like it doesn't have a public release for any OS yet, but has a way to enter for early access.

  • pjmlp 15 hours ago

    Great piece of work.

    Without trying to devalue it, note that VS and Xcode have similar visualization tools.

  • JackYoustra 21 hours ago

    For anyone using Xcode: there's a built-in button to show a visualization for a build (not real-time, AFAIK) too.

  • terabytest 9 hours ago

    It looks really nice. I wonder if it’d be possible to break it down even further by somehow instrumenting the actual processes and including their execution flame graphs as part of the chart. That would expose a ton of extra information about the large gaps of “inactivity”.

  • audiofish a day ago

    Really cool tool, but perhaps not for the original use-case. I often find myself trying to figure out what call tree a large Bash script creates, and this looks like it visualises it well.

    This would have been really useful 6 months ago, when I was trying to figure out what on earth some Jetson tools actually did to build and flash an OS image.

  • tempodox 16 hours ago

    I would be interested, and would even pay for it, but I am not a joiner. And I don't want Google to know any of my email addresses.

  • xuhu a day ago

    Is there a tool that records the timestamp of each executed command during a build, and when you rebuild, it tells you how much time is left instead of "building obj 35 out of 1023" ?

    Or (for cmake or ninja) use a CSV that says how long each object takes to build, and use it to estimate how much is left?
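
    What I have in mind for the CSV version is roughly this sketch (everything here - the CSV layout with object/seconds columns, the file names, the job count - is hypothetical):

      #!/usr/bin/env python3
      # Estimate remaining build time from recorded per-object build times.
      import csv

      def estimate_remaining(times_csv, pending_objects, jobs=8):
          with open(times_csv) as f:
              history = {row["object"]: float(row["seconds"]) for row in csv.DictReader(f)}
          # Fall back to the average for objects we have never timed.
          avg = sum(history.values()) / max(len(history), 1)
          total = sum(history.get(obj, avg) for obj in pending_objects)
          return total / jobs  # crude: assumes perfect parallelism

      print(f"~{estimate_remaining('times.csv', ['foo.o', 'bar.o']):.0f}s left")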

    • dhooper a day ago

      OP here. That's an interesting idea. What The Fork knows all the commands run, and every path they read/write, so I should be able to make it estimate build time just by looking at what files were touched.

  • tiddles a day ago

    Nice, I’ve been looking for something like this for a while.

    I’ve noticed on my huge catkin cmake project that cmake is checking the existence of the same files hundreds of times too. Is there anything that can hook into fork() and provide a cached value after the first invocation?

    • lights0123 a day ago

      My tips for speeding up builds (from making this same project but with ebpf):

      - switch to ninja to avoid that exact issue, since CMake + Make spawns a subprocess for every directory (use the binary from PyPI for jobserver integration)

      - catkin as in ROS? rm /opt/ros/noetic/etc/catkin/profile.d/99.roslisp.sh to remove 2 python spawns per package

    • ethan_smith 15 hours ago

      You could try ccache with the CCACHE_SLOPPINESS=file_stat_matches option, or implement a filesystem-level caching proxy like CachingFS or FUSE-based solutions that intercept and cache those redundant stat() calls.

  • supportengineer a day ago

    Amazing! Great job!

    What limits your tool to compiler/build tools, can it be used for any arbitrary process?

    • dhooper a day ago

      Thank you! Yeah it can be used for any type of program, but I haven't been able to think of anything besides compilation that creates enough processes to be interesting. I'm open to ideas!

      • DiddlyWinks a day ago

        Video encoding and 3-D rendering are a couple that come to mind; I'd think they'd launch quite a few.

        This looks like a really cool tool!

        • shakna 16 hours ago

          Just as a random example in the area: I had a project where I transformed every frame in a video, using a custom binary, before encoding them back into the video.

          Hundreds of thousands of processes were normal.

  • lsuresh 7 hours ago

    Would love to have our team try this out (we have some ridiculous Rust builds).

  • epage a day ago

    How does this compare to `cargo check --timings`?

    It visualizes each crate's build, shows the dependencies between them, shows when the initial compilation is done that unblocks dependents, and soon will have link information.

  • proctorg76 a day ago

    The parallels between tech and manufacturing never cease to amaze. This looks so much like the machine monitoring / execution system we use in the car-parts plant that I want to ask if you've calculated the TEEP and OEE of your build farm.

  • rustystump a day ago

    10/10, this is very cool and the kind of hacking I come here for.

  • bitbasher 19 hours ago

    This is cool, but for Rust you have `cargo build --timings` built in, and it has even more detail.

  • corysama a day ago

    Looks like a general `fork()` visualizer to me. Which is great!

    • ItsHarper 8 hours ago

      I like that the name reflects its broad usefulness

  • time4tea a day ago

    Then I hid it away.

    This is an ad, not a helpful announcement.

  • emigre 11 hours ago

    Really nice and interesting! Thanks!

  • CyberDildonics a day ago

    I love the visualization, I think it's great information and will be very helpful to whoever uses it.

    I would think about a different name, though. Often names are either meant to be funny or are just unique nonsense, but something short and elegantly descriptive (like BuildViz, etc.) can go a long way toward making it seem more legitimate and getting it more widely used.

    • dhooper a day ago

      Thanks CyberDildoNics!

      • 1718627440 8 hours ago

        I'm interested in trying it, but I don't get what that early access signup is about. Where is the email address going to show up? Can I use a temp mail?

        • dhooper 7 hours ago

          I used the term "early access" when I should've used "private beta". The signup is just for me to have an email address to send the private beta when the next update goes out, and I'll follow up for feedback. Nothing sinister is happening.

          • 1718627440 6 hours ago

            > to send the private beta when the next update goes out

            Meaning I can only try the software after signing up, or did I miss an obvious repo link?

            What language is this project written in, and what build system does it use itself?

            I just don't feel comfortable pasting my email address into a Google site. I can use a temp mail, but I will lose access to it in a few minutes/hours, so I don't know if that would annoy you.

            What kind of times do you expect in the form, serial or parallel build time? And what kind of file do you want modified? When I modify main.c, basically nothing gets rebuilt; when I modify the central header file, it's like a total rebuild. Can you clarify that in the form?

            • zorgmonkey an hour ago

              You can make an email address you don't care about with Protonmail. I recommend them because they don't require you to enter an existing email address or a phone number when signing up.

    • hiccuphippo a day ago

      Name checks out.

  • forrestthewoods a day ago

    This is great. I was skeptical from the title but the implementation is very clever. This could be a super super useful tool for the industry.

  • mrlonglong a day ago

    Does it work with cmake?

    • ItsHarper 8 hours ago

      A CMake build is used as an example in the blog post. It's literally just visualizing all spawned processes, so it will work for anything that spawns subprocesses to do the build.

    • MBCook 20 hours ago

      There is an example cmake graph in the article.

  • secondcoming 8 hours ago

    This looks awesome.

    I've used Clang's -ftime-trace option in the past and that's also really good. It's a pity GCC has nothing similar.

  • Cloudef a day ago

    LLVM is taking its sweet time, brew coffee

  • mgaunard a day ago

    The real solution is to eliminate build systems where you have to define your own targets.

    Developers always get it wrong and do it badly.

  • jeffbee a day ago

    This seems like a good place to integrate a Bazel Build Event Protocol stream consumer.

    • MathMonkeyMan a day ago

      I was going to comment that "what the fork" might not work for a client/server build system like Bazel, but now I have something to google instead.

  • kirito1337 a day ago

    That's interesting.

  • brcmthrowaway a day ago

    What about OSes that dont use fork()?

    • dhooper a day ago

      I use whatever the equivalent is on that OS.

  • metalliqaz a day ago

    Isn't `wtf` already a fairly common command?

  • Surac a day ago

    But why? I have to admit it's a fun project.

    • rvrb a day ago

      here, I'll copy the first paragraph of TFA for you:

      > Many software projects take a long time to compile. Sometimes that’s just due to the sheer amount of code, like in the LLVM project. But often a build is slower than it should be for dumb, fixable reasons.

  • klik99 a day ago

    Nice! Leaving a comment to easily find this later; don't have anything to add except that this looks cool.