167 comments

  • Rochus 10 hours ago

    So what we need is essentially a "libc virtualization".

    But Musl is only available on Linux, isn't it? Cosmopolitan (https://github.com/jart/cosmopolitan) goes further and is also available on Mac and Windows, and it uses e.g. SIMD and other performance-related improvements. Unfortunately, one has to cut through the marketing "magic" to find the main engineering value; stripping away the "polyglot" shell-script hacks and the "Actually Portable Executable" container (which are undoubtedly innovative), the core value proposition of Cosmopolitan is a platform-agnostic, statically linked C standard (plus some POSIX) library that performs runtime system call translation, in other words "the Musl we have been waiting for".

    • drowsspa 6 hours ago

      I find it amazing how much the mess that building C/C++ code has been for so many decades seems to have influenced the direction that technology, the economy, and even politics have taken.

      Really, what would the world look like if this problem had been properly solved? Would the centralization and monetization of the Internet have followed the same path? Would Windows be so dominant? Would social media have evolved to the current status? Would we have had a chance to fight against the technofeudalism we're headed for?

      • AshamedCaptain 2 hours ago

        What I find amazing is why people continuously claim glibc is the problem here. I have a commercial software binary from 1996 that _still works_ to this day. It even links with X11, and works under Xwayland.

        The trick? It's not statically linked, but dynamically linked. And it doesn't link with anything other than glibc, X11 ... and bdb.

        At this point I think people just do not know how binary compatibility works at all. Or they refer to a different problem that I am not familiar with.

        • marcosdumay an hour ago

          The problem of modern libc (newer than ~2004, I have no idea what that 1996 one is doing) isn't that old software stops working. It's that you can't compile software on your up-to-date desktop and have it run on your "security updates only" server. Or your clients' "couple of years out of date" computers.

          And that doesn't require using newer functionality.

          • AshamedCaptain an hour ago

            But this is not "backwards compatibility". No one promises the type of "forward compatibility" that you are asking for. Even win32 only does it exceptionally... maybe today you can still build a win10 binary with a win11 toolchain, but you cannot build a win98 binary with it for sure.

            And this has nothing to do with 1996, or 2004 glibc at all. In fact, glibc makes this otherwise impossible task actually possible: you can force linking against older symbols, but that solves only a fraction of the problem of what you're trying to achieve. Statically linking / musl does not solve this either. At some point musl is going to use a newer syscall, or some other newer feature, and you're broken again.
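
            For reference, a minimal sketch of that symbol-pinning trick using GCC's .symver directive. It assumes x86-64 (where the pre-2.14 memcpy lives in GLIBC_2.2.5); the right version string depends on your architecture and target glibc, and of course it only covers the symbols you pin:

                /* bind memcpy to the old versioned symbol so the binary
                   also loads on systems with a pre-2.14 glibc (x86-64) */
                __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

                #include <string.h>

                int main(void) {
                    char dst[6];
                    memcpy(dst, "hello", sizeof dst); /* resolves to memcpy@GLIBC_2.2.5 */
                    return dst[0] == 'h' ? 0 : 1;
                }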

            Also, what is so hard about building your software on your "security updates only" server? Or a chroot of it at least? As I was saying below, I have a Debian 2006-ish chroot for this purpose....

            • marcosdumay 30 minutes ago

              Windows dlls are forward compatible in that sense. If you use the Linux kernel directly, it is forward compatible in that sense. And, of course, there is no issue at all with statically linked code.

              The problem is with the Linux dynamic linking, and the idea that you must not statically link the glibc code. And you can circumvent it by freezing your glibc abstraction interface, so that if you need to add another function, you do so by making another library entirely. But I don't know if musl does that.

              • AshamedCaptain 21 minutes ago

                > Windows dlls are forward compatible in that sense.

                If you want to go to such level, ELF is also forward compatible in that sense.

                This is completely irrelevant, because what the developer is going to see is that the binaries he builds on XP SP3 no longer work on XP SP2 because of a link error: the _statically linked_ runtime is going to call symbols that are not in XP SP2 DLLs.

                > If you use the Linux kernel directly, it is forward compatible in that sense.

                Or not, because there will be a note in the ELF headers with the minimum kernel version required, which is going to be set to a recent version even if you do not use any newer feature (unless you play with the toolchain). (PE has a similar field too, leading to the "not a valid win32 executable" messages.)

                > And, of course, there is no issue at all with statically linked code.

                I would say statically linked code is precisely the root of all these problems.

        • markus92 2 hours ago

          We (small HPC system) just upgraded our OS from RHEL 7 to RHEL 9. Most user apps are dynamically linked, too.

          You wouldn't believe how many old binaries broke. Lots of ABI bumps: libpng, ncurses, heck, even stuff like readline and libtiff all changed just enough for linker errors to occur.

          Ironically, all the statically compiled stuff was fine. Some small things that, like you mention, only link to glibc and X11 were fine too. Funnily enough, grabbing some old .so files from the RHEL 7 install and dumping them into LD_LIBRARY_PATH also worked better than expected.

          But yeah, now that I'm writing this out, glibc was never the problem in terms of forwards compatibility. Now running stuff compiled on modern Ubuntu or RHEL 10 on the older OS, now that's a whole different story...

          • AshamedCaptain 2 hours ago

            > Funnily enough grabbing some old .so files from the RHEL 7 install and dumping them into LD_LIBRARY_PATH also worked better than expected.

            Why "better than expected"? I can run the entire userspace from Debian Etch on a kernel built two days ago... some kernel settings need to be changed (because of the old glibc! but it's not glibc's fault: it's the kernel who broke things), but it works.

            > Now running stuff compiled on modern Ubuntu or RHEL 10 on the older OS, now that's a whole different story...

            But this is a different problem, and no one makes promises here (not the kernel, not musl). So all the talk of statically linking with musl to get that type of compatibility is bullshit (at some point, you're going to hit a syscall/instruction/whatever that the newer musl uses but the older kernel/hardware does not support).

      • joshmarinacci 5 hours ago

        How does this technical issue affect the economy and politics? In what way would the world be different just because we used a better linker?

        • nacozarina 2 hours ago

          existential crisis: so hot right now

    • Conscat 3 hours ago

      If the APE concept isn't appealing to you, you may be interested in the work on LLVM libC. My friend recently delivered an under-appreciated lecture on the vision:

      https://youtu.be/HtCMCL13Grg

      tl;dw: Google recognizes the need for a statically linked, modular, latency-sensitive, portable POSIX runtime, and they are building it.

    • sidewndr46 9 hours ago

      At the rate things are going we'll need a container virtualization layer as well, a docker for docker if you know what I mean

      • binsquare 7 hours ago

        I'm building in this space; I take a docker-inside-a-microVM (vm-lite) approach.

        https://github.com/smol-machines/smolvm

        • cwillu 6 hours ago

          And the cycle continues

          • sidewndr46 an hour ago

            I wonder if inside the docker container we can run a sandboxed WASM runtime?

          • binsquare 5 hours ago

            It's just fun ;)

      • miduil 8 hours ago

        Do you mean something like gVisor?

      • rafale 8 hours ago

        "All problems in computer science can be solved by another level of indirection"

        • johndough 7 hours ago

          "... except for the problem of too many levels of indirection."

        • Rochus 7 hours ago

          ad infinitum ;-)

    • VikingCoder 8 hours ago

      I desperately want to write C/C++ code that has a web server and can talk websockets, and that I can compile with Cosmopolitan.

      I don't want Lua. Using Lua is crazy clever, but it's not what I want.

      I should just vibe code the dang thing.

  • amelius 13 hours ago

    Is there a tool that takes an executable, collects all the required .so files and produces either a static executable, or a package that runs everywhere?

    • TheDong 12 hours ago

      There are things like this.

      The things I know of and can think of off the top of my head are:

      1. appimage https://appimage.org/

      2. nix-bundle https://github.com/nix-community/nix-bundle

      3. guix via guix pack

      4. A small collection of random small projects, which hardly anyone uses, that do this for docker (e.g. https://github.com/NilsIrl/dockerc)

      5. A docker image (a package that runs everywhere, assuming a docker runtime is available)

      6. https://flatpak.org/

      7. https://en.wikipedia.org/wiki/Snap_(software)

      AppImage is the closest to what you want I think.

      • a022311 9 hours ago

        It should be noted that AppImages tend to be noticeably slower at runtime than other packaging methods, and also very big, since typical systems already include most of the libraries they bundle. They're good as a "compile once, run everywhere" approach, but you're really accommodating edge cases here.

        A "works in most cases" build should also be available for that that it would benefit. And if you can, why not provide specialized packages for the edge cases?

        Of course, don't take my advice as-is, you should always thoroughly benchmark your software on real systems and choose the tradeoffs you're willing to make.

        • saghm 6 hours ago

          IMO one of the best features of AppImage is that it makes it easy to extract without needing external tools. It's usually pretty easy for me to look at an AppImage and write a PKGBUILD to make a native Arch package; the format already encodes what things need to be installed where, so it's only a question of whether the libraries it contains are the same versions of what I can pull in as dependencies (either from the main repos or the AUR). If they are, my job is basically already done, and if they aren't, I can either choose to include them in the package itself assuming I don't have anything conflicting (which is fine for local use even if it's not something that's usually tolerated when publishing a package) or stick with using the AppImage.

          • a022311 5 hours ago

            I agree. I've seen quite a few AUR packages built that way and I'm using a few myself too. The end user shouldn't be expected to do this though! :D

        • badsectoracula 4 hours ago

          > It should be noted that AppImages tend to be noticeably slower at runtime than other packaging methods

          'Noticeably slower' at what? I've run, e.g., xemu (the original Xbox emulator) both manually built from source and via AppImage-based releases, and I never noticed any difference in performance. Same with other AppImage-based apps I've been using.

          Do you refer to launching the app or something like that? TBH I cannot think of any other way an AppImage would be "slower".

          Also, from my experience, applications released as AppImages have been the most consistent by far at "just working" on my distro.

      • gilli 11 hours ago

        I wish AppImage was slightly more user friendly and did not require the user to specifically make it executable.

        • VadimPR 11 hours ago

          We fix this issue by distributing ours in a tar file with the executable bit set. Linux novices can just double-click on the tar to extract it and double-click again on the actual AppImage.

          Been doing it this way for years now, so it's well battle tested.

          • account42 10 hours ago

            That kind of defeats the point of an AppImage though - you could just as well have a tar archive with a classic collection of binaries + optional launcher script.

      • amelius 12 hours ago

        AppImage looks like what I need, thanks.

        I wonder though, if I package say a .so file from nVidia, is that allowed by the license?

        • ValdikSS 8 hours ago

          AppImage is not what you need. It's just an executable wrapper for the archive. To make the software cross-distro, you need to compile it manually on an old distro with old glibc, make sure all the dependencies are there, and so on.

          https://docs.appimage.org/reference/best-practices.html#bina...

          There are several automation tools to make AppImages, but they won't magically allow you to compile on the latest Fedora and expect your executable to work on Debian Stable. It still requires quite a lot of manual labor.

          • ndiddy 6 hours ago

            Yeah, a lot of AppImage developers make assumptions about what their users' systems have as well (i.e. "if I depend on something that is installed by default on Ubuntu desktop then it's fine to leave out"). For example, a while ago I installed an AppImage GUI program on a headless server that I wanted to use via X11 forwarding. I ended up having to manually install a bunch of random packages (GTK stuff, fonts, etc.) to get it to run. I see AppImage as basically the same as distributing Linux binaries via .tar.gz archives, except everything's in a single file.

        • saidinesh5 an hour ago

          Typically appimage packaging excludes the .so files that are expected to be provided by the base distro.

          Any .so from Nvidia is supposed to be one of those things, because it also depends on the drivers etc. provided by Nvidia.

          Also, on a side note, a lot of .so files also depend on other files in /usr/share, /etc, etc.

          I recommend using an AppImage only for the happy-path application frameworks they support (e.g. Qt, Electron). Otherwise you'd have to manually verify that all the libraries you're bundling will work on your users' distros.

        • ValdikSS 8 hours ago

          >I wonder though, if I package say a .so file from nVidia, is that allowed by the license?

          It won't work: drivers usually require the exact (or more-or-less the same) kernel module version. That's why you need to explicitly exclude graphics libraries from being packaged into an AppImage. This also makes a glibc build non-runnable on musl.

          https://github.com/Zaraka/pkg2appimage/blob/master/excludeli...

        • mdavid626 11 hours ago

          Don't forget - AppImage won't work if you package something built with glibc but run it on musl/uclibc.

        • direwolf20 11 hours ago

          No, that's a copyright violation, and it won't run on AMD or Intel GPUs, or kernels with a different Nvidia driver version.

          • amelius 10 hours ago

            But this ruins the entire idea of packaging software in a self-contained way, at least for a large class of programs.

            It makes me wonder, does the OS still take its job of hardware abstraction seriously these days?

            • holowoodman 10 hours ago

              The OS does. Nvidia doesn't.

              • direwolf20 10 hours ago

                Does Nvidia not support OpenGL?

                • holowoodman 9 hours ago

                  Not really. Nvidia's OpenGL is incompatible with all existing OS OpenGL interfaces, so you need to ship a separate libGL.so if you want to run on Nvidia. In some cases you even need separate binaries, because if you dynamically link against Nvidia's libGL.so, it won't run with any other libGL.so. Sometimes also vice versa.

                  • direwolf20 9 hours ago

                    Does AMD use a statically linked OpenGL?

                    • holowoodman 8 hours ago

                      AMD uses the dynamically linked system libGL.so, usually Mesa.

                      • direwolf20 6 hours ago

                        So you still need dynamic linking to load the right driver for your graphics card.

                        • holowoodman 6 hours ago

                          Most stuff like that uses some kind of "ICD" mechanism that does dlopen on the vendor-specific parts of the library. AFAIK neither OpenGL nor Vulkan nor OpenCL is usable without at least dlopen, if not full dynamic linking.

            • direwolf20 10 hours ago

              It does, and one way it does that is by dynamically loading the right driver code for your hardware.

            • maccard 9 hours ago

              That's a licensing problem, not a packaging problem. A DLL is a DLL - the only thing that changes is whether you're allowed to redistribute it.

        • c0balt 12 hours ago

          Depends on the license and the specific piece of software. Redistribution of commercial software may be restricted or require explicit approval.

          You generally still have to abide by license obligations for OSS too, e.g. the GPL.

          To be specific for the example, Nvidia has historically been quite restrictive here (redistribution only on approval). Firmware has only recently been opened up a bit, and drivers continue to be an issue IIRC.

    • Conscat 3 hours ago

      Someone already mentioned AppImage, but I'd like to draw attention to this alternate implementation that executes as a POSIX shell script, making it possible to dynamically dispatch to different programs on different architectures, e.g. a fat binary for ARM and x64.

      https://github.com/mgord9518/shappimage

      • sekh60 3 hours ago

        So autotools but for execution instead of compilation?

    • lizknope 10 hours ago

      15-30 years ago I managed a lot of commercial chip design EDA software that ran on Solaris and Linux. We had wrapper shell scripts for so many programs that used LD_LIBRARY_PATH and LD_PRELOAD to point to the specific versions of various libraries that each program needed. I used "ldd" which prints out the shared libraries a program uses.

    • alas44 10 hours ago

      There is this project, "Actually Portable Executable" / Cosmopolitan libc (https://github.com/jart/cosmopolitan), that allows a "compile once, execute anywhere" style of C++ binary.

    • fieu 11 hours ago

      Ermine: https://www.magicermine.com/

      It works surprisingly well but their pricing is hidden and last time I contacted them as a student it was upwards of $350/year

    • saghm 6 hours ago

      I don't think it's as simple as "run this one thing to package it", so if the process rather than the format is what you're looking for, this won't work, but that sounds a lot like how AppImages work from the user perspective. My understanding is that an AppImage is basically a static binary paired with a small filesystem image containing the "root" for the application (including the expected libraries under /usr/lib or wherever they belong). I don't like everything about the format, but overall it feels a lot less prescriptive than other "universal" packages like Flatpak or Snap, and the fact that you can easily extract it and pick out the pieces you want to repackage without needing any external tools (there are built-in flags on the binary like --appimage-extract) helps a lot.

    • jcalvinowens 6 hours ago

        # collect every shared library ldd resolves for the executable
        mkdir chroot
        cd chroot
        for lib in $(ldd ${executable} | grep -oE '/\S+'); do
          tgt="$(dirname ${lib})"
          mkdir -p .${tgt}
          cp ${lib} .${tgt}
        done
        mkdir -p .$(dirname ${executable})
        cp ${executable} .${executable}
        tar czf ../chroot-run-anywhere.tgz .  # czf so the .tgz is actually gzipped

      • saidinesh5 an hour ago

        You're supposed to do this recursively for all the libs, no?

        E.g. your app might just depend on libqt5gui.so, but that libqt5gui.so might depend on some libxml etc...

        Not to mention all the files from /usr/share etc... that your application might indirectly depend on.

    • mdavid626 12 hours ago

      You can "package" all the .so files you need into one file (like a zip file); there are many tools which do this.

      But you can't take .so files and make one "static" binary out of them.

      • geocar 9 hours ago

        > But you can't take .so files and make one "static" binary out of them.

        Yes you can!

        This is more-or-less what unexec does

        - https://news.ycombinator.com/item?id=21394916

        For some reason nobody seems to like this sorcery, probably because it combines the worst of all worlds.

        But there's almost[1] nothing special about what the dynamic linker does to get those .so files into memory that couldn't be done ahead of time by arranging them in one big file!

        [1]: ASLR would be one of those things...

        • mdavid626 6 hours ago

          What if the library you use calls dlopen later? That’ll fail.

          There is no universal, working way to do it. Only some hacks which work in some special cases.

          • geocar 3 hours ago

            > What if the library you use calls dlopen later? That’ll fail.

            Nonsense. xemacs could absolutely call dlopen.

            > There is no universal, working way to do it. Only some hacks which work in some special cases.

            So you say, but I remember not too long ago you weren't even aware it was possible, and you clearly didn't check one of the most prominent users of this technique, so maybe you should also explain why I or anyone else should give a fuck about what you think is a "hack"?

      • fc417fc802 11 hours ago

        Well not a static binary in the sense that's commonly meant when speaking about static linking. But you can pack .so files into the executable as binary data and then dlopen the relevant memory ranges.
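
        For what it's worth, a rough sketch of how that can look on Linux (assuming you've already embedded the library bytes with objcopy or similar); it leans on memfd_create plus the /proc/self/fd path trick, since plain dlopen only takes a filename:

            /* dlopen an ELF image that only exists in memory (Linux, glibc >= 2.27) */
            #define _GNU_SOURCE
            #include <dlfcn.h>
            #include <stdio.h>
            #include <sys/mman.h>
            #include <unistd.h>

            void *dlopen_from_memory(const void *so_bytes, size_t so_len) {
                int fd = memfd_create("embedded-so", MFD_CLOEXEC);
                if (fd < 0)
                    return NULL;
                if (write(fd, so_bytes, so_len) != (ssize_t)so_len) {
                    close(fd);
                    return NULL;
                }
                char path[64];
                snprintf(path, sizeof path, "/proc/self/fd/%d", fd);
                void *handle = dlopen(path, RTLD_NOW); /* loader maps it via procfs */
                close(fd);                             /* mappings survive the close */
                return handle;
            }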

        • mdavid626 10 hours ago

          Yes, that's true.

          But I'm always a bit sceptical about such approaches. They are not universal. You still need glibc/musl to be the same on the target system. Also, if you compile against a new glibc version but try to run on an old glibc version, it might not work.

          These are just strange and confusing from the end users' perspective.

          • toast0 6 hours ago

            > But I'm always a bit sceptical about such approaches. They are not universal. You still need glibc/musl to be the same on the target system. Also, if you compile againt new glibc version, but try to run on old glibc version, it might not work.

            Why would you include most of your dynamic libraries but not your libc?

            You could still run into problems if you (or your libraries) want to use syscalls that weren't available on older kernels or whatever.

            • mdavid626 6 hours ago

              You can include it, but

              - either you use chroot, proot or similar to make the /lib path contain your executable's loader

              - or you hardcode a different loader path into your executable

              Both are difficult for an end user.

              • toast0 5 hours ago

                This isn't that hard (that's not to say it's easy; it is tricky). Your executable should be a statically linked stub loader with an awful lot of data; the stub loader dynamically links your real executable (and libraries, including libc) from that data and runs it.

    • ryan-c 8 hours ago

      (not an endorsement, I do not use it, but I know of it)

      https://www.magicermine.com/

    • secure 10 hours ago

      https://github.com/gokrazy/freeze is a minimal take on this

    • formerly_proven 12 hours ago

      I don't think you can link shared objects into a static binary, because you'd have to find all the places where the code reads the PLT/GOT (which can be arbitrarily mangled by the optimizer) and turn them back into relocations for the linker to then resolve.

      You can change the rpath though, which is sort of like an LD_LIBRARY_PATH baked into the object; that makes it relatively easy to bundle everything but libc with your binary.

      edit: Mild correction, there is this: https://sourceforge.net/projects/statifier/ But the way this works is that it has the dynamic linker load everything (without ASLR / in a compact layout, presumably) and then dumps an image of the process. Everything else is just increasingly fancy ways of copying shared objects around and making ld.so prefer the bundled libraries.

    • aa-jv 12 hours ago

      AppImage comes close to fulfilling this need:

      https://appimage.github.io/appimagetool/

      Myself, I've committed to using Lua for all my cross-platform development needs, and in that regard I find luastatic very, very useful ..

  • surajrmal 8 hours ago

    Binary compatibility extends beyond the code that runs in your process. These days a lot of functionality happens by way of IPC, with a variety of wire protocols depending on the interface. For instance there is dbus, Wayland protocols, varlink, etc. Both the wire protocol and the APIs built on top need to retain backwards compatibility to ensure binary compatibility. Otherwise you're not going to be able to run on various different Linux-based platforms arbitrarily. And unlike the kernel, these userspace surfaces do not take backwards compatibility nearly as seriously. It's also much more difficult to target a subset of these APIs that is available on systems that are only 5 years old. I would argue API endpoints on the web have less risk here (although those break all the time as well).

  • dunder_cat 6 hours ago

    Related discussion (the actual project is mentioned in the issue): "Detour: Dynamic linking on Linux without Libc" https://news.ycombinator.com/item?id=45740241

  • ValdikSS 8 hours ago

    `dlopen`'ing system libraries is an "easy" hack to try to maintain compatibility with wide variety of libraries/ABIs. It's barely used (I know only of SDL, Small HTTP Server, and now Godot).

    Without dlopen (with regular dynamic linking), it's much harder to compile for older distros, and I doubt you can easily implement glibc/musl cross-compatibility at all in general.
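
    To illustrate the pattern (the library and symbol below are just examples, not what any of those projects literally do): instead of linking libX11 at build time, the binary resolves it at runtime and degrades gracefully if it's missing:

        /* resolve a system library at runtime instead of linking it */
        #include <dlfcn.h>
        #include <stdio.h>

        int main(void) {
            void *x11 = dlopen("libX11.so.6", RTLD_NOW | RTLD_LOCAL);
            if (!x11) {
                fprintf(stderr, "X11 not available (%s), falling back\n", dlerror());
                return 0;               /* run headless instead of failing to start */
            }
            if (dlsym(x11, "XOpenDisplay"))
                printf("X11 found, will use it via function pointers\n");
            dlclose(x11);
            return 0;
        }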

    Take a look at what Valve does in the Steam Runtime:

        - https://gitlab.steamos.cloud/steamrt/steam-runtime-tools/-/blob/main/docs/pressure-vessel.md
        - https://gitlab.steamos.cloud/steamrt/steam-runtime-tools/-/blob/main/subprojects/libcapsule/doc/Capsules.txt

  • athrowaway3z 12 hours ago

    I'd never heard of detour. That's a pretty cool hack.

  • mgaunard 11 hours ago

    It's funny how people insist on wanting to link everything statically when shared libraries were specifically designed to be a better alternative.

    Even worse are containers, which have the disadvantages of both.

    • arghwhat 11 hours ago

      Dynamic libraries have been frowned upon since their inception as being a terrible solution to a non-existent problem, generally amplifying binary sizes and harming performance. Some fun quotes of quite notable characters on the matter here: https://harmful.cat-v.org/software/dynamic-linking/

      In practice, a statically linked system is often smaller than a meticulously dynamically linked one - while there are many copies of common routines, programs only contain tightly packed, specifically optimized and sometimes inlined versions of the symbols they use. The space and performance gain per program is quite significant.

      Modern apps and containers are another issue entirely - linking doesn't help if your issue is gigabytes of graphical assets or using a container base image that includes the entire world.

      • holowoodman 9 hours ago

        Statically linked binaries are a huge security problem, as are containers, for the same reason. Vendors are too slow to patch.

        When dynamically linking against shared OS libraries, updates are far quicker and easier.

        And as for the size advantage, just look at a typical Golang or Haskell program. Statically linked, two-digit megabytes, larger than my libc...

        • adrian_b 7 hours ago

          This is the theory, but not the practice.

          In decades of using and managing many kinds of computers I have seen only a handful of dynamic libraries for which security updates have been useful, e.g. OpenSSL.

          On the other hand, I have seen countless problems caused by updates of dynamic libraries that have broken various applications, not only on Linux, but even on Windows and even for Microsoft products, such as Visual Studio.

          I have also seen a lot of space and time wasted by the necessity of having installed in the same system, by using various hacks, a great number of versions of the same dynamic library, in order to satisfy the conflicting requirements of various applications. I have also seen systems bricked by a faulty update of glibc, if they did not have any statically-linked rescue programs.

          On Windows such problems are much less frequent only because a great number of applications bundle with them, in their own directory, the desired versions of various dynamic libraries, and Windows is happy to load those libraries. On UNIX derivatives, this usually does not work, as the dynamic linker searches only standard places for libraries.

          Therefore, in my opinion static linking should always be the default, especially for something like the standard C library. Dynamic linking shall be reserved for some very special libraries, where there are strong arguments that this should be beneficial, i.e. that there really exists a need to upgrade the library without upgrading the main executable.

          Golang is probably an anomaly. C-based programs are rarely much bigger when statically linked than when dynamically linked. "printf" alone is typically implemented in such a way that it pulls a lot of code into any statically linked program, so the C standard libraries intended for embedded computers typically have some special lightweight "printf" versions to avoid this overhead.

          • toast0 6 hours ago

            > In decades of using and managing many kinds of computers I have seen only a handful of dynamic libraries for whom security updates have been useful, e.g. OpenSSL.

            > On the other hands, I have seen countless problems caused by updates of dynamic libraries that have broken various applications,

            OpenSSL is a good example of both useful and problematic updates. The number of updates that fixed a critical security problem but needed application changes to work was pretty high.

        • ranger_danger 3 minutes ago

          Would be nice if there was a binary format where you could easily swap out static objects for updated ones

        • zbentley 9 hours ago

          I've heard this many times, and while there might be data out there in support of it, I've never seen that, and my anecdotal experience is more complicated.

          In the most security-forward roles I've worked in, the vast, vast majority of vulnerabilities identified in static binaries, Docker images, Flatpaks, Snaps, and VM appliance images fell into these categories:

          1. The vendor of a given piece of software based their container image on an outdated version of e.g. Debian, and the vulnerabilities were coming from that, not the software I cared about. This seems like it supports your point, but consider: the overwhelming majority of these required a distro upgrade, rather than a point dependency upgrade of e.g. libcurl or whatnot, to patch the vulnerabilities. Countless times, I took a normal long-lived Debian test VM and tried to upgrade it to the patched version and then install whatever piece of software I was running in a docker image, and had the upgrade fail in some way (everything from the less-common "doesn't boot" to the very-common "software I wanted didn't have a distribution on its website for the very latest Debian yet, so I was back to hand-building it with all of the dependencies and accumulated cruft that entails").

          2. Vulnerabilities that were unpatched or barely patched upstream (as in: a patch had merged but hadn't been baked into released artifacts yet--this applied equally to vulns in things I used directly, and vulns in their underlying OSes).

          3. Massive quantities of vulnerabilities reported in "static" languages' standard libraries. Golang is particularly bad here, both because they habitually over-weight the severity of their CVEs and because most of the stdlib is packaged with each Golang binary (at least as far as SBOM scanners are concerned).

          That puts me somewhat between a rock and a hard place. A dynamic-link-everything world with e.g. a "libgolang" versioned separately from apps would address the 3rd item in that list, but would make the 1st item worse. "Updates are far quicker and easier" is something of a fantasy in the realm of mainstream Linux distros (or copies of the userlands of those distros packaged into container images); it's certainly easier to mechanically perform an update of dependency components of a distro, but whether or not it actually works is another question.

          And I'm not coming at this from a pro-container-all-the-things background. I was a Linux sysadmin long before all this stuff got popular, and it used to be a little easier to do patch cycles and point updates before container/immutable-image-of-userland systems established the convention of depending on extremely specific characteristics of a specific revision of a distro. But it was never truly easy, and isn't easy today.

      • rlpb 8 hours ago

        Imagine a fully statically linked version of Debian. What happens when there’s a security update in a commonly used library? Am I supposed to redownload a rebuild of basically the entire distro every time this happens, or else what?

        • electroly 8 hours ago

          Steel-manning the idea, perhaps they would ship object files (.o/.a) and the apt-get equivalent would link the system? I believe this arrangement was common in the days before dynamic linking. You don't have to redownload everything, but you do have to relink everything.

          • wpollock 3 hours ago

            > Steel-manning the idea, perhaps they would ship object files (.o/.a) and the apt-get equivalent would link the system? I believe this arrangement was common in the days before dynamic linking. You don't have to redownload everything, but you do have to relink everything.

            This was indeed common for Unix. The only way to tune the system (or even change the timezone) was to edit the very few source files and run make, which compiled those files and then linked them into a new binary.

            Linking-only is (or was) much faster than recompiling.

          • holowoodman 5 hours ago

            But if I have to relink everything, I need all the makefiles, linker scripts and source code structure. I might as well compile it outright. On the other hand, I might as well just link it whenever I run it, like, dynamically ;)

          • rlpb 4 hours ago

            And then how would this be any different in practice from dynamic linking?

        • throwaway2046 6 hours ago

          Libraries already break their ABI so often that continuously rebuilding/relinking everything is inevitable.

          • rlpb 4 hours ago

            Debian manages perfectly well without.

            • saidinesh5 39 minutes ago

              Only because of the enormous effort put in by Debian package maintainers and its infrastructure.

              If you're an indie developer wanting your application to run on various Debian-based distros but the Debian maintainers won't package your application, that's when you'd see why it's called DLL hell, how horribly fragmented Linux packaging is, and why even Steam ships its whole runtime.

              • rlpb 32 minutes ago

                Everything inside Debian is fine. That's most of the ecosystem, apart from the very new stuff that isn't mature enough yet. Usually the reason something notable stays out of Debian long term is that it has such bad dependency hygiene that it cannot easily be brought up to standard.

        • yxhuvud an hour ago

          Honestly, that doesn't sound too bad if you have decent bandwidth.

        • paddim8 5 hours ago

          Then you update those dependencies. Not very difficult with a package manager. And most dependencies aren't used by a ton of programs in a single system anyway. It is not a big deal in practice.

          • rlpb 4 hours ago

            This would only work if you use dynamic linking. Updating dependencies in a statically built distribution would have no effect.

    • fc417fc802 11 hours ago

      Dynamic linking exists to make a specific set of tradeoffs. Neither better nor worse than static linking in the general sense.

    • abigail95 7 hours ago

      Why would I want to be constantly calling into code I have no control over, that may or may not exist, that may or may not have been tampered with?

      I lose control of the execution state. I have to follow the calling conventions which let my flags get clobbered.

      To forgo all of the above, including link-time optimization, for the benefit of what exactly?

      Imagine developing a C program where every object file produced during compilation was dynamically linked. It's obvious why that is a stupid idea - why does it become less stupid when dealing with a separate library?

      • uecker 3 hours ago

        You call into dynamic libraries so that you do not need to recompile and distribute new binaries to all your users whenever there is a security issue or other critical fix in any of the dependencies.

        • sebastos 38 minutes ago

          But if I get to Bring My Own Dependencies, then I know the exact versions of all my dependencies. That makes testing and development faster because I don't have to expend effort testing across many different possible platforms. And if development is just generally easier, then maybe it's easier to react expediently to security notices and release updates as necessary...

    • vv_ 11 hours ago

      It's easier to distribute software fully self-contained, if you ignore the pain of statically linking everything together :)

    • flohofwoe 10 hours ago

      Dynamic libraries make a lot of sense as an operating system interface when they guarantee a stable API and ABI (see Windows for how to do that) - the other scenario where DLLs make sense is plugin systems. But that's pretty much it; for anything else static linking is superior because it doesn't present an optimization barrier (especially for dead code elimination).

      No idea why glibc can't provide API+ABI stability, but on Linux it always comes down to glibc-related "DLL hell" problems (e.g. not being able to run an executable that was created on a more recent Linux system on an older Linux system even when the program doesn't access any new glibc entry points - the usually advised solution is to link against an older glibc version, but that's also not trivial, unless you use the Zig toolchain).

      TL;DR: It's not static vs dynamic linking, just glibc being an exceptionally shitty solution as an operating system interface.

      • mgaunard 42 minutes ago

        Static linking is also an optimization barrier.

        LTO is really a different thing, where you recompile when you link. You could technically do that as part of the dynamic linker too, but I don't think anyone is doing it.

        There is a surprisingly high number of software development houses that don't (or can't) use LTO, either because of secrecy, scalability issues or simply not having good enough build processes to ensure they don't breach the ODR.

      • sebastos 30 minutes ago

        Genuine question - are there examples (research? old systems?) of the interface to the operating system being exposed as something other than a library? How might that work exactly?

      • AshamedCaptain 2 hours ago

        > (e.g. not being able to run an executable that was created on a more recent Linux system on an older Linux system even when the program doesn't access any new glibc entry points - the usually adviced solution is to link with an older glibc version, but that's also not trivial, unless you use the Zig toolchain).

        In the era of containers, I do not understand why this is "not trivial". I could do it even with a chroot.

      • uecker 3 hours ago

        I do not think it is difficult to compile against older versions by using a container.

    • RicoElectrico 10 hours ago

      That would be a good point if said shared libraries did not break binary backwards compatibility and behaved more like winapi.

    • gethly 7 hours ago

      Isn't the sole reason why Linux sucks (sucked?) for games and other software exactly that there is a gazillion different libraries with different versions, so you have zero assumptions about the state of the OS, which makes making software for it such a pain?

      • paddim8 5 hours ago

        Yes. Premature optimisation when it comes to dynamic linking is the reason why the year of the Linux desktop is far away, in my opinion.

        • gethly 2 hours ago

          I just remember Jonathan Blow mentioning this in one of his streams.

  • pilif 9 hours ago

    Isn't this asking for the exact trouble musl wanted to spare you from by disabling dlopen()?

  • tuhgdetzhh 5 hours ago

    The best binary compatibility you can get on Linux is via Wine.

    Source: https://blog.hiler.eu/win32-the-only-stable-abi/

  • aspbee555 5 hours ago

    I managed to get this combo going not too long ago with my musl Rust app, but I found that even after it all compiled and the lib loaded, it did not function properly because the library I loaded still depended on libc functions. Even with everything compiled into a huge monolithic musl binary, it couldn't find something graphics-related.

    I eventually decided to keep the tiny musl app and make a companion app in a secondary process as needed (since the entire point of me compiling with musl was cross-platform Linux compatibility/stability).

  • leni536 7 hours ago

    Do I get this right that this effectively dlopens glibc (indirectly) into an executable that is statically linked to musl? How can the two runtimes coexist? What about malloc/free? AFAIK both libcs' allocators take ownership of brk, and that can't be good. What about malloc/free across the dynamic library interface? There are certainly libraries that hand out allocated objects and expect the user to free them, but that's probably uncommon in graphics.
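
    For the last point, the convention that usually sidesteps it (sketched below with hypothetical libfoo_* names) is that whichever side allocates also exposes the matching free, so a pointer never crosses from one libc's heap into the other's free():

        #include <stdlib.h>
        #include <string.h>

        /* imagine these two live in the dlopen'ed, glibc-linked library */
        char *libfoo_strdup(const char *s) {
            char *p = malloc(strlen(s) + 1); /* allocated by the library's libc */
            if (p)
                strcpy(p, s);
            return p;
        }

        void libfoo_free(void *p) {
            free(p);                         /* freed by the same libc that malloc'd it */
        }

        /* the musl-linked host calls libfoo_free(), never its own free() */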

    • Splizard 3 hours ago

      You have to tell musl to use mmap instead of brk. You're right that it doesn't work in all cases but as long as you switch TLS on calls (and callbacks), at least with a project the size of Godot, you can approach a workable solution.

      • leni536 2 hours ago

        > You have to tell musl to use mmap instead of brk.

        How do I do that? Is there a documented configuration of musl's allocator?

  • Meneth 12 hours ago

    That seems mostly useful for proprietary programs. I don't like it.

    • Faelian2 3 hours ago

      I wrote a small open-source tool in Rust, and I too encountered that kind of issue when I started to build a .deb.

      Honestly, it was the kind of bug that is not fun to fix, because it's really about dependencies and not some fun code issue. There is no point in making our lives harder with this just to gatekeep proprietary software from running on our platform.

    • juliangmp 11 hours ago

      Why? FOSS software also benefits from less dependency hell.

      • breezykoi 11 hours ago

        For distro-packaged FOSS, binary compatibility isn't really a problem. Distributions like Debian already resolve dependencies by building from source and keeping a coherent set of libraries. Security fixes and updates propagate naturally.

        Binary compatibility solutions mostly target cases where rebuilding isn't possible, typically closed source software. Freezing and bundling software dependencies ultimately creates dependency hell rather than avoiding it.

        • koffiezet 10 hours ago

          It however shifts a lot of the complexity of building the application to the distro maintainer, or a software maintainer has to prioritize for which distribution they choose to build and maintain a package, because supporting them all is a nightmare and an ever shifting moving target. And it's not just a distribution problem, it's even a distribution version/release problem.

          Look at the hoops you sometimes have to jump through or hacks you have to apply to make something work on Nix, just because there is no standardization or build processes assume library locations etc. And if you then raise an issue with the software maintainer - the response is often "but we don't support Nix". And if they're not Nix/Nixos users, can you blame them?

          If you've ever had to compile a modern/recent software package for an old distro (I've had to do this for old RH distros on servers which due to regulations could not be upgraded) - you're in a world of pain. And both distro and software maintainers will say "not my problem, we don't support this" - and I fully understand their stance on that, because it is far from straightforward, and only serves a limited audience.

        • account42 10 hours ago

          There is however also the long tail of open source software that isn't packaged for your favorite distribution.

          • breezykoi 10 hours ago

            That is very true. But because it is open source, one can request packaging, contribute a package, use a third-party repository, or build it from source when needed.

    • seba_dos1 12 hours ago

      Yeah, in my 20 years of using and developing on GNU/Linux the only binary compatibility issues I experienced that I can think of now were related to either Adobe Flash, Adobe Reader or games.

      Adobe stuff is of the kind that you'd prefer to not exist at all rather than have it fixed (and today you can largely pretend that it never existed), and the situation for games has been pretty much fixed by Steam runtimes.

      It's fine that some people care about it and some solutions are really clever, but it just doesn't seem to be an actual issue you stumble on in practice much.

      • paddim8 5 hours ago

        Probably because your distro purposefully keeps software out of date because it is too fragile otherwise. I don't think that is reasonable at all for desktop use.

      • whizzter 10 hours ago

        The solution to games is to load Windows games instead of Linux binaries.

        Basically the way for the year of the Linux desktop is to become Windows.

        • seba_dos1 10 hours ago

          These days Linux binaries usually work fine, even older ones, and when they don't the reason is that they often don't get the same attention as their Windows counterparts.

  • einpoklum 13 hours ago

    This seems interesting even regardless of go. Is it realistic to create an executable which would work on very different kinds of Linux distros? e.g. 32-bit and 64-bit? Or maybe some general framework/library for building an arbitrary program at least for "any libc"?

    • quesomaster9000 13 hours ago

      Cosmopolitan goes one further: [binaries] that run natively on Linux + Mac + Windows + FreeBSD + OpenBSD + NetBSD + BIOS on AMD64 and ARM64

      https://justine.lol/cosmopolitan/

      • oguz-ismail2 12 hours ago

        >Linux

        if you configure binfmt_misc

        >Windows

        if you disable Windows Defender

        >OpenBSD

        only older versions

        • account42 12 hours ago

          Yeah while APE is a technically impressive trick, these issues far outweigh the minor convenience of having a single binary.

          For most cases, a single Windows exe that targets the oldest version you want to support plus a single Glibc binary that dynamically links against the oldest version you want to support and so on is still the best option.

        • yjftsjthsd-h 6 hours ago

          >> Linux

          > if you configure binfmt_misc

          I don't think that's a requirement, it'll just fall back to the shell script bootstrap without it.

          • oguz-ismail2 6 hours ago

            On some distros, yes. On others it'll fire up Wine for whatever reason

            • yjftsjthsd-h 5 hours ago

              Okay, yes, if you configure binfmt_misc for WINE and not APE then PE-compatible binaries will get run with WINE and not APE. That feels unfair.

              • oguz-ismail2 4 hours ago

                >if you configure binfmt_misc for WINE

                It came preconfigured on Ubuntu 20.04 and 22.04, don't know about newer versions.

      • dontdoxxme 12 hours ago

        Clearly a joke if it uses the .lol tld.

        • account42 12 hours ago

          It's his personal website lol.

          • hyperbolablabla 11 hours ago

            Justine identifies as a woman.

            • hofrogs 11 hours ago

              "identifies as" is an unnecessarily dismissive choice of words. She is a woman.

              • hyperbolablabla 3 hours ago

                My statement was a fact, and in my opinion not politically loaded, yet respectful to Justine. I chose my words carefully.

    • sambuccid 12 hours ago

      AppImage exists; it packs Linux applications into a single executable file that you just download and open. It works on most Linux distros.

      • greyw 12 hours ago

        I vaguely remember that AppImage-based programs would fail for me because of FUSE and glibc symbol version incompatibilities.

        Gave up on them afterwards. If I need to tweak dependencies, I might as well deal with the package manager of my distro.

    • iberator 12 hours ago

      Yup. Just compile it as a static executable. Static binaries are very undervalued IMO.

      • account42 12 hours ago

        As TFA points out at the beginning, it's not so simple if you want to use the GPU.

      • flohofwoe 12 hours ago

        The "just" is doing a lot of heavylifting here (as detailed in the article), especially for anything that's not a trivial cmdline tool.

        • Xraider72 11 hours ago

          In my experience it seems to be an issue caused by optimizations in legacy code that relied on dlopen to implement a plugin system, or help with startup, since you could lazy load said plugins on demand and start faster.

          If you forgo the requirement of a runtime plugin system, is there anything realistically preventing greenfield projects from just being fully statically linked, assuming their dependencies don't rely on dlopen?

          • flohofwoe 11 hours ago

            It becomes tricky when you need to use system DLLs like X11 or GL/Vulkan (so you need to use the 'hacks' described in the article to work around that) - the problem is that those system DLLs then bring a dynamically linked glibc into the process, so suddenly you have two C stdlibs running side by side and the question is whether this works just fine or causes subtle breakage under the hood (e.g. the reason why MUSL doesn't implement dlopen).

            E.g. in my experience: command line tools are fine to link statically with MUSL, but as soon as you need a window and 3D rendering it's not worth the hassle.

            • account42 10 hours ago

              X11 actually has a stable wire protocol so you don't strictly need any dynamic libraries for that - it's just that no one bothers because if you want X11 then you most likely also want GPU access where you do need to load hardware-specific libraries.

        • qznc 12 hours ago

          Ack. I went down that rabbit hole to "just" build a static Python: https://beza1e1.tuxen.de/python_bazel.html

      • pjmlp 12 hours ago

        We had a time when static binaries were pretty much the only thing we had available.

        Here is an idea: let's go back to pure UNIX distros using static binaries, with OS IPC for any kind of application dynamism. I bet it will work out great; after all, it did for several years.

        Got to put that RAM to use.

        • flohofwoe 10 hours ago

          The thing with static linking is that it enables aggressive dead code elimination (e.g. DLLs are a hard optimization barrier).

          Even with multiple processes sharing the same DLL, I would be surprised if the alternative of those processes only containing the code they actually need would increase RAM usage dramatically, especially since most processes that run in the background on a typical Linux system wouldn't even need to go through glibc but could talk directly to the syscall interface.

          DLLs are fine as operating system interface as long as they are stable (e.g. Windows does it right, glibc doesn't). But apart from operating system interfaces and plugins, overusing dynamic linking just doesn't make a lot of sense (like on most Linux systems with their package managers).

          • pjmlp 10 hours ago

            At the same time it prevents extending applications; the alternative is multiple processes using OS IPC, all of them much slower and heavier on resources than an indirect call into a dynamic library.

            We started there in computing history and, outside Linux where this desire to go back to the past prevails, moved on to better ways, including on other UNIX systems.

        • jacquesm 11 hours ago

          I've been statically linking my executables for years. The downside, that you might end up with an outdated library, is no match for the upside: just take the binary and run it. As long as you're the only user of the system and the code is your own, you're going to be just fine.

        • account42 10 hours ago

          I don't think dynamic libraries fail at "utilizing" any available RAM.

          • pjmlp 10 hours ago

            Think of any program that uses dynamic libraries as extension mechanism, and now replace it with standard UNIX processes, each using any form of UNIX IPC to talk with the host process instead.

            • account42 9 hours ago

              In theory there might be a different RAM usage with the two approaches. In practice there is not.

              • pjmlp 8 hours ago

                And your measurements are available where?

  • netbioserror 9 hours ago

    I've been statically linking Nim binaries with musl. It's fantastic. Relatively easy to set up (just a few compiler flags and the musl toolchain), and I get an optimized binary that is indistinguishable from any other static C Linux binary. It runs on any machine we throw it at. For a newer-generation systems language, that is a massive selling point.

    • cb321 2 hours ago

      Yeah. I've been doing this for almost 10 years now. It's not APE/cosmopolitan (which also "kinda works" with Nim but has many lowest common denominator platform support issues, e.g. posix_fallocate). However, it does let you have very cross-Linux portable binaries. Maybe beyond Linux.

      Some might appreciate a concrete instance of this advice inline here. For `foo.nim`, you can just add a `foo.nim.cfg`:

          @if gcc:
            gcc.exe       = "musl-gcc"
            gcc.linkerexe = "musl-gcc"
            passL         = "-static -s"
          @end
      
      There is also a "NimScript" syntax you could use a `foo.nims`:

          if defined gcc:  # nim.cfg runs faster than NimScript
            switch "gcc.exe"      , "musl-gcc"
            switch "gcc.linkerexe", "musl-gcc"
            switch "passL"        , "-static -s"
    • zoobab 8 hours ago

      I have an idea for a static Linux distribution based on musl, with either an Alpine rebuild or Gentoo-musl:

      http://stalinux.wikidot.com

      The documentation for making a static binary with glibc is sparse for a reason: they don't like static binaries.

  • weebull 10 hours ago

    If you're using dlopen(), you're just reimplementing the dynamic linker.

    • 112233 9 hours ago

      that's cute, but dismissive, sort of like "if you use popen(), you are reimplementing bash". There is so much hair in ld nobody wants to know about — parsing elf, ctors/dtors, ...