All of this, every last bit of complexity and breakage and sweat, is downstream of this:
> Journal logs are not stored in plain text. They use a binary format
And it was entirely predictable and predicted that this sort of problem would be the result when that choice was made.
> We did not want to spend time maintaining a backward compatible parser or doing code archaeology. So this option was discarded.
Considering all of the effort and hoop-jumping involved in the route that was chosen, perhaps this decision might be worth revisiting.
In hindsight, maintaining a parser might prove easier than both the problems that have already been overcome and the future ones that will arise if/when the systemd libraries change their C API.
One benefit of a freestanding parser is that it could be made into a reusable library that others can use and help maintain.
That's what I was thinking too. A native Go library is ten times more useful in the Go ecosystem than a C library linked into a Go executable.
Also, in the age of AI it seems feasible to have a model do the rewrite for you, and then iterate on the result.
I ran into this issue when porting term.everything[0] from TypeScript to Go. I had some C library dependencies that I genuinely needed to link, so I had to use cgo. My solution was to do the build process on Alpine Linux[1] and use static linking[2]. That way it statically links musl libc, which is much friendlier to static linking than glibc. Now I have a static binary that runs on Alpine, on Debian, and even in bare containers.
Since I made the change, I have not had anyone open an issue saying they had problems running it on their machine. (Unlike when I was using AppImages, which caused much more trouble than I expected.)
[0] https://github.com/mmulet/term.everything (look at distribute.sh and the makefile to see how I did it)
[1] in a podman or docker container
[2] -ldflags '-extldflags "-static"'
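A minimal sketch of that setup, assuming the stock golang:alpine image (the image tag and program name are placeholders, not the project's exact script):

    docker run --rm -v "$PWD":/src -w /src golang:alpine sh -c '
        apk add --no-cache build-base &&
        CGO_ENABLED=1 go build -ldflags "-extldflags \"-static\"" -o myprog
    '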
That is a nice approach. I'll have to give that a try with rclone. I tried lots of things in the past but not using Alpine which is a great idea
Another alternative is
https://github.com/ebitengine/purego
You can use this to dynamically load shared objects / DLLs, so in the OP's example they could disable systemd support if the systemd shared object did not load.
This technique is used in the cgofuse library ( https://github.com/winfsp/cgofuse ) that rclone uses, which means rclone can run even if you don't have libfuse/winfsp installed. However, the rclone mount subcommand won't work.
The purego lib generalizes this idea. I haven't got round to trying this yet but it looks very promising.
I am using purego indirectly in two pet projects of mine. While it has its own issues, it definitely solves the problem of cross-compilation.
In this particular case they may need to write a wrapper to abstract over differences in the systemd C API if it is not stable, but at least they can still compile a binary from macOS for Linux without issues.
The other option, as others said, is to use journalctl and just parse the JSON output. Very likely that would be far more stable, but I am not sure whether it is performant enough.
IMO this is the best approach, but it is worth noting that musl libc is not without its caveats. I'd say for most people it is best to tread carefully and make sure that differences between musl libc and glibc don't cause additional problems for the libraries you are linking to.
There is a decent list of known functional differences on the musl libc wiki:
https://wiki.musl-libc.org/functional-differences-from-glibc...
Overall, though, the vast majority of software works perfectly or near perfectly on musl libc, and that makes this a very compelling option indeed, especially since statically linking glibc is not supported and basically does not work. (And obviously, if you're already using library packages that are packaged for Alpine Linux in the first place, they will likely already have been tested on musl libc, and possibly even patched for better compatibility.)
I use `-ldflags '-extldflags "-static"'` as well.
From the .go file, you just do `// #cgo LDFLAGS: -L. -lfoo`.
You definitely do not need Alpine Linux for this. I have done this on Arch Linux. I believe I did not even need musl libc for this, but I potentially could have used it.
I did not think I was doing something revolutionary!
In fact, let me show you a snippet of my build script:
    # Build the Go project with the static library
    if go build -o $PROG_NAME -ldflags '-extldflags "-static"'; then
        echo "Go project built with static library linkage"
    else
        echo "Error: Failed to build the Go project with static library"
        exit 1
    fi

    # Check if the executable is statically linked
    if nm ./$PROG_NAME | grep -q "U "; then
        echo "Error: The generated executable is dynamically linked"
        exit 1
    else
        echo "Successfully built and verified static executable '$PROG_NAME'"
    fi

And like I said, the .go file in question has this:

    // #cgo LDFLAGS: -L. -lfoo

It works perfectly, and should work on any Linux distribution.

I use alpine for this [1] reason, but I will admit that this is a premature optimization. I haven't actually run into the problem myself.
——
Your code is great, I do basically the same thing (great minds think alike!). The only thing I want to add is that cgo supports pkg-config directly [2] via

    // #cgo pkg-config: $lib

so you don't have to pass in linker flags manually. It's incredibly convenient.

[1] https://stackoverflow.com/questions/57476533/why-is-statical...
[2] https://github.com/mmulet/term.everything/blob/def8c93a3db25...
> do the build process on alpine linux and […] statically link musl libc
IIRC it used to be common to do builds on an old version of RHEL or CentOS and dynamically link an old version of glibc. Binaries would then work on newer systems because glibc is backwards compatible.
Does anyone still use that approach?
If you need glibc for any reason, that approach is still used. But it won't save you if no glibc is available at all. And since the folks here want to produce a musl build anyway for Alpine, the easier approach is to just go musl all the way.
Note that you don't have to compile on an Alpine system to achieve this. These instructions should work on most distros:
https://www.arp242.net/static-go.html
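Independent of that article, the usual non-Alpine recipe is to point cgo at a musl toolchain. A sketch, assuming your distro packages musl's gcc wrapper (often named musl-tools or musl-gcc):

    CC=musl-gcc CGO_ENABLED=1 go build -ldflags '-extldflags "-static"' -o myprog
    file myprog   # should report "statically linked"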
> and even bare containers.
Strange, I thought the whole point of containers was to solve this problem.
Depends how much you care about the size and security footprint of your container images.
What troubles did you have with AppImages?
List of troubles:
[1]https://github.com/mmulet/term.everything/issues/28
[2]https://github.com/mmulet/term.everything/issues/18 (although this issue later gets sidetracked to a build issue)
[3]https://github.com/mmulet/term.everything/issues/14
[4]https://github.com/mmulet/term.everything/issues/7
Huh. Does term.everything just work, or are there some gotchas? This seems like it could be supremely useful!
It works so far! No major gotchas that I know of yet. From the perspective of the apps, they are just talking to a normal Wayland compositor, so everything works as expected. Just try it for your workflow, and if you run into any problems just open an issue and I’ll fix it.
I didn't see an explanation in the README that part of what the first GIF[1] shows is an effect created with video editing software, not a screen capture of the program actually running. "Screen images simulated" is how the fine-print disclaimers at the bottom of the screen usually start when similar effects appear in commercials. I think it would make sense to adopt a similar explanation for the effect used in the GIF.
1. <https://github.com/mmulet/term.everything/blob/main/resource...>
Why would an open source project need to have any disclaimer? They are not selling anything.
Because lying is wrong even when open source projects do it.
I think it is a big stretch calling this visual effect lying.
I don't know if it is a cultural American thing or just a difference in interpretation, but I had no difficulty understanding that this was a visual effect. In my country, ads don't come with disclaimers. Do you feel these disclaimers are truly helpful?
I don't feel that the person I responded to is lying or being intentionally deceptive.
> “in commercials where such effects appear”
Good thing this isn’t a commercial then.
So you can't pull in C libraries built for different distributions and expect this to work.
If you use pure Go, things are portable. The moment you use a C API, that portability doesn't exist. This should be apparent.
My assumption was that they were using a C API just from reading the headline. I don't use Go but these sorts of problems are common to any project doing that in just about any language.
Was there not a third option: calling the journalctl CLI as a child process and consuming the parsed logs from its standard output? This would have avoided both the requirement to use CGO and the need to write a custom parser. But I guess I am missing something.
Yeah looks like they missed the forest for the trees.
I see this kind of thing in our industry quite often; some Rube Goldberg machine being invented and kept on life support for years because of some reason like this, where someone clearly didn’t do the obvious thing and everyone now just assumes it’s the only solution and they’re married to it.
But I’m too grumpy, work me is leaking into weekend me. I had debates around crap like this all week and I now see it everywhere.
Also, there is a --json (or -o json) flag for journalctl, which will output line-based JSON log entries. And it can simply be called with a Command, as you pointed out.
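For illustration, a minimal sketch of that approach (no cgo anywhere; a missing journalctl becomes an ordinary runtime error rather than a build failure):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // -o json emits one JSON object per line; -n 10 limits output to the last ten entries
        cmd := exec.Command("journalctl", "-o", "json", "-n", "10")
        out, err := cmd.StdoutPipe()
        if err != nil {
            log.Fatal(err)
        }
        if err := cmd.Start(); err != nil {
            log.Fatal(err) // e.g. journalctl not installed
        }
        sc := bufio.NewScanner(out)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal entries can be large
        for sc.Scan() {
            var entry map[string]any
            if err := json.Unmarshal(sc.Bytes(), &entry); err != nil {
                continue // skip lines we cannot parse
            }
            fmt.Println(entry["MESSAGE"])
        }
        if err := cmd.Wait(); err != nil {
            log.Fatal(err)
        }
    }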
This was the first thought that occurred to me too when I saw this post.
It's generally less robust to run CLI tools and scrape the output. Usually it isn't intended to be machine readable, and you have to handle extra failure modes, like incompatible tool versions, missing tools, incorrect parsers, etc.
It's the lazy-but-bad solution.
journalctl is designed for these use cases and has options to solve those issues. The lazy part here is you not doing any research about this tool before dismissing it as "not best practice", which is exactly what the fuckups who wrote this article did.
journalctl with -o export produces a binary interchange format. Would you rather have bugs or API rot from that, or from an internal tool?
This has nothing to do with Go. You added a dependency which is not portable. It is well known that the systemd project only targets Linux.
Vendorise systemd and compile only the journal parts, if they are portable and can be isolated from the rest. Otherwise just shell out to journalctl.
Once you use CGO, portability is gone. Your binary is no longer statically compiled.
This can happen subtly, without you knowing it. If you use a function in the standard library that happens to call into a CGO function, you are no longer static.
This happens with things like os.UserHomeDir or some networking things like DNS lookups.
You can "force" Go to do static compiling by disabling CGO, but that means you can't use _any_ CGO, which may not work if you require it for certain things like sqlite.
You can definitely use CGO and still build statically, but you do need to set ldflags to include -static.
You can even cross-compile doing that.
Yes, indeed, I do.
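For reference, such a cross-build looks roughly like this (the cross-toolchain name is an assumption; any aarch64 musl cross-compiler should do):

    CGO_ENABLED=1 GOOS=linux GOARCH=arm64 CC=aarch64-linux-musl-gcc \
        go build -ldflags '-extldflags "-static"' -o myprog-arm64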
> Which may not work if you require it for certain things like sqlite.
There is a cgo-less sqlite implementation: https://github.com/glebarez/go-sqlite. It seems to not be maintained much, though.
You're linking to a different version. This is the one that most people use: https://github.com/modernc-org/sqlite
Yes and no, the package above is a popular `database/sql` driver for the same SQLite port you linked.
You don't need CGO for SQLite in most cases; I did a deep dive into it here.
https://til.andrew-quinn.me/posts/you-don-t-need-cgo-to-use-...
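A minimal sketch of what that looks like in practice; the transpiled port registers itself as the "sqlite" database/sql driver:

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "modernc.org/sqlite" // pure-Go transpiled SQLite, no cgo
    )

    func main() {
        db, err := sql.Open("sqlite", "demo.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)`); err != nil {
            log.Fatal(err)
        }
        var n int
        if err := db.QueryRow(`SELECT count(*) FROM kv`).Scan(&n); err != nil {
            log.Fatal(err)
        }
        fmt.Println("rows:", n)
    }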
> This happens with things like os.UserHomeDir or some networking things like DNS lookups.
The docs do not mention this CGO dependency, are you sure?
https://pkg.go.dev/os#UserHomeDir
I was surprised too, which is why I checked the docs; I assume the user was misinformed.
Perhaps I misremembered, or things have changed? For instance, using os/user results in a dynamically linked executable: https://play.golang.com/p/7QsmcjJI4H5
There are multiple standard library functions that do it; I recall some in "net" and some in "os".
os.UserHomeDir is specified to read the HOME environment variable, so it doesn’t require CGo. os/user does, but only to support NSS and LDAP, which are provided by libc. That’s also why net requires CGo- for getaddrinfo using resolv.conf
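If you want to guarantee the pure-Go paths, both packages have opt-in build tags for exactly this; a sketch:

    # force the pure-Go os/user and net resolver while keeping cgo for other packages
    go build -tags 'osusergo netgo' ./...
    # or drop cgo altogether
    CGO_ENABLED=0 go build ./...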
There are at least a couple of ways to run SQLite without CGO.
I think the standard answer here is modernc.org/sqlite.
Careful, you're responding to the author of a wasm-based alternative.
No need to be careful. I won't bite. ;)
I think this is true for nearly all compiled languages. I had the same fun with Rust, OpenSSL, and glibc. OP didn't mention the fun with glibc when compiling on a fairly recent distro and then trying to run the result on an older one. There is the manylinux project, which provides Docker images with a minimum glibc version installed, so builds are compatible with newer systems.
The switch to a newer OpenSSL version on Debian/Ubuntu created some issues for my tool. I replaced it with rustls to remove the dynamically linked library. I prefer completely statically linked binaries, though. But that is really hard to do, and damn near impossible on Apple systems.
And a set of people rediscovered why cross-compiling only works up to a certain extent, regardless of the marketing on the tin.
The moment one needs to touch APIs that only exist on the target system, the fun starts, regardless of the programming language.
Go, Zig, whatever.
You're thinking of cross platform codebases. There's nothing about cross compilation that stops the toolchain from knowing what APIs are present & not present on a target system.
Cross-compilation and cross-platform are synonymous in compiled languages with regard to many of the issues one needs to care about.
Cross-platform goes beyond that with regard to UI, directory locations, user interactions, ...
Yeah, if you happen to have systemd Linux libraries on macOS to facilitate cross compilation for a compatible GNU/Linux system, then it works; that is how embedded development has worked for ages.
What doesn't work is pretending that isn't something to care about.
> Cross compilation and cross platform are synonymous in compiled languages
Err, no. Cross-platform means the code can be compiled natively on each platform. Cross-compilation is when you compile the binaries on one platform for a different platform.
Not at all. Cross-platform means executing the same application on many platforms, regardless of the hardware and OS-specific features of each platform.
Cross-compilation is useless if you don't actually get to execute the created binaries on the target platform.
Now, how do you intend to compile from GNU/Linux for z/OS, such that the binary generated by the C compiler, ingesting code written on the GNU/Linux platform, can execute in the z/OS Language Environment inside an enclave not configured in POSIX mode?
Instead of z/OS, if you're feeling more modern, make it a UWP sandboxed application with identity on Windows.
Hashicorp's Vault Go binary is a whopping 512 MB beast. I recently considered using its agent mode to grab secrets for applications in containers, but the size of the layer it adds is unviably big. And they don't seem interested in making a split server/client binary either...
I think the title is a bit misleading. This is about very low-level metrics collection from the system, which by definition is very system dependent. The term "portable" in a programming language usually means portability for applications, but this is more about portability of system utilities.
Expecting a portable house and a portable speaker to have the same definition of "portable" is unfair.
You hit this real quick when trying to build container images from scratch. Theoretically you can drop a Go binary into a blank rootfs and it will run. This works most of the time, but anything that depends on Go's Postgres client requires libpq, which requires libc. Cue EFILE runtime errors after running the container.
> anything that depends on Go's Postgres client requires libpq which requires libc
Try https://github.com/lib/pq
I've also seen https://github.com/jackc/pgx used in many projects
> For users that require new features or reliable resolution of reported bugs, we recommend using pgx which is under active development.
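Either way you end up with a pure-Go driver that is happy in a scratch image. A minimal sketch using pgx through database/sql (the DSN is a placeholder):

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" database/sql driver
    )

    func main() {
        // placeholder DSN; no libpq, and therefore no libc, needed at runtime
        db, err := sql.Open("pgx", "postgres://user:pass@localhost:5432/app")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        var now string
        if err := db.QueryRow("SELECT now()").Scan(&now); err != nil {
            log.Fatal(err)
        }
        fmt.Println("server time:", now)
    }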
If you really need a portable binary that uses shared libraries, I would recommend building it with Nix; you get all the dependencies, including the dynamic linker and glibc.
This article reminds me of the days before LLMs ruled the world, when the word "agent" was most commonly used in the DevOps area, representing the program that ran on a remote machine to execute dispatched jobs or send metrics. Now I wonder how many developers would look at "agent" and think of this meaning.
Use dlopen? I haven’t tried this in Go, but if you want a binary that optionally includes features from an external library, you want to use dlopen to load it.
It only works in a dynamically-linked binary, because the dynamic linker needs to be loaded.
more like C is portable, until it isn't
FWIW, I maintain an official implementation of the journal wire format in Go now:
https://github.com/systemd/slog-journal
So you can at least log to the journal without CGO. But that's just the journal wire format, which is a lot simpler than the disk format.
I think a journal disk format parser in Go would be a neat addition.
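For the curious, the wire ("native") protocol really is just KEY=VALUE lines sent over a unix datagram socket; a bare-bones sketch, omitting the length-prefixed encoding needed for binary or multi-line values:

    package main

    import (
        "log"
        "net"
    )

    func main() {
        // systemd's native journal socket; one datagram per entry
        conn, err := net.Dial("unixgram", "/run/systemd/journal/socket")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // each field is a KEY=VALUE line; PRIORITY=6 is "info"
        msg := "MESSAGE=hello from pure Go\nPRIORITY=6\nSYSLOG_IDENTIFIER=demo\n"
        if _, err := conn.Write([]byte(msg)); err != nil {
            log.Fatal(err)
        }
    }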
I've got a pure Go journald file writer that works to some extent—it doesn't split, compress, etc, but it produces journal files that journalctl/sdjournal can read, concurrently. Only stress tested by running a bunch of parallel integration tests, will most likely not maintain it seriously, total newbie garbage, etc, but may be of interest to someone. I haven't really seen any other working journald file writers.
https://github.com/lessrest/swash/tree/main/pkg/journalfile
Interesting that it uses the C API to collect journals. I would've thought to just invoke the journalctl CLI. On platforms like macOS where the CLI doesn't exist, it's an error when you exec, not a build-time error.
That's also what gopsutil does, IIRC: it tries to look up process information with kernel APIs but can fall back to invoking /usr/bin/ps (which is setuid root on most systems) at the cost of being much less performant.
That's really not such a weird choice. The systemd library is pervasive and compatible.
The weird bit is the analysis[1], which complains that a Go binary doesn't run on Alpine Linux, a system which is explicitly and intentionally (also IMHO ridiculously, but that's editorializing) binary-incompatible with the stable Linux C ABI as it's existed for almost three decades now. It's really no more "Linux" than is Android, for the same reason, and you don't complain that your Go binaries don't run there.
[1] I'll just skip without explanation how weird it was to see the author complain that the build breaks because they can't get systemd log output on... a Mac.
The macOS bit wasn't about trying to get systemd logs on a Mac. The issue was that the build itself fails because libsystemd-dev isn't available. We (naively) expected journal support to be something that we could detect and handle at runtime.
Well... yeah. It's a Linux API for a Linux feature only available on Linux systems. If you use a platform-specific API on a multiplatform project, the portability work falls on you. Do you expect to be able to run your Swift UI on Windows? Same thing!
I did this a while ago but it only reads journal files sequentially and I didn't implement the needed stuff to use the indexes.
https://github.com/appgate/journaldreader
Cross-compiling doesn't work because you're not defining your dependencies correctly, relying instead on the existence of things like system libraries and libc. Use `zig cc` with Go, which will let you compile against a stub glibc, or go all the way and use a hermetic build system (you should do this always anyhow).
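For example, a cgo cross-build with zig as the C toolchain might look like this (a sketch; the glibc version suffix in the target triple is a zig cc feature, and 2.28 is just an example floor):

    CGO_ENABLED=1 GOOS=linux GOARCH=amd64 \
        CC="zig cc -target x86_64-linux-gnu.2.28" \
        go build -o myprog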
The portability story for Go is awful. I've blogged about this before: https://blog.habets.se/2022/02/Go-programs-are-not-portable....
It's yet another example of the Go authors implementing the least-effort solution without even a slight thought to what it would mean down the line, creating a huge liability/debt in the language forever.
I’ve had some success using Zig for cross compiling when CGO is required.
There are still some bugs when interacting with gold and cross-compiling to linux/arm64, but they're fixable with some workarounds...
That's Uber's approach, right?
Is Uber using Zig for other things by now?
Go was never truly portable on Linux, unfortunately, due to its dependency on libc for DNS and user name resolution (because of PAM and other C-only APIs). Sure, pure Go implementations exist, but they don't cover all cases, so in order to build a "good" binary for Linux you still needed to build it on the oldest supported Linux distro.
If your production environment doesn't have any weird PAM or DNS setup, then you can indeed just cross-compile everything and it works.
This seems to imply that Go's binaries are otherwise compatible with multiple platforms like amd64 and arm64, other than the issue with linking dynamic libraries.
I suspect that's not true either even if it might be technically possible to achieve it through some trickery (and why not risc-v, and other architectures too?).
Of course you still need one binary per CPU architecture. But when you rely on a dynamic link, you need to build from the same architecture as the target system. At that point cross-compiling stops being reliable.
I am complaining about the language (phrasing) used: a Python, TypeScript or Java program might be truly portable across architectures too.
Since architectures are only brought up in relation to dynamic libraries, the phrasing implies it is otherwise as portable as the above languages.
With that out of the way, it seems like a small thing for the Go build system if it's already doing cross-compilation (and thus has an understanding of foreign architectures and executable formats). I am guessing it just hasn't been done and is not a big lift, so perhaps look into it yourself?
they're only portable if you don't count the architecture specific runtime that you need to somehow obtain...
Go doesn't require dynamic linking for C; if you can figure out the right C compiler flags, you can cross-compile statically linked Go+C binaries as well.
Is it some tooling issue? Why is it an issue to cross-compile programs with dynamic linking?
It's a tooling issue. No one has done the work to make things work as smoothly as they could.
Traditionally, cross-compilers generally didn't work the way the Zig and Go toolchains approach it; achieving cross-compilation could be expected to be a much more trying process. The Zig folks and the Go folks broke with tradition by choosing to architect their compilers more sensibly for the 21st century, but the effects of the older convention remain.
In general, cross compilers can do dynamic linking.
In my experience, the cross-compiler will refuse to link against shared libraries that "don't exist", which they usually don't in a cross-compiler setup (e.g. cross-compiling an aarch64 application that uses SDL on a ppc64le host that only has ppc64le SDL libraries).
The usual workaround, I think, is to use dlopen/dlsym from within the program. This is how the Nim language handles libraries in the general case: at compile time, C imports are converted into a block of dlopen/dl* calls, with compiler options for indicating some (or all) libraries should be passed to the linker instead, either for static or dynamic linking.
Alternatively I think you could "trick" the linker with a stub library just containing the symbol names it wants, but never tried that.
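Untested as well, but mechanically it would look something like this: a do-nothing shared object exporting just the symbol names the link step wants (the SDL names are illustrative; signatures don't have to match, since only names are resolved at link time):

    # fake just enough of SDL to satisfy the cross-linker; the real
    # libSDL2.so is resolved by the dynamic linker on the target
    echo 'void SDL_Init(void) {} void SDL_Quit(void) {}' > stub.c
    aarch64-linux-gnu-gcc -shared -fPIC -o libSDL2.so stub.c
    # may also need -Wl,-soname,<real soname> so DT_NEEDED matches the target's lib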
You just need a compiler & linker that understand the target + image format, and a sysroot for the target. I've cross-compiled from Linux x86 clang/lld to macOS arm64; all it took was the target SDK & a couple of env vars.
Clang knows C, lld knows macho, and the SDK knows the target libraries.
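From memory, the shape of the invocation is roughly this (the SDK path variable, target version, and exact flags are assumptions):

    # cross-linking from Linux to macOS arm64; lld's Mach-O port handles the link
    clang --target=arm64-apple-macos12 -fuse-ld=lld \
        -isysroot "$MACOS_SDK" -o hello hello.c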
I happily and reliably cross-build Go code that uses CGO, generating static arm64 binaries on amd64.
For a single binary that will actually run across both architectures, see <https://cosmo.zip/>.
Original discussion: <https://news.ycombinator.com/item?id=24256883>.
This is an (organizational) tooling problem, not a language problem - and is no less complicated when musl libc enters the discussion.
The conclusion of the article says that it's not a language problem either, under the heading "So, is Go the problem?" Or do you mean something else here?
Given that the title implies the opposite, I think it's a fair criticism. Pointing out clickbait might be tedious, but not more so than clickbait itself.
Systemd. Binary logs are wonderful, aren't they?
It's not that hard to read them without linking their library. The format is explained in their documentation.
https://github.com/appgate/journaldreader
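The format is indeed approachable: per the documented layout, every journal file starts with a fixed 8-byte signature, so even sniffing one is trivial. A sketch (the path is illustrative; real files live under /var/log/journal/&lt;machine-id&gt;/):

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "log"
        "os"
    )

    func main() {
        f, err := os.Open("system.journal") // illustrative path
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        sig := make([]byte, 8)
        if _, err := io.ReadFull(f, sig); err != nil {
            log.Fatal(err)
        }
        // journal files begin with the magic bytes "LPKSHHRH"
        fmt.Println("journal file:", bytes.Equal(sig, []byte("LPKSHHRH")))
    }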
There's no such thing as a portable application; only programs limited enough to be lucky not to conflict with the vagaries of different systems.
That said, in my personal experience, the most portable programs tend to be written in either Perl or Shell. The former has a crap-ton of portability documentation and design influence, and the latter is designed to work on everything from 40-year-old machines to today's. You can learn a lot by studying old things.
Well, it was pretty obvious that portability would be gone, especially when you start linking against systemd; even on the host system you have to link against systemd's shared libs, you cannot link statically.
This stuff is out of my frame of reference. I've never used Go before and have never had the need to go this low-level (C APIs, etc.), so please keep this in mind with my following questions, which are likely to sound stupid or ignorant.
Can this binary not include compiled dependencies alongside it? I'm thinking of how, on Windows, portable apps include the DLLs and other dependent EXEs in subfolders.
Out of interest, and in relation to a less well-liked Google technology, could Dart produce what they are after? My understanding is Dart can produce static binaries, though I'm not sure if these are truly portable in the compile-once, run-everywhere sense.
I wonder, for their use case, why not just submit the journal in binary format to the server and let the server do the parsing?
It's crucial to be able to do some processing locally, to filter out sensitive/noisy logging sources.
so like every other language
Go is portable until you have to deploy on AS/400
Well now you've gone and linked to a fascinating tool which I'm going to have to dive into and learn: https://kaitai.io/
Thanks.
Cgo is terrible, but if you just want some simple C calls from a library, you can use https://github.com/ebitengine/purego to generate the bindings.
It is a bit cursed, but works pretty well. I'm using it in my hardware-backed KMIP server to interface with PKCS11.
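For anyone who hasn't seen the purego pattern, a Linux-flavored sketch (libc is used here only because everyone has it; swap in whatever library you actually care about):

    package main

    import (
        "fmt"
        "log"

        "github.com/ebitengine/purego"
    )

    func main() {
        // load the library at runtime; if this were libsystemd, a failed Dlopen
        // could simply disable journal support instead of breaking the build
        libc, err := purego.Dlopen("libc.so.6", purego.RTLD_NOW|purego.RTLD_GLOBAL)
        if err != nil {
            log.Fatal(err)
        }

        var getpid func() int32
        purego.RegisterLibFunc(&getpid, libc, "getpid")
        fmt.Println("pid:", getpid())
    }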
From the article:
> In the observability world, if you're building an agent for metrics and logs, you're probably writing it in Go.
I'm pretty unconvinced that this is the case unless you happen to be on the CNCF train. Personally I'd write in Rust these days, C used to be very common too.