> When the new configuration finally builds, more often than not some component randomly stops working after reboot
Good thing you're using NixOS where rolling back to a working version is as simple as a reboot, instead of Arch Linux where your options are:
1. If it's a boot issue, dig out a live USB for recovery
2. If it's a package, try rolling back piecemeal by installing old versions from /var/cache/pacman/pkg
3. Once rolling back piecemeal breaks your system further because of mixed package versions, pin your packages to a specific day in the Arch Rollback Machine and pray your whole system downgrades cleanly
I used to live that life. After a couple of failed downgrades I started exclusively using the Arch Rollback Machine on my work machine so nothing would update and break before I had a chance to test the updates on my personal machine.
> Huge update sizes
Yeah, the store-optimise option (replacing duplicate files with hardlinks) really should be a default, and automatic Nix garbage collection should probably be included in the initial automatically-generated config so new users are aware of it. Also, `nix-collect-garbage -d` behaving differently from `sudo nix-collect-garbage -d` despite my user being in `trusted-users` was quite the surprise.
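For reference, both behaviors can be enabled declaratively; a minimal sketch using the real NixOS options (the schedule and retention values here are just examples):

```nix
# Deduplicate the store with hardlinks and garbage-collect on a schedule.
nix.settings.auto-optimise-store = true;
nix.gc = {
  automatic = true;
  dates = "weekly";                     # systemd calendar expression
  options = "--delete-older-than 30d";  # keep a month of generations
};
```

The `nix.gc` timer runs as root, which covers the system profile; user profiles still need their own cleanup.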
I used to be able to brush this off by "storage is cheap" but AI has driven up the cost of SSDs.
> In contrast, Arch Linux simply downloads prebuilt binaries via pacman or an AUR helper
That is also the case on Nix. If you are compiling software yourself, then you must have changed some settings that cause it to build from source, which isn't even an option on Arch Linux. Nix having the completely optional ability to customize your packages is a positive, not a flaw.
> why not use Gentoo Linux instead
On Nix my entire system is defined in my Nix config, including all my settings, all my scripts, and all my packages. I use impermanence to wipe my disk on every boot except for a handful of folders that I tell it to preserve across boots. The combination of those two creates a deterministic system. Nix is so much more than just being able to compile your software.
>Good thing you're using NixOS where rolling back to a working version is as simple as a reboot, instead of Arch
I feel like Nix users always leave out that you can simply use a filesystem with snapshots, like btrfs or ZFS, and gain the same resilience against breakage, and not just from update issues.
So I do run ZFS with automatic snapshots through zrepl, but I think a shortcoming of relying on that on Arch Linux is your user data. If the issue is immediately apparent after an update, then you absolutely can just roll back and be fine. But if it takes you a week to discover something isn't working, then rolling back isn't as straightforward, because you're potentially losing any modifications to your files made in the past week. You could partially mitigate this by storing your $HOME on a separate dataset, but not everything ends up in $HOME (for example, everything related to Docker, your Secure Boot files, your Bluetooth pairings, your Wi-Fi networks, LVFS firmware state, journald, and your systemd timer statefiles).
On NixOS my user files are completely separate from my programs/system files. I can roll one back without impacting the other. This is further guaranteed by impermanence, which wipes my disk on every boot except for the folders I specify should be preserved, all of which end up storing their data under a separate ZFS dataset through bind mounts. So programs/system files are read-only under /nix and my user data is under /persist and bind-mounted to wherever I need it. So I am 100% certain that none of my user data is commingled in the same dataset as my programs/system files.
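For readers unfamiliar with this setup, a minimal sketch of how the impermanence module expresses it (the paths here are illustrative, not the commenter's actual list):

```nix
# Everything not listed under /persist is lost on reboot.
environment.persistence."/persist" = {
  directories = [
    "/var/lib/bluetooth"                      # Bluetooth pairings
    "/etc/NetworkManager/system-connections"  # Wi-Fi networks
    "/var/lib/docker"
  ];
  files = [ "/etc/machine-id" ];
};
```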
I think this is a mostly fair criticism of NixOS. NixOS has a lot of powerful tools, but if you don't need them, they can get in the way. Some assorted notes:
> the constant cycle of rebuild → fix → rebuild → fix → rebuild
I've found this useful to eliminate the rebuild loop:
https://kokada.dev/blog/quick-bits-realise-nix-symlinks/
It lets you make the config of the program you choose a regular mutable file instead of a symlink so you can quickly iterate and test changes.
> In contrast, Arch Linux simply downloads prebuilt binaries via pacman or an AUR helper
If a binary exists. A lot of AUR packages I used to rely on didn't have a binary package (or the binary package was out of date) and would have to build from source. On NixOS my machines are set up to use distributed builds (https://wiki.nixos.org/wiki/Distributed_build). Packages that do need to be built from source get built on my server downstairs. The server also runs a reverse-proxy cache, so I only need to download packages once per version.
Distributed AUR builds are possible on Arch, but they require a lot of setup and are still fragile like regular AUR builds; your only choice of dependencies is what's currently available in the repos.
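For context, the NixOS side of the distributed-build setup described above is fairly compact; a hedged sketch (the hostname, user, and key path are placeholders, not the commenter's actual config):

```nix
# Offload from-source builds to a machine on the LAN.
nix.distributedBuilds = true;
nix.buildMachines = [{
  hostName = "builder.lan";         # placeholder
  sshUser = "nixremote";            # placeholder
  sshKey = "/root/.ssh/id_builder"; # placeholder
  system = "x86_64-linux";
  maxJobs = 8;
  supportedFeatures = [ "big-parallel" ];
}];
```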
> On my machine, regular maintenance updates without proper caching easily take 4–5+ hours
It sounds like the author may be running the unstable release channel and/or using some heavy unstable packages, which might explain a lot of the author's other problems too.
Back when I used Arch, I found that as time went on, my system would sort of accumulate packages. I would install $package; then in the next version of $package a dependency would be added on $dep. When I updated, $dep would be installed; eventually $package would drop the dependency on $dep, but $dep would remain installed. I would periodically have to run `pacman -R $(pacman -Qtqd | tr '\n' ' ')` to clear out packages that were no longer required.
When using your server as a proxy cache, do you just include the server as a Nix cache substituter, or do you use a MITM approach with something like Squid?
If the former, via substituter (or if also using a remote builder), how do you manage portable clients when they leave your LAN, e.g. traveling with your laptop? Do you tunnel back home, or have a toggle to change substituter priorities?
I find the default timeout for unresponsive substituters excessively long, and the repeated retries for each requested derivation annoying; I'd rather Nix remembered unresponsive substituters and skipped them for subsequent derivations within the same switch/build invocation.
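On the timeout point, some of this is tunable today; a sketch assuming a LAN cache at a placeholder address:

```nix
nix.settings = {
  substituters = [
    "http://cache.lan"          # placeholder LAN cache, tried first
    "https://cache.nixos.org"
  ];
  connect-timeout = 5;  # seconds before giving up on a substituter
  fallback = true;      # build locally if substitution fails
};
```

This shortens the initial stall, though it doesn't address the per-derivation retry behavior the comment describes.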
I've been using NixOS for the last year and have loved it. Rebuilding doesn't fail for me nearly as often as it does for the author. Running nix garbage collection frequently isn't a big deal and can easily be automated. The network usage is a fair point. But IMO a small price to pay for keeping all of my devices perfectly in sync and running into weird "works on my machine" issues way less frequently.
My rebuilds fail all the time, but that's self-inflicted. I build my whole system from source with the Nix binary cache disabled and optimizations for my processor enabled, so it seems like every update brings multiple failures, ranging from transient issues like GNU's Savannah being down to persistent issues like software suddenly not compiling with `-march=znver4`.
Reading the blog, they complain about compile times, which makes me wonder if their issues are similarly self-inflicted. The average user shouldn't need to compile software, thanks to the Nix binary cache, so they must have been either modifying packages or enabling optimizations.
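For reference, one common way this kind of whole-system optimization is expressed (a sketch, not necessarily this commenter's exact setup; it triggers a full from-source rebuild):

```nix
nixpkgs.hostPlatform = {
  system = "x86_64-linux";
  gcc.arch = "znver4";  # compile everything with -march=znver4
  gcc.tune = "znver4";
};
```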
The sad thing about this read is that none of the criticisms are necessarily leveled at Nix as a system; they are purely about how the ecosystem is managed.
It's possible I have no idea what I'm talking about, but my understanding is that NixOS relies on fetching things from third-party URLs which may simply die. I feel a bit misled by the promises of NixOS, because I cannot actually take the configuration files in 10 years and set up the system again, due to link rot.
I was also under the impression that I could install DEs side by side on NixOS without things like one DE conflicting with files from another, but this apparently isn't true either: I installed KDE, then installed Sway, and Sway overwrote the notification theming for KDE.
NixOS is very impressive but the marketing around it feels misleading. The reproducible claim needs a giant asterisk due to link rot.
Every system and package manager will be affected if it cannot download source code to build a package.
NixOS less so, because pretty much all source downloads that are not restricted by license are a separate output that will therefore be stored on (and downloadable from) NixOS cache servers.
I'm not sure what your expectation for this is in general, nobody can just wish into existence data that is just gone.
> NixOS is very impressive but the marketing around it feels misleading. The reproducible claim needs a giant asterisk due to link rot.
It's a valid concern, though it's perhaps worth mentioning that you will be able to restore your 10-year-old config as long as the files downloaded from now-broken links are still in the Nix cache. Of course, in practice this is only useful to large organizations that have the resources to invest in bespoke infrastructure to ensure supply-chain integrity, since any `nix store gc` run will immediately wipe all downloads :(
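One partial mitigation worth knowing about: GC can be told to keep the build-time dependencies (including fetched sources) of everything still reachable from a GC root. A sketch using the real settings:

```nix
nix.settings = {
  keep-outputs = true;      # keep outputs of live derivations
  keep-derivations = true;  # keep the .drv files themselves
};
```

This costs disk space and only protects sources for configurations you still have rooted, so it mitigates rather than solves the link-rot problem.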
While I too love the promise of Nix, I never was able to handle the configuration of nix. And at this point I've given up on it.
What I wonder is whether there is a middle ground between Nix's build-everything-from-the-ground-up and the traditional get-everything-delivered. I've been excited about the Fedora Atomic collections (Silverblue etc.), as they give me a lot of the rollback and isolation of Nix with a workflow closer to regular Fedora or Arch. I'm not saying that everything is roses, but I've been able to use it more than I ever got out of Nix.
What would you say NixOS config starters are missing? I'm teetering on abandoning Apple unless they can right the UX ship, and I was hoping to build on one of those when I switch.
Is there a NixOS tool that can retroactively record what I did and translate the changes into the config file or do I have to write down each and every change manually again and again?
It’s generally best to make the change in your configuration.nix or flake.nix rather than with the imperative tools. Then you just version control that file (or files if you break it up)
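To make that concrete, a toy example of a declarative change one would commit instead of running an imperative install (the package choices are arbitrary):

```nix
# configuration.nix: declare packages instead of installing imperatively
{ pkgs, ... }: {
  environment.systemPackages = with pkgs; [ git htop ];
  services.openssh.enable = true;
}
```

After `nixos-rebuild switch`, the version-control history of this file becomes the record of what you did.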
Loved the idea of it but hated that there are multiple ways of doing things: in the main config, in flakes, and one other way I forget right now. Very beginner-hostile, because whatever problem you're trying to solve, if you google it you'll likely get the wrong flavour of answer. A bit like trying to solve a KDE issue when the results are all GNOME.
I’ll probably give it another go at some point because I could see the vision but that above really dented my enthusiasm.
I wish Arch could learn some lessons from NixOS packaging. One thing that really bothers me about Arch is how many pain points there are in the packaging tooling. Furthermore, I wish AUR packagers used utilities like namcap and chroot building to check their packages before pushing their slop onto the AUR; whenever I use new software from the AUR, I check the PKGBUILD to see how well it was made.
Compared to other Linux distributions' package tooling, Arch's is pretty nice and painless, I think.
Agreed with namcap/chroot. I think there should be even more mandatory checks on pushing stuff to the AUR. But even so, regarding your last point: you absolutely need to check all PKGBUILDs from the AUR, or you risk getting malware.
Yeah, if my Arch breaks, I just reboot into a snapshot. It's not a problem, really.
It's unfair to act like NixOS just breaks "randomly" or is inherently unreliable.
It's essentially deterministic and fully reproducible.
The issues with Bluetooth, Electron, etc. as described are essentially unrelated to NixOS itself and come down to your configuration.
Imagine a program with a compile error, nicely wrapped in a reproducible build system (like a Docker image with tools, or Nix).
It is deterministic - compilation fails at the same line every time.
It is fully reproducible - anyone can get the same compilation error.
And yet it is not reliable at all, it's completely broken.
https://bertptrs.nl/2026/01/30/how-to-review-an-aur-package.... is a nice recent article by one of the maintainers that follows up on last year's AUR malware.
The final point sums it up, though: the AUR was built without the security mechanisms - technical and social - we want and need today.
Counterpoint: I have been using Nix for several months and have had no problems.
I use LLMs a lot for the config file, which is written in Nix's custom DSL. I can't imagine how long it would have taken before LLMs.
But with an LLM I'm pretty happy with Nix.
Everything I've heard about NixOS...well, it sucks. Sorry.
The ideas behind it are interesting. But as for the reality of living with it: the benefits are basically minimal to none, and you have to learn a whole big thing that isn't applicable anywhere else.
In the container era it makes even less sense.
> the benefits are basically minimal/none
Sounds like you've never used it. I've daily-driven it for ~2 years and would never go back.
It works great with containers: you can use Nix to build extremely lean OCI images. Mercury uses it this way; the book NixOS in Production discusses it.
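For the curious, a minimal sketch of what that looks like with nixpkgs' `dockerTools` (the image name and contents are placeholders):

```nix
# Build a layered OCI image containing only the listed closure.
pkgs.dockerTools.buildLayeredImage {
  name = "hello-image";  # placeholder
  tag = "latest";
  contents = [ pkgs.hello ];
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```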
Just seems like it’s a solution for a problem I don’t have.
I have to learn a whole new language that’s only used for NixOS just to do things I already have no difficulty doing with existing tooling.
I enthusiastically say, no, I’ve never used it, because, like I said, having that kind of learning curve just to set up an OS is kind of insane. Doesn’t matter if it’s the greatest thing since sliced bread.