94 comments

  • nine_k a day ago

    I would like a comparison with runit, which is a very minimal but almost full-fledged init system. I see many similarities: control directories, no declarative dependencies, a similar set of scripts, the same approach to logging. The page mentions runit in passing, and even suggests using the chpst utility from it.

    One contrasting feature is parametrized services: several similar processes (like agetty) can be controlled by one service directory; I find it neat.
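
    For illustration, going from memory of the README (the exact way the instance parameter reaches the run script may differ), the template directory is something like:

      #!/bin/sh
      # /etc/nitro/getty@/run -- one template directory, many instances;
      # the part after "@" (e.g. tty2) selects the terminal
      exec agetty "$1" 38400 linux

    and `nitroctl start getty@tty2` instantiates it.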

    Another difference is the ability to initiate reboot or shutdown as an action of the same binary (nitroctl).

    Also, it's a single binary; runit has several.

    • J_McQuade 21 hours ago

      Last year I decommed our last couple of servers that ran processes configured using runit. It was a sad day. I first learned to write runit services probably about 15 years ago; it was very cool and very understandable, and I kind of just thought that's how services worked on Linux.

      Then I left Linux for about 5 years and, by the time I got back, Systemd had taken over. I heard a few bad things about it, but eventually learned to recognise that so many of those arguments were in such bad faith that I don't even know what the real ones are any more. Currently I run a couple of services on Pi Zeros streaming camera and temperature data from the vivarium of our bearded dragon, and it was so very easy to set them up using systemd. And I could use it to run emacsd on my main OpenSuse desktop. And a google-drive Fuse solution on my work laptop. "having something standard is good, actually", I guess.

      • rendaw 9 hours ago

        I made a process supervisor, probably less simple than nitro but much simpler (and more focused) than systemd.

        Aside from the overreach, I think there are some legitimate issues with systemd:

        - It's really hard to make services reliable. There are all sorts of events in systemd which will cause something to turn off and then just stay off.

        - It doesn't really help that the things you tell it to do (start/stop this service) use the same memory bits as when some dependency turns something on.

        - All the commands have custom, nonstandard outputs, mostly for human consumption. This makes it really hard to interface with (reliably) if you need to write tooling around systemd. INI files are not standardized, and systemd's dialect especially isn't (sketch below).

        - The two way (requires, requiredby) dependencies make the control graph really hard to get a big picture of

        FWIW here's mine, where I wrote a bit more about the issues: https://github.com/andrewbaxter/puteron/
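
        To make the third point concrete: the closest thing to a stable scripting surface I found is `systemctl show`, which at least emits bare key=value pairs (and recent systemctl grew `--output=json` on some verbs), so every wrapper ends up looking roughly like this (unit name is just an example):

          #!/bin/sh
          # poll a unit's state without scraping human-oriented output
          state=$(systemctl show -p ActiveState --value sshd.service)
          [ "$state" = "active" ] || echo "sshd is $state" >&2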

      • nine_k 19 hours ago

        The backlash against systemd was twofold. On one hand, when released and thrust upon distros via Gnome, it was quite rough around the edges, which caused both real problems and understandable irritation. Fifteen years later, the kinks are ironed out, but it took a rather long time. (Btrfs, released at about the same time, took even longer to stop being imprudent to use in production.)

        On the other hand, systemd replaces Unix (sort of like Hurd, but differently). It grabs system init, logging, authentication, DNS, session management, cron, daemon monitoring, socket activation, running containers, etc. In an ideal Red Hat world, I suppose, a bare-metal box should contain a kernel, systemd, podman, IP tools, and maybe sshd and busybox. This is a very anti-Unix, mainframe-like approach, but for a big consulting firm, like Red Hat / IBM, it is very attractive.

        • mickeyp 18 hours ago

          All this lather about doing it the UNIX way, whilst neglecting to point out that the old tooling was far worse. "Do one thing well" implies it was done well to begin with.

          DNS: Can you recite from memory how name lookups work on Linux? Ever had to track down problems with non-standard setups? `resolvectl` is not perfect, but it does let you control all of this stuff in one place, with a nice, orderly view of what does what.

          Init system: ever written the old SysV ones from scratch? Sure, they're just shell scripts, but did you remember to make yours re-entrant? What about forking or master-slave processes? Hope you got your PID-checking code just right...
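
          For anyone who never had the pleasure, here is a stripped-down sketch of the pidfile dance each of those scripts reimplemented (daemon name invented; real ones ran to hundreds of lines):

            #!/bin/sh
            # /etc/init.d/foo -- the boilerplate every daemon duplicated
            PIDFILE=/var/run/foo.pid
            case "$1" in
              start)
                # stale-pidfile check: file exists but the process is gone
                if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
                  echo "already running"; exit 0
                fi
                /usr/sbin/food &         # assumes food does NOT daemonize itself;
                echo $! > "$PIDFILE"     # if it forks, this pid is already wrong
                ;;
              stop)
                [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
                ;;
              restart)
                "$0" stop; sleep 1; "$0" start
                ;;
            esac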

          Containers: Docker is quite robust nowadays but it's not like it follows the 'UNIX philosophy' either. And systemd/nspawn at least lets you do namespacing and cgroups reasonably well and in a straightforward way...

          Mounts, etc.: let's not get into fstab and its idiosyncrasies.

          Logging: let's hope you set up logrotate properly or you're in for a surprise.

          And on it goes.

          Systemd is not perfect. But what it replaces was god-awful and far worse.

          • overfeed 6 hours ago

            > DNS: Can you from memory recite how name lookups work on Linux

            Yes, I can. I use systemd only because it's the default on Debian; I haven't had a reason to try Devuan yet.

            > ever written the old sysV ones from scratch?

            Many, many times, and I was only an enthusiast/user, not a sysadmin.

            > did you remember to make yours re-entrant?

            Dealing with PID files was only mildly annoying. Init scripts were very boilerplate-y, so I wouldn't forget anything after my copy-paste-edit-delete-unnecessary-parts cycle. In a single afternoon, one could bash out a CLI init-script generator that uses jinja2 templates and interactively asks <10 questions about the service.

            > Systemd is not perfect. But what it replaces was god-awful and far worse.

            Init systems shouldn't have anything to do with managing container lifecycles beyond managing the container-runner service using the usual unix interface (signals). Call me a purist, but system services shouldn't be containerized.

            An init system shouldn't be managing DNS or logging either, those should be standalone components. If they are problematic, there should be composable, domain-specific tools that solve them, instead of smooshing everything into systemd.

            SystemD wasn't the only possible way to solve those logging, DNS, or security policy problems, and I'm glad other PID 1 projects that focus on being init systems are thriving.

          • regularfry 7 hours ago

            So one could agree that something should be done in each of these cases. But that doesn't imply that the thing to be done should have been systemd, or even systemd-shaped. But no, it has borged the lot.

        • jcgl 15 hours ago

          No, systemd absolutely does not replace Unix.

          Systemd-the-project and systemd-the-service-manager (“init”) are two different things. The former is a project with numerous components (e.g. resolved) that actually _are_ rather modular; they usually require systemd-the-service-manager, but you (or your distro) can generally pick and choose the components you want.

          The service manager does indeed require some components to be gobbled up (udev comes to mind). But subsuming other subsystems shouldn’t be so anathema; the systemd people didn’t just think that the “one thing” of the Unix philosophy wasn’t being done well. Rather, the idea is that it was the wrong thing, i.e. classic Unix init was a tool operating at the wrong layer of abstraction. And in their eyes, a modern system needs a richer set of userspace primitives. So they made engineering decisions in pursuit of that goal.

        • gf000 9 hours ago

          You are repeating a bunch of "talking points" common among systemd critics that are not really backed up.

          First of all, it wasn't "thrust upon" anyone: in the case of Debian it was democratically selected multiple times in a ranked-ballot vote, and Arch adopted it independently as well. Maintainers were simply fed up with the absolutely unmaintainable mess that predated systemd -- it seems random-ass bash scripts are not suitable for a problem as complex as booting up a system, and doing it properly is much better.

          Logging sucked big time before; e.g. you didn't even get logs from early boot, before the syslog daemon was up - systemd moves it all to a single place. And if you are for some reason irritated by binary logging, you can freely forward it to text logs.

          Authentication is not done by systemd; are you thinking of PAM modules? The network service is not systemd either, it just ships under the same project's name - KDE's file browser is also a different thing from their terminal. Also, it's not mandatory to use. Logind, again, is not systemd itself. Scheduling services makes absolute sense for systemd's problem domain, as do monitoring and socket activation.

          You need some kind of order to build stuff on; the Unix philosophy is more of a feel-good convention than a real design guideline (and it doesn't apply in many cases).

        • jlarocco 6 hours ago

          > systemd replaces Unix

          This is the over-the-top hyperbole the OP was talking about. Even if systemd did "replace Unix", I don't know why anybody should care.

          As a long-time Linux user, it's clear to me that systemd took over because it's better. The old way of doing things was a complicated mess that had evolved over decades, and was difficult to use and understand, with lots of weird interactions and no consistency.

          Having a standard way to do admin tasks across all of the distros is valuable and makes Linux easier to use and more reliable.

        • adwn 18 hours ago

          Is following the "Unix way" a terminal value? I.e., is it desirable for itself, or is it just supposed to be a means to an end?

          In discussions such as these, the Unix philosophy of "do one thing and do it well" is often touted as a proxy for (and a necessary attribute of) "good design", as if all possible wisdom about the future of computing had been available to the creators of UNIX in 1969.

          • overfeed 6 hours ago

            > is it desirable for itself, or is it just supposed to be a means to an end?

            It's a means to multiple desirable ends. First, it establishes an interface, which makes developing tooling easier.

            Downstream of well-defined interfaces is that they make the individual components replaceable - so I can replace the default tool with one written in Rust, or a monobinary like BusyBox, and everything still works. I doubt the fathers of UNIX ever imagined the idea of BusyBox.

            If the individual components are replaceable, another desirable outcome is achieved: avoiding software monoculture, which is great for security and encourages innovation.

      • MortyWaves 15 hours ago

        The thing I don’t like about systemd is the inexplicable need to have multiple files for a service. Why can’t they all be declared in a single unit file?

        • broeng 14 hours ago

          What do you mean? They can be in a single service file.

          • MortyWaves 13 hours ago

            In all the examples I see, there’s a network unit file, a cron unit file, etc., all for one application. It would be nice to colocate them.

            Then there is composition of multiple applications too.

            With docker compose I have a single file for running all my services.

            With systemd it has to be N files per service, too.
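
            E.g. to replace a single cron line you end up with two units, because a timer has to be its own file pointing at a service (sketch, names invented):

              # backup.service
              [Unit]
              Description=nightly backup

              [Service]
              Type=oneshot
              ExecStart=/usr/local/bin/backup.sh

              # backup.timer
              [Timer]
              OnCalendar=daily

              [Install]
              WantedBy=timers.target

            plus a `systemctl enable --now backup.timer` to activate it.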

      • atoav 12 hours ago

        One of the main issues with systemd (as someone using it everywhere) is IMO that even experienced people can have a hard time understanding in which context a command runs.

        E.g. if you "just" want to automate a script that you were running from a terminal as a user, there can be a ton of problems and it is hard to figure them out ahead of time.
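
        One trick that helps: dump the environment a transient unit actually sees and diff it against your shell's (needs a reasonably recent systemd for --wait/--pipe):

          # what does the system-manager context look like?
          sudo systemd-run --wait --pipe env | sort > service-env.txt
          env | sort > login-env.txt
          diff login-env.txt service-env.txt  # compare PATH, HOME, DBUS_*, XDG_*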

    • fbarthez 12 hours ago

      There's an appropriately minimal comparison with runit in her slides (PDF) from a talk she gave in 2024: https://leahneukirchen.org/talks/#nitroyetanotherinitsy

      • cout 12 hours ago

        What I got from looking at that comparison is that runit starts a separate supervisor process for each process started. I like the cleaner process tree of nitro, but I wonder what the tradeoffs are for each.

    • smartmic a day ago

      Leah Neukirchen is an active member of the Void Linux community, so I expect a lot of cross-pollination here. It would be really great if she could write up something about how to use it on Void.

    • ethersteeds 19 hours ago

      > no declarative dependencies,

      Is that a selling point? Could you explain why?

      I've heard plenty of reasons why people find systemd distasteful as an init, but I've not heard much criticism of a declarative design.

      • petre 4 hours ago

        > Is that a selling point? Could you explain why?

        Because it's stupid easy? I just have to execute shell one-liners and set environment variables; no need to read lengthy docs and do stuff the systemd way.

        We use runit to supervise our services. It's 100% reliable, as opposed to systemd, which sometimes fails in mysterious ways.
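
        For reference, a complete runit service is two tiny scripts (service name invented; layout per the runit docs):

          #!/bin/sh
          # /etc/sv/myapp/run -- set env, drop privileges, exec the daemon
          exec 2>&1
          exec chpst -u myapp env PORT=8080 /usr/local/bin/myapp

          #!/bin/sh
          # /etc/sv/myapp/log/run -- svlogd rotates logs automatically
          exec svlogd -tt /var/log/myapp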

    • imiric a day ago

      I've gotten used to runit via Void Linux, and while it does the job of an init system, its UI and documentation leave something to be desired. The way logging is configured in particular was an exercise in frustration the last time I tried to set it up for a service.

      I wouldn't mind trying something else that is as simple, but has sane defaults, better documentation, and a more intuitive UI.

      • kccqzy 19 hours ago

        I like using systemd, but it doesn't have great documentation either. I often find myself unable to grok things by only reading the official documentation, and I have to resort to reading forum posts, other people's blog posts, or Stack Overflow. To me, documentation isn't good enough until it needs no third-party material.

      • nine_k a day ago

        Logging in runit seems simple (I don't remember running into problems), but indeed, the documentation leaves much to be desired. Could be a good thing to contribute to Void Handbook.

      • cbzbc a day ago

        runit doesn't always take care of services it manages in the same way as a proper init. From the man page:

        "If runsvdir receives a TERM signal, it exits with 0 immediately"

        • Arch-TK a day ago

          This is by design.

          runsvdir receiving TERM should only happen when stage 2 is triggered to end.

          Once that happens, the individual runsv processes are still supervising their individual tasks and can be requested to stop through their respective control sockets. It's how standard stage 3 is implemented.
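
          A typical stage 3 script is, going from memory of what Void ships, little more than:

            #!/bin/sh
            # /etc/runit/3 -- bring every service down, then let each runsv exit;
            # sv talks to runsv through the service's supervise/control FIFO
            sv force-stop /var/service/*
            sv exit /var/service/*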

  • andrewstuart2 a day ago

    I'm always torn when I see anything mentioning running an init system in a container. On one hand, I guess it's good that it's designed with that use case in mind. Mainly, though, I've just seen too many overly complicated things attempted (on greenfield even) inside a single container when they should have instead been designed for kubernetes/cloud/whatever-they-run-on directly and more properly decoupled.

    It's probably just one of those "people are going to do it anyway" things. But I'm not sure if it's better to "do it better" and risk spreading the problem, or leave people with older solutions that fail harder.

    • bityard a day ago

      Yes, application containers should stick to the Unix philosophy of, "do one thing and do it well." But if the thing in your docker container forks for _any_ reason, you should have a real init on PID 1.
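
      You usually don't even need to bake one in: Docker bundles tini behind a flag, and compose has an equivalent key, e.g.:

        # tini becomes PID 1: reaps zombies, forwards signals to your process
        docker run --init myimage

        # docker-compose.yml equivalent:
        #   services:
        #     myservice:
        #       init: true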

      • benreesman 18 hours ago

        There's nothing inherently wrong with containers in the abstract: virtualization is a critical tool in computer science (some might say it's difficult to define computer science without a virtual machine). There's not even anything wrong with this "less than a new kernel, more than a new libc" neighborhood.

        The broken, ugly, malignant thing is this one godawful implementation, Docker, and its attic-dwelling Quasimodo cousin, docker-compose.yml.

        It's trivial to slot namespaces (or jails, if you like the finer things in BSD) into a sane init system, process-id regime, and network-interface regime: it's an exercise in choosing good defaults for all the unshare-adjacent parameters.

        But a whole generation of SWEs memorized docker jank instead of Unix, and so now people are emotionally invested in it. You run compose to run docker to get Alpine and a node built on musl.

        You can just link node to musl. And if you want a chroot or a new tuntap scope? man unshare.

      • RulerOf 19 hours ago

        > you should have a real init on PID 1

        Got a handy list of those? My colleagues use supervisord and it kinda bugs me. Would love to know if it makes the list.

      • pas a day ago

        is there any issue besides the potential zombies? also, why can't the real pid1 do it? it sees all the processes after all.

        • MyOutfitIsVague a day ago

          Mostly just zombies and signal handlers.

          And your software can do it, if it's written with the assumption that it will be pid1, but most non-init software isn't. And rather than write your software to do so, it's easier to just reach for something like tini that does it already with very little overhead.

          I'd recommend reading the tini readme[0] and its linked discussion for full detail.

          [0]: https://github.com/krallin/tini

        • dathery a day ago

          The main other problem is that the kernel doesn't register default signal handlers for signals like SIGTERM if the process is PID 1. So if your process doesn't register its own signal handlers, it's hard to kill (you have to use SIGKILL). I'm sure anyone who has used Docker a lot has run into containers that seem to just ignore signals -- this is the usual reason why.
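
          It's easy to demo, since `sleep` installs no handlers (container names arbitrary):

            docker run -d --name t alpine sleep 600
            time docker stop t   # ~10s: PID 1 ignores the SIGTERM,
                                 # SIGKILL lands after the grace period
            docker rm t
            # with an init forwarding the signal, it stops immediately:
            docker run -d --init --name t2 alpine sleep 600
            time docker stop t2  # well under a second
            docker rm t2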

          > also, why can't the real pid1 do it? it sees all the processes after all.

          How would the real PID 1 know if it _should_ reap the zombie? It's normal to have some zombie processes -- they're just processes whose exit statuses haven't been reaped yet. If you force-reaped a zombie you could break a program that just hasn't yet gotten around to checking the status of a subprocess it spawned.

          • immibis 20 hours ago

            Processes only reap their direct children. Init is special because orphaned processes are reparented to init, which then has to reap them.

    • mikepurvis a day ago

      From my experience in the robotics space, a lot of containers start life as "this used to be a bare metal thing and then we moved it into a container", and with a lot of unstructured RPC going on between processes, there's little benefit in breaking up the processes into separate containers.

      Supervisor, runit, systemd, even a tmux session are all popular options for how to run a bunch of stuff in a monolithic "app" container.

      • palata a day ago

        My experience in the robotics space is that containers are a way to not know how to put a system together properly. It's the quick equivalent of "I install it on my Ubuntu, then I clone my whole system into a .iso and I call that a distribution". Most of the time distributed without any consideration for the open source licences being part of it.

        • mikepurvis a day ago

          I've always advocated against containers as a means of deploying software to robots simply because to my mind it doesn't make sense— robots are full of bare-metal concerns, whether it's udev rules, device drivers, network config, special kernel or bootloader setup, never mind managing the container runtime itself including startup, updating, credentials, and all the rest of it. It's always felt to me like by the time you put in place mechanisms to handle all that crap outside the container, you might as well just be building a custom bare metal image and shipping that— have A/B partitions so you copy an update from the network to the other partition, use grub chainloading, wipe hands on pants.

          The concern regarding license-adherence is orthogonal to all that but certainly valid. I think with the ROS ecosystem in particular there is a lot of "lol everything is BSD/Apache2 so we don't even have to think about it", without understanding that these licenses still have an attribution requirement.

          • westurner a day ago

            For workstations with GPUs and various kernel modules, rpm-ostree + GRUB + Native Containers for the rootfs and /usr and flatpaks etc on a different partition works well enough.

            ostree+grub could be much better at handling failover like switches and rovers that then need disk space for at least two separate A/B flash slots and badblocks and a separate /root quota. ("support configuring host to retain more than two deployments" https://github.com/coreos/rpm-ostree/issues/577#issuecomment... )

            Theoretically there's a disk space advantage to container layers.

            Native Containers are bare-metal host images as OCI Images, which can be stored in OCI Container Registries (or Artifact Registries, since packages can be stored too). GitHub, GitLab, Gitea, GCP, and AWS all host OCI Container/Artifact Registries.

            From https://news.ycombinator.com/item?id=44401634 re bootc-image-builder and Native Containers and ublue-os/image-template, ublue-os/akmods, ublue-os/toolboxes w/ "quadlets and systemd" (and tini is already built-in to Docker and Podman) though ublue/bazzite has too many patches for a robot:

            > ostree native containers are bootable host images that can also be built and signed with a SLSA provenance attestation; https://coreos.github.io/rpm-ostree/container/

            SBOM tools can scan hosts, VMs, and containers to identify software versions and licenses for citation and attribution. (CC-BY-SA requires Attribution if the derivative work is distributed. AGPL applies to hosted but not necessarily distributed derivative works. There's choosealicense.com , which has a table of open source license requirements in an Appendix: https://choosealicense.com/appendix/ )

            BibTeX doesn't support schema.org/SoftwareApplication or subproperties of schema:identifier for e.g. the DOI URN of the primary schema.org/ScholarlyArticle and its :funder(s).

            ...

            ROS on devices, ROS in development and simulation environments;

            Conda-forge and RoboStack host ROS (Robot Operating System) as conda packages.

            RoboStack/ros-noetic is ROS as conda packages: https://github.com/RoboStack/ros-noetic

            gz-sim is the new version of gazebosim, a simulator for ROS development: https://github.com/conda-forge/gz-sim-feedstock

            From https://news.ycombinator.com/item?id=44372666 :

            > mujoco_menagerie has Mujoco MJCF XML models of various robots.

            Mujoco ROS-compatibility: https://github.com/google-deepmind/mujoco/discussions/990

            Moveit2: https://github.com/moveit/moveit2 :

            > Combine Gazebo, ROS Control, and MoveIt for a powerful robotics development platform.

            RoboStack has moveit2 as conda packages with clearly-indicated patches for Lin/Mac/Win: ros-noetic-moveit-ros-visualization.patch: https://github.com/RoboStack/ros-noetic/blob/main/patch/ros-...

            ...

            Devcontainer.json has been helpful for switching between projects lately.

            devcontainer.json can reference a local container/image:name or a path to a ../Dockerfile. I personally prefer to build a named image with a Makefile, though vscode Remote Containers (devcontainers extension) can build from a Dockerfile and, if the devcontainer build succeeds, start code-server in the devcontainer and restart vscode as a client of the code-server running in the container so that all of the tools for developing the software can be reproducibly installed in a container isolated from the host system.

            It looks like it's bootc or bootc-image-builder for building native container images?

            bootc-image-builder: https://github.com/osbuild/bootc-image-builder

        • petre 3 hours ago

          Except when you need a different version of PostgreSQL than the one packaged by Ubuntu; then you have to use PPAs or something even more horrific: snaps. Otherwise you just put your stuff in a container, problem solved. I'm not for containerizing everything, since it adds useless complexity. But it's a useful tool.

          • palata 2 hours ago

            I guess my point was that if one is serious about shipping a system on a robot, one doesn't use Ubuntu :-).

            But sure, it's easier to throw everything in a container and that's why people do it.

      • yjftsjthsd-h a day ago

        > Supervisor, runit, systemd, even a tmux session are all popular options for how to run a bunch of stuff in a monolithic "app" container.

        Did docker+systemd get fixed at some point? I would be surprised to hear that it was popular given the hoops you had to jump through last time I looked at it

        • mikepurvis a day ago

          It's only really fixed in podman, with the special `--systemd=always` flag. Docker afaik still requires manually disabling certain services that will conflict with the host and then running the whole thing as privileged— basically, a mess.

      • a day ago
        [deleted]
      • a day ago
        [deleted]
      • sho_hn a day ago

        tmux?! Please share your war stories.

        • mikepurvis a day ago

          Not my favoured approach, but for early stage systems where proper off-board observability/alerting is not yet in place, tmux can function as a kind of ssh-accessible dashboard displaying the stdout of key running processes, and also allowing some measure of inline recovery— like if a process has crashed, you can up-arrow and relaunch it in the same environment it crashed out of.

          Obviously not an approach that scales, but I think it can also work decently well as a dev environment, where you want to run "stock" for most of the components in the system, and just be syncing in an updated workspace and restarting the one bit being actively developed on. Being able to do this without having to reason about a whole tree of interlinked startup units or whatever does lower the barrier to entry somewhat.
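
          Concretely, the "dashboard" amounts to something like this (node names invented):

            #!/bin/sh
            # one ssh-able session, one window per key process
            tmux new-session -d -s robot -n camera 'exec camera_node'
            tmux new-window  -t robot -n lidar 'exec lidar_node'
            tmux new-window  -t robot -n nav 'exec nav_node'
            # later: ssh in, `tmux attach -t robot`, up-arrow to relaunch anything dead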

          • PhilipRoman 18 hours ago

            One advantage is that if the process has some sort of console on its stdin, you can do admin work easily. With init systems you now have to configure named pipes, worry about them blocking, have output in a separate place, etc.

    • simonw a day ago

      I've used several hosting providers that charge by the container - Fly.io and Render and Google Cloud Run.

      I often find myself wanting to run more than one process in a container for pricing reasons.

    • a day ago
      [deleted]
  • Ericson2314 a day ago

    If I may plug my friend and colleague's work, https://nixos.org/manual/nixos/unstable/#modular-services has just landed in Nixpkgs.

    This will be a game changer for porting NixOS to new init systems, and even new kernels.

    So, it's a good time to be experimenting with things like Nitro here!

  • pdmccormick 5 hours ago

    I love projects like these. They touch upon so many low level aspects of Unix userlands. I appreciate how systemd ventured beyond classical SysV and POSIX, and explored how Linux kernel specific functionality could be put to good use. But I also hope that it is not the last word, and that new ideas and innovations in this space can be further explored.

    Recently I implemented a manufacturing-time device provisioning process that consisted of a Linux kernel (with the efistub), netbooted directly from the UEFI firmware, with a compiled-in bundled initramfs with a single init binary written in Go as the entire userland. It's very freeing when the entire operating environment consists of code you import from packages and directly program in your high level language of choice, as opposed to interacting with the system through subprocesses and myriad whacky and wonderfully different text configuration files.

  • stock_toaster a day ago

    It will be interesting to compare this to dinit[1], which is used by chimera-linux.

    Giving the readme a brief scan, it doesn't look like it currently handles service dependencies?

    [1]: https://github.com/davmac314/dinit

    • nine_k a day ago

      Nitro does not handle service dependencies declaratively; you cannot get a neat graph of them in one command.

      You can still request other services to start in your setup script, and expect nitro to wait and retry starting your service until the services it depends on are running. To get a nice graph, you can write a simple script using grep. OTOH, it's easy to forget to require the shutdown of dependent services when your service goes down, and there's no way to discover that using a nitro utility.
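
      In practice that means something like this in the dependent service's setup script (names invented, and I may be misremembering the exact nitroctl verbs):

        #!/bin/sh
        # /etc/nitro/webapp/setup -- request the dependency, fail until it's up;
        # nitro then retries starting webapp
        nitroctl start postgres
        nitroctl check postgres || exit 1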

    • hippospark 17 hours ago

      I used dinit in Artix Linux. It is lightweight and impressive (https://artixlinux.org/faq.php)

  • usr1106 12 hours ago

    I wrote my own init system in C from scratch some 13 years ago. It was more work than either I or the manager who approved it had anticipated. It served its purpose: bringing up a Linux GUI and some backend for it on not-so-capable hardware in n seconds (I don't remember n, but it was impressive).

    It was a nice programming exercise. I wouldn't be surprised if even back then something like that already existed, and the whole effort just demonstrated a lack of insight into what was readily available.

    Probably the code still exists on some backup I should not have. I have not looked back and don't know... The company that owned the rights has gone out of business.

    Edit: After typing this, I remembered that a colleague of mine wrote yet another init at the same company. Mine had no dependencies except libc and not many features. The new one was built around libevent, probably a bit more advanced.

  • runako a day ago

    The name & function overlap with AWS Nitro is severe:

    https://docs.aws.amazon.com/whitepapers/latest/security-desi...

    • LeFantome 15 hours ago

      The name, yes, but an init system and a hypervisor are pretty different.

    • stonogo 21 hours ago

      I don't foresee any problems. One is an init system anyone can use and the other one is an internal corporate KVM fork nobody else cares about.

  • Flux159 a day ago

    How does this compare to s6? I recently used it to set up an init system in docker containers & was wondering if nitro would be a good alternative (there are a lot of files I had to set up via s6-overlay, which wasn't as intuitive as I would've hoped).

    • nine_k a day ago

      S6 is way more complex and rich. Nitro or runit would be simpler alternatives; maybe even https://github.com/krallin/tini.

      • Flux159 a day ago

        Thanks! Reading some of your other comments, it seems like runit or nitro may not have been a good choice for my use case? (I'm using dependencies between services so there is a specific order enforced, and also logging for 3 different services.)

        You seem to know quite a bit about init systems - for containers in particular, do you have some heuristics on which init system would work best for specific use cases?

        • ItsHarper a day ago

          dinit is another one with service dependency support

  • lrvick a day ago

    At Distrust, we wrote a dead-simple init system in Rust that is used by a few clients in production with security-critical enclave use cases.

    <500 lines, using only the Rust standard library to make auditing easy.

    https://git.distrust.co/public/nit

    • nine_k a day ago

      Likely neat (33% larger than nit), but the readme only explains how to build it, not its interface or functioning.

      • lrvick a day ago

        Yeah, we only recently broke it out as a standalone repo/binary, as everyone historically vendored it, so docs will get love soon. It will be part of the next stagex release, built and signed deterministically by multiple parties, as stagex/user-nit.

        To run it, all you need to know is: put it in your filesystem as "/init", then add this to your kernel command line, naming the binary you want nit to pivot to after bringing the system up:

        nit.target=/path/to/binary

        That's it. Minimum viable init for single-application appliance/embedded Linux use cases.

        nit and your target binary are the only things you actually need to have in your CPIO root filesystem. Can be empty otherwise.

        • nine_k a day ago

          So it's basically like tini (keep a single executable running), but in Rust?

          • lrvick a day ago

            Yep. The less C code in production the better.

            • lrvick 20 hours ago

              Of note, it also handles platform-specific bring-up system calls, basic filesystem setup, etc., so it is much more suitable for embedded, server, or enclave use cases than tini, imo. It is mostly used for Nitro Enclaves today.

  • awestroke 16 hours ago

    An init system without the ability to specify dependencies? Without user/group configuration? Ordering must be manually configured? No parallel service launching? No resource management?

    Please don't call this an init system. It's a barebones process supervisor.

    • beagle3 10 hours ago

      It actually does all these things. Quite well, even - in my experience better than systemd.

      I haven’t used nitro, but I’ve been using daemontools (which nitro is an evolution of) for decades. Incredibly easy to use, understand, and control; incredibly stable.

      There is no well-defined way to do dependencies (what if your dependency dies 3 seconds into the process? There are many right answers). The djb/daemontools way is just: “it’s your problem, but here are reliable, simple, cheap tools to start, stop and monitor your dependencies”.
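
      Those tools being, concretely:

        svc -u /service/foo   # "up": start it, restart it if it dies
        svc -d /service/foo   # "down": stop it, don't restart
        svc -t /service/foo   # send it a TERM
        svstat /service/*     # one status line per service
        svok /service/foo     # exit 0 iff a supervise process is running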

      • vhantz an hour ago

        What makes it better than systemd for you?

  • networked 11 hours ago

    GitHub repository with the issue tracker: https://github.com/leahneukirchen/nitro.

  • garganzol a day ago

    I am interested in using it as a process supervisor in server docker containers. It is clear that it can be compiled from sources, but something like vuxu.org/nitro/install.sh would be super helpful.

  • zoobab 4 hours ago

    Does it require PID 1?

  • GuinansEyebrows a day ago

    love to see new init projects. how does it stack up against runit (the last one i really familiarized myself with on void linux)?

    • kragen a day ago

      She credits runit and daemontools as inspiration, and it looks extremely similar. I hope that at some point she writes a comparison explaining what Nitro does differently from runit and why.

      • cbzbc a day ago

        runit doesn't propagate SIGTERM to services it starts.

        • kragen a day ago

          Hmm, is that desirable? If someone's going around sending SIGTERM to random processes they might also send SIGKILL, and there's no way Nitro can propagate SIGKILL to processes it starts.

          • cbzbc 15 hours ago

            It is, because SIGTERM is traditionally understood as the trigger for a shutdown. Docker - for instance - will send a SIGTERM to PID 1 when a container is stopped - which goes back to a previous comment here about using a real init as PID 1 if the thing in your container forks: https://news.ycombinator.com/item?id=44990092

            • kragen 7 hours ago

              Interesting! I didn't know that—I thought that when you told sysvinit to change its runlevel you normally used some slightly richer interface than signals.

        • atahanacar 20 hours ago

          It does if you use SIGHUP.

  • a day ago
    [deleted]
  • CartwheelLinux a day ago

    Bring back the init wars! /S

    Username relevant...

    I got into Linux right before the init wars, and while they were hectic times, they brought a lot of attention, discussion, and options to Linux.

  • prettyman a day ago

    [dead]

  • unit149 a day ago

    [dead]

  • axlee a day ago

    I'd recommend changing the name; nitro is already a semi-popular server engine for node.js: https://nitro.build/

    • nine_k a day ago

      Any well-known generic word is very likely to already have been used by a bunch of projects, some of them already prominent. By now, the best project name is a pronounceable but unique string, for ease of search engine use. Ironically, "systemd" is a good name in this regard, as are "runit" or even "s6".

      • lrvick a day ago

        I use tiny init systems regularly in AWS Nitro Enclaves. Having the enclave and init system both named nitro is not ideal.

        • nine_k a day ago

          Dinit, runit, tini -- all avoid the name clash :)

      • Y_Y a day ago

        > Any well-known generic word is very likely to already have been used by a bunch of projects,

        Are you sure? There are lots of words, and not so many projects that use words like these as their names.

        Of the 118179 packages I see on this Ubuntu 18.04 system, I can roughly ask how many have names that are dictionary (wamerican) words:

          comm -12 <(apt-cache dumpavail | awk -F': ' '/^Package:/{sub(/^lib/,"",$2); print $2}' | sort -u) /usr/share/dict/words | wc -l
        
        This gives 820 (or about 1000 if you allow uppercase). Not so scientific, but I think a reasonable starting point.
      • entropie a day ago

        nitronit, obviously

    • mperham 14 hours ago

      I think I would have gone with nitr0.

    • petre 3 hours ago

      One of the hard things in computer science: naming things.