> UEFI fixes that to some extent, but it’s a pain to maintain the UEFI entries manually and change them every time the kernel updates.
… you don't have to update the UEFI entries every time the kernel updates. (I guess you might if you build a kernel w/ CONFIG_EFI_STUB and place the new kernel under a different filename than what the UEFI boot entry points to … but I was under the impression that that'd be kind of an unusual setup, and I thought most of us booting w/ EFI were doing so with Grub.)
Even if you use CONFIG_EFI_STUB, you can set up a post-update hook that automatically calls efibootmgr.
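A minimal sketch of what such a hook could look like — the disk, partition number, label, and kernel paths here are all assumptions you'd adjust for your own layout:

```shell
#!/bin/sh
# Hypothetical post-update hook: recreate the UEFI boot entry for an
# EFI-stub kernel. DISK/PART/LABEL and the paths are placeholders.
set -eu

DISK=/dev/nvme0n1    # disk holding the ESP (assumption)
PART=1               # ESP partition number (assumption)
LABEL="Linux (EFI stub)"

# Delete any stale entry with the same label...
efibootmgr | awk -v l="$LABEL" '$0 ~ l { gsub(/Boot|\*/, "", $1); print $1 }' \
  | xargs -r -n1 efibootmgr -q -B -b

# ...then recreate it pointing at the freshly installed kernel.
efibootmgr -q -c -d "$DISK" -p "$PART" -L "$LABEL" \
  -l '\vmlinuz-linux' \
  -u 'root=/dev/nvme0n1p2 rw initrd=\initramfs-linux.img'
```

On Arch this would go in a pacman hook; on Debian/Red Hat systems, in a kernel postinst script.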
I've done a lot of headless/diskless stuff. I haven't done much for years, because my NAS only has gigabit Ethernet ports. I can cascade them and get four Gbps downstream, but it's still painful.
I have recently upgraded my house to 10Gbps Ethernet, with only one room still stuck at gigabit, and unfortunately, it's my main office. I'm working on getting the drop there now (literally, just taking a break here).
Even once I'm done, accessing an iSCSI drive over 10GbE will be 4-8 times slower than a local NVMe drive, but it will sure be a lot better than it was!
Ideally, I could run VMs on the NAS and have great performance, but that's another hardware upgrade...
Something worth mentioning here is that iSCSI gets quite unhappy on congested networks, or with packet loss caused by incast traffic.
To make this actually work well, consider adjusting your switches' QoS settings to carve out a priority VLAN for iSCSI traffic,
or go with a north-south/east-west architecture, so there's an entirely separate network just for iSCSI. Control plane vs. data plane.
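The Linux-side half of the priority-VLAN idea can be sketched like this — VLAN ID, addresses, and the priority value are assumptions, and the switch still needs matching vendor-specific QoS config:

```shell
# Hypothetical setup: dedicated VLAN 100 for iSCSI on eth0, mapping every
# internal priority to 802.1p priority 5 on egress so the switch can
# prioritize storage frames.
ip link add link eth0 name eth0.100 type vlan id 100 \
    egress-qos-map 0:5 1:5 2:5 3:5 4:5 5:5 6:5 7:5
ip addr add 10.99.0.2/24 dev eth0.100   # iSCSI-only subnet (assumption)
ip link set eth0.100 up
```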
Nice. I'm extra fond of ZFS backed network root filesystem, because it lets you put an OS on ZFS without needing to deal with ZFS support in that OS. (One of these days I want to try OpenBSD with its root on NFS on ZFS, either from Linux or FreeBSD.)
Does anyone have an opinion on iSCSI vs NBD?
Pretty cool! You could also boot into an ephemeral minimal initrd that displays a selection menu instead of doing it in iPXE, then grab the chosen kernel and initrd from the network and kexec into them without a reboot.
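The kexec step from inside such an initrd is only a few commands — the server URL, filenames, and kernel command line here are placeholders:

```shell
#!/bin/sh
# Sketch: fetch the selected kernel + initrd and jump into them directly,
# skipping firmware/POST entirely. URL and cmdline are assumptions.
SERVER=http://boot.example.lan

wget -O /tmp/vmlinuz    "$SERVER/vmlinuz"
wget -O /tmp/initrd.img "$SERVER/initrd.img"

# Stage the new kernel in memory...
kexec -l /tmp/vmlinuz --initrd=/tmp/initrd.img \
      --command-line="root=/dev/nvme0n1p2 rw"
# ...and execute it, replacing the running kernel.
kexec -e
```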
You might find it worth upgrading to 10Gbps if you continue down this road. The Mikrotik CRS-309 has served me well, along with a couple of Intel X520-DA2s. I believe those NICs can do iSCSI natively and pass the session to the operating system with iBFT.
SFP28 might be cheap enough now too, I'm not sure...
I used a similar iPXE setup for a robotics cluster - every robot booted from the same image, then Kubernetes managed the container orchestration. It was fun.
NFS diskless is the more common approach I've used but this is very cool.
When I tried root-on-NFS I had a lot of issues. The Red Hat and Arch package managers don't seem to like it (presumably a SQLite thing?).
You can download the rootfs, extract it to a ramdisk, and just run in memory. This is fast for everything. Unfortunately, memory just got super expensive. Fortunately, Linux requires ~no memory to do many useful things.
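The run-from-RAM approach is typically done from an initramfs and boils down to a few steps — the URL, tmpfs size, and archive format below are assumptions:

```shell
# Sketch: pull the root filesystem into a ramdisk, then pivot into it.
# After switch_root, the machine runs entirely from memory and the
# network share is no longer needed.
mount -t tmpfs -o size=4G tmpfs /newroot
curl -fsSL http://boot.example.lan/rootfs.tar.zst | tar --zstd -x -C /newroot

# Hand PID 1 over to the in-memory root.
exec switch_root /newroot /sbin/init
```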
I would probably recommend looking into NVMe over TCP rather than iSCSI, especially for fast NVMe drives.
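For reference, connecting an NVMe/TCP namespace with nvme-cli looks roughly like this — the target address and NQN are placeholders:

```shell
# Hedged example: attach a remote NVMe-over-TCP namespace.
# Address, port, and NQN are assumptions for illustration.
modprobe nvme-tcp
nvme connect -t tcp -a 192.168.10.5 -s 4420 \
     -n nqn.2024-01.lan.example:nvme:store0

# The namespace then appears as an ordinary /dev/nvmeXnY block device.
nvme list
```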