36 comments

  • jedberg 2 hours ago

    My NFS story: In my first job, we used NFS to maintain the developer desktops. They were all FreeBSD and remote mounted /usr/local. This worked great! Everyone worked in the office with fast local internet, and it made it easy for us to add or update apps and have everyone magically get it. And when the NFS server had a glitch, our devs could usually just reboot and fix it, or wait a bit. Since they were all systems developers they all understood the problems with NFS and the workarounds.

    What I learned though was that NFS was great until it wasn't. If the server hung, all work stopped.

    When I got to reddit, solving code distribution was one of the first tasks I had to take care of. Steve wanted to use NFS to distribute the app code. He wanted to have all the app servers mount an NFS mount, and then just update the code there and have them all automatically pick up the changes.

    This sounded great in theory, but I told him about all the gotchas. He didn't believe me, so I pulled up a bunch of papers and blog posts, and actually set up a small cluster to show him what happens when the server goes offline, and how none of the app servers could keep running as soon as they had to get anything off disk.

    To his great credit, he trusted me after that when I said something was a bad idea based on my experience. It was an important lesson for me that even with experience, trust must be earned when you work with a new team.

    I set up a system where app servers would pull fresh code on boot and we could also remotely trigger a pull or just push to them, and that system was reddit's deployment tool for about a decade (and it was written in Perl!)

    • zh3 an hour ago

      Don't know about FreeBSD, but hard-hanging on a mounted filesystem is configurable (if the mount is essential, configure it that way; otherwise don't). To this day I see plenty of code written that hangs forever if a remote resource is unavailable.
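
      For anyone who hasn't fought this before: on Linux the behaviour is chosen per mount, roughly like the sketch below (option values are illustrative; check your mount.nfs man page):

          # "hard" (the default) retries forever, so clients hang if the
          # server goes away; "soft" gives up after timeo/retrans and hands
          # the application an I/O error instead.
          mount -t nfs -o soft,timeo=100,retrans=3 server:/export /mnt/data

          # or, as the hard variant in /etc/fstab:
          # server:/export  /mnt/data  nfs  hard,timeo=600,retrans=2  0  0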

  • buserror 2 hours ago

    I use NFS as a keystone of a pretty large, multi-million-dollar data center application. I run it on a dedicated 100Gb network with 9k jumbo frames and it works fantastically. I'm pretty sure it is still in use in many, many places because... it works!

    I don't need to "remember NFS", NFS is a big part of my day!
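
    For what it's worth, on that kind of pipe the client-side tuning is mostly a handful of mount options plus jumbo frames on the NIC; something like this (host and interface names are placeholders, and nconnect needs a reasonably recent kernel):

        # several TCP connections and 1 MiB read/write sizes to keep a fast link busy
        mount -t nfs -o vers=4.2,nconnect=8,rsize=1048576,wsize=1048576 nfs-server:/export /mnt/data

        # jumbo frames on the interface carrying the NFS traffic
        ip link set dev eth0 mtu 9000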

    • zh3 an hour ago

      On a smaller scale, I run multiple PCs in the house diskless with NFS root; it's so easy to just create copies on the server and boot into them as needed, it's almost one image per bloated app these days (the server also boots PCs into Windows using iSCSI/SCST, and old DOS boxes from 386 onwards with etherboot/samba). Probably a bit biased due to doing a lot of hardware hacking, where virtualisation solutions take so much more effort, but got to agree NFS (from V2 through V4) just works.
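
      For anyone curious what the NFS-root part looks like, it's roughly a kernel command line plus a matching export; the paths and addresses here are just placeholders:

          # appended to the kernel command line by the PXE/etherboot config;
          # the client grabs an address via DHCP and mounts the exported
          # tree as its root filesystem
          root=/dev/nfs nfsroot=192.168.1.10:/srv/diskless/pc1 ip=dhcp rw

          # matching line on the server in /etc/exports
          /srv/diskless/pc1  192.168.1.0/24(rw,no_root_squash,async,no_subtree_check)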

  • nasretdinov 43 minutes ago

    I have really mixed feelings about things like NFS, remote desktop, etc. The idea of having everything remote to save resources (or for other reasons) sounds really appealing in theory and, when it works, is truly great. However, in practice it's really hard to make these things worth it because of latency. E.g. for network block storage and NFS, latency is usually abysmal compared to even a relatively cheap modern SSD, and many applications now expect a low-latency file system and perform really poorly otherwise.

  • ryandrake 3 hours ago

    NFS is the backbone of my home network servers, including file sharing (books, movies, music), local backups, source code and development, and large volumes of data for hobby projects. I don't know what I'd do without it. Haven't found anything more suitable in 15+ years.

    • INTPenis 2 hours ago

      Same. The latest thing I did was put SNES state and save files on NFS so I can resume the same game from my laptop, to the retropi (TV), and even on the road over WireGuard.

  • AshamedCaptain 2 hours ago

    > There is also a site, nfsv4bat.org [...] However, be careful: the site is insecure

    I just find this highly ironic considering this is NFS we are talking about. Also, do they fear their ISPs changing the 40-year-old NFS specs in flight or what? Why even mention this?

  • mixmastamyk 3 hours ago

    What are most people using today for file serving? For our little LAN, sftp seems adequate, since ssh is already running.

    • nine_k an hour ago

      SMB2 for high-performance writable shares, WebDAV for high-performance read-only shares, also firewall-friendly.

      Sftp is useful, but it's pretty slow and only good for small amounts of data and a small number of files. (Or maybe I don't know how to cook it properly.)

    • pkulak 2 hours ago

      SMB has always worked great for me.

    • Arubis an hour ago

      NFS! At least on my localnet.

    • Narushia 16 minutes ago

      NFS v4.2. Easy to set up if you don't need authentication. Very good throughput, at least so long as your network gear isn't the bottleneck. I think it's the best choice if your clients are Linux or similar. The only bummer for me is that mounting NFS shares from Android file managers seems to be difficult or impossible (let alone NFSv4).
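
      For reference, an unauthenticated v4.2 export on Linux is about this much work (hostnames and paths are placeholders):

          # server: one line in /etc/exports, then `exportfs -ra`
          /srv/share  192.168.1.0/24(rw,async,no_subtree_check)

          # client:
          mount -t nfs -o vers=4.2 fileserver:/srv/share /mnt/share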

  • cramcgrab an hour ago

    ZFS includes NFS sharing; it's built in and still very handy!
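
    If memory serves, it's a one-liner per dataset (the dataset name is just an example, and the option syntax differs a bit between illumos and Linux):

        # hand the export off to the system's NFS server
        zfs set sharenfs=on tank/media

        # or with explicit export options instead of the defaults
        zfs set sharenfs="rw=@192.168.1.0/24" tank/media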

  • sunshine-o an hour ago

    If only I could mount an NFS share on Android ...

    • Narushia 34 minutes ago

      I looked into this a while ago and was surprised to find that no file explorer on Android seems to support it[1]. However, I did notice that VLC for Android does support it, though unfortunately only NFSv3. I was at least able to watch some videos from the share with it, but it would be nice to have general access to the share on Android.

      [1] Of course, I didn’t test every single app — there’s a bucketload of them on Google Play and elsewhere…

  • semi-extrinsic 2 hours ago

    I'm considering NFS with RDMA for a handful of CFD workstations plus one file server on a 25GbE network. Anyone know if this will perform well? Will be using XFS on some NVMe disks as the base FS on the file server.
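
    Concretely, what I have in mind is roughly the following (untested, option names from memory):

        # server side: let nfsd listen on RDMA as well as TCP
        # (in /etc/nfs.conf, assuming a recent-ish nfs-utils)
        [nfsd]
        rdma=y
        rdma-port=20049

        # client side:
        mount -t nfs -o rdma,port=20049,vers=4.2 fileserver:/srv/cfd /mnt/cfd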

    • fock an hour ago

      Quite some time ago I implemented NFS for a small HPC cluster on a 40GbE network. A colleague set up RDMA later, since at the start it didn't work with the available Ubuntu kernel. Full NVMe on the file server too. While the raw performance using ZFS was kind of underwhelming (mdadm+XFS was about 2x faster), the network performance was fine I'd argue: serial transfers easily hit ~4GB/s on a single node, and 4K benchmarking with fio was comparable to a good SATA SSD (IOPS + throughput) on multiple clients in parallel!
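
      Something along these lines reproduces the 4K test; the flags are illustrative rather than the exact ones we ran:

          # random 4K reads against a file on the NFS mount, with direct I/O
          # so the client page cache doesn't hide the network path
          fio --name=nfs-randread --filename=/mnt/nfs/fio.test --size=4G \
              --rw=randread --bs=4k --iodepth=32 --numjobs=4 --direct=1 \
              --ioengine=libaio --runtime=60 --time_based --group_reporting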

  • cramcgrab an hour ago

    Auto home! And JumpStart! Aah, the network is the computer!

  • Eikon an hour ago

    ZeroFS uses NFS/9P instead of FUSE!

    https://github.com/Barre/ZeroFS

  • 01HNNWZ0MV43FF 2 hours ago

    I'd seen a proposal to use loopback NFS in place of FUSE:

    https://github.com/xetdata/nfsserve
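
    The idea is that the userspace server listens on localhost and you mount it like any other NFS export; the port numbers below are just whatever the server is configured to listen on:

        # a toy userspace server usually has no lock manager, hence nolock
        mount -t nfs -o vers=3,tcp,nolock,port=11111,mountport=11111 127.0.0.1:/ /mnt/loopback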

  • holoduke 3 hours ago

    We are still using it for some pretty large apps. Still have not found a good and simple alternative. I like the simplicity and performance. Scaling is a challenge though.

    • hnlmorg 2 hours ago

      Unfortunately there doesn’t seem to be any decent alternative.

      SMB is a nightmare to set up if your host isn’t running Windows.

      sshfs is actually pretty good but it’s not exactly ubiquitous. Plus it has its own quirks and performs slower. So it really doesn’t feel like an upgrade.

      Everything else I know of is either proprietary, or hard to set up. Or both.

      These days everything has gone more cloud-oriented. Eg Dropbox et al. And I don’t want to sync with a cloud server just to sync between two local machines.

      • toast0 2 hours ago

        > SMB is a nightmare to set up if your host isn’t running Windows.

        Samba runs fine on my FreeBSD host? All my clients are Windows though.

        If I wanted to have a non-windows desktop client, I'd probably use NFS for the same share.

        • hnlmorg 2 hours ago

          It runs fine but it's a nightmare to set up.

          It's one of those tools that, unless you already know what you're doing, you can expect to sink several hours into trying to get the damn thing working correctly.

          It's not the kind of thing you can throw at a junior and expect them to get working in an afternoon.

          Whereas NFS and sshfs "just work". Albeit I will concede that NFSv4 was annoying to get working back when that was new too. But that's, thankfully, a distant memory.
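
          A rough illustration of the difference in moving parts (both snippets are sketches, not drop-in configs):

              # NFS: one line in /etc/exports, then `exportfs -ra`
              /home/shared  10.0.0.0/24(rw,no_subtree_check)

              # Samba: even a "minimal" smb.conf needs a global section, a
              # share section, and user management (smbpasswd -a alice) on top
              [global]
                 workgroup = WORKGROUP
                 security = user
              [shared]
                 path = /home/shared
                 read only = no
                 valid users = alice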

      • jjtheblunt 2 hours ago

        What happened to Transarc's DFS ?

        I looked, found the link below, but it seems to just fizzle out without info.

        https://en.wikipedia.org/wiki/DCE_Distributed_File_System

        Anyway, we used it extensively in the UIUC engineering workstation labs (hundreds of computers) 20+ years ago, and it worked excellently. I set up a server farm of Sun SPARCs 20 years ago but used NFS for that.

        • convolvatron 2 hours ago

          I used to administer AFS/DFS and braved the forest of platform ifdefs to port it to different unix flavors.

          Plusses were security (Kerberos), better administrative controls, and a global file space.

          Minuses were generally poor performance, middling small-file support, and awful large-file support, plus substantial administrative overhead. The wide-area performance was so bad the global namespace thing wasn't really useful.

          I guess it didn't cause as many actual multi-hour outages as NFS, but we used it primarily for home/working directories and left the servers alone, whereas the accepted practice at the time was to use NFS for roots and to cross-mount everything so that it easily got into a 'help, I've fallen and can't get up' situation.

          • jjtheblunt an hour ago

            that's very similar to what we were doing for the engineering workstations (hundreds of hosts across a very fast campus network)

            (off topic, but great username)

      • NexRebular 2 hours ago

        > SMB is a nightmare to set up if your host isn’t running Windows.

        It's very easy on illumos-based systems due to the integrated SMB/CIFS service.

      • fodkodrasz 2 hours ago

        SMB is not that terrible to set up (it definitely has its quirks), but Apple devices don't interoperate well in my experience. SMB from my Samba server performs very well from Linux and Windows clients alike, but the performance from a Mac is terrible.

        NFS support was lacking on Windows when I last tried. I used NFS (v3) a lot in the past, but outside a highly static, high-trust environment it was worse to use than SMB (for me). Especially the user-id mapping story is something I'm not sure is solved properly. That was a PITA at homelab scale; having to set up NIS was really something I didn't like, a road-warrior setup didn't work well for me, and I quickly abandoned it.

        • hnlmorg 2 hours ago

          > SMB is not that terrible to set up

          Samba can be. Especially when compared with NFS.

          > NFS support was lacking on windows when I last tried.

          If you need to connect from Windows then your options are very limited, unfortunately.

      • Spivak an hour ago

        I mean the decent alternative is object storage if you can tolerate not getting a filesystem. You can get an S3 client running anywhere with little trouble. There are lots of really good S3 compatible servers you can self-host. And you don't get the issue of your system locking up because of an unresponsive server.
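
        E.g. with any S3-compatible server (MinIO here purely as an example host), the client side is just the standard AWS CLI pointed at your own endpoint:

            # create a bucket, upload, list; nothing hangs if the server is
            # down, the commands just fail with an error
            aws --endpoint-url http://minio.local:9000 s3 mb s3://backups
            aws --endpoint-url http://minio.local:9000 s3 cp ./dump.tar.gz s3://backups/
            aws --endpoint-url http://minio.local:9000 s3 ls s3://backups/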

        I've always thought that NFS makes you choose between two bad alternatives with "stop the world and wait" or "fail in a way that apps are not prepared for."

        • hnlmorg 35 minutes ago

          If you don't need a filesystem, then your options are numerous. The problem is sometimes you do need exactly that.

          I do agree that object storage is a nice option. I wonder if a FUSE-like object storage wrapper would work well here. I've seen mixed results for S3 but for local instances, it might be a different story.

    • q3k 2 hours ago

      9P? Significantly simpler, at the protocol level, than NFS (to the point where you can implement a client/server in your language of choice in one afternoon).
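
      And the Linux kernel already has a 9P client (v9fs), so the mount side can be as small as this (address, port, and export name are placeholders):

          # 9p2000.L is the Linux dialect of the protocol
          mount -t 9p -o trans=tcp,port=564,version=9p2000.L,aname=/export 192.168.1.20 /mnt/9p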

    • rootnod3 3 hours ago

      True. But for a home server, for example, I absolutely love the simplicity. I have 6 Lenovo 720q machines, one of them as data storage just running simple NFS for quick daily backups before it pushes them to a NAS.