81 comments

  • NikolaNovak 3 hours ago

    I appreciate the message of this article. I've played with half a dozen types of home NAS / RAID / storage solutions over the decades.

    The best way I can describe it is:

There are people who just want to use a car to get from A to B; there are those who enjoy the act of driving, maybe take it to the track on a lapping day; and there are those who enjoy having a shell of a car in the garage and working on it. There's plenty of overlap, of course; it's a Venn diagram :-).

My approach / suggestion: understand what type you are in relation to any given technology, versus what type the author is.

I will never resent the time (oh God, so much time!) I've spent in the past mucking with homelabs and storage systems. Good memories and tons of learning! Today I have family and kids and just need my storage to work. I'm in a different Venn circle than the author - sure, I have the knowledge and experience and could conceivably save a few bucks (eh, not as much of a given as articles make it seem ;), as long as I value my time appropriately low and don't mind the necessary upkeep and the potential scheduled and unscheduled "maintenance windows" for my non-techie users.

But I must admit I'm in the turn-key solution phase of my life and have cheerfully enjoyed a big-name NAS over the last 5 years or so :).

The trade-off with old computers harnessed as a NAS is the often increased space, power, and setup/patching/maintenance requirements, in exchange for (hopefully) some learning experience and a sense of control.

    • embedding-shape 3 hours ago

      > But I must admit I'm in the turn-key solution phase of my life and have cheerfully enjoyed a big-name NAS over last 5 years or so :).

You know, I thought I was too, so I threw in the towel and migrated one of my NAS boxes to TrueNAS, since it's supposed to be one of those "turn-key solutions that doesn't require maintenance", and everything got slower and harder to maintain, and it even managed to somehow screw up one of my old disks when I added it to my pool.

The next step after that was to migrate to NixOS and bite the bullet to ensure the stuff actually works. I'd love to just give someone money and not have to care, but it seems the motto of "If you want something done correctly, you have to do it yourself" lives deep in me, and I just cannot stomach losing the data on my NAS, so it ends up really hard to trust any of those paid-for solutions when they're so crap.

      • Uvix 2 hours ago

I wouldn't call TrueNAS, or anything where you're installing an OS on custom hardware, "turn-key". That label is reserved for the Synologys and UGREENs and Ubiquitis of the world.

      • tylerflick 2 hours ago

Are people doing more than serving SMB shares with their NASes? I feel like I'm missing out on something.

        • WXLCKNO 37 minutes ago

          I'm running Truenas Scale on my old i7 3770 with 16GB DDR3.

          Obviously got a bunch of datasets just for storage, one for time machine backups over the network and then dedicated ones for apps.

I'm using it for almost all my self-hosted apps.

          Home Assistant, Plex, Calibre, Immich, Paperless NGX, Code Server, Pi-Hole, Syncthing and a few others.

I've got Tailscale on it, and I'm using a convenience package called caddy-reverse-proxy-cloudflare to make my apps available on subdomains of my personal domain (which is on Cloudflare) by just adding labels to the Docker containers.

          And since I'm putting the Tailscale address as the DNS entry on CloudFlare, they can only be accessed by my devices when they're connected to Tailscale.

          I think at this point what's amazing is the ease with which I can deploy new apps if I need something or want to try something.

          I can have Claude whip up a docker compose and deploy it with Dockge.
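To make the label-driven pattern above concrete, here's a minimal compose sketch. It assumes lucaslorentz/caddy-docker-proxy-style labels (which the convenience package mentioned appears to build on); the service name, image, paths, and domain are all placeholders, not details from this setup:

```yaml
# Hypothetical app exposed on a personal subdomain via Caddy labels.
services:
  app:
    image: ghcr.io/example/app:latest   # placeholder image
    restart: unless-stopped
    volumes:
      - /mnt/tank/apps/app:/data        # dataset dedicated to this app
    labels:
      # caddy-docker-proxy reads these and generates the reverse-proxy config
      caddy: app.example.com
      caddy.reverse_proxy: "{{upstreams 80}}"
```

Pointing the DNS record for app.example.com at the Tailscale address then keeps it reachable only from devices on the tailnet.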

        • hengheng 2 hours ago

Depending on how you build it, you could run Home Assistant next to your SMB shares, which lends itself to all sorts of add-ons, such as calibre-web for displaying eBooks and synchronizing reading progress.

Of course, Gitea and its ecosystem, or similar CI/CD, can be a fun thing to dabble with if you aren't totally over that from work.

          Another fun idea is to run the rapidly developing immich as a photo storage solution. But in general, the best inspiration is the awesome-selfhosted list.

        • maxvu an hour ago

          Running a home server seems relatively popular for all kinds of things. Search term "homelab" brings up a culture of people who seem largely IT-adjacent, prefer retired DC equipment, experiment with network configurations as a means of professional development and insist on running everything in VMs. Search term "self-hosted", on the other hand, seems to skew towards an enterprise of saturating a Raspberry Pi's CPU with half-hearted and unmaintained Python clones of popular SaaS products. In my experience — with both hardware and software vendoring — there is a bounty of reasonable options somewhere in between the two.

        • spiffytech 2 hours ago

          People want all kinds of things besides literal SMB shares:

          - Other network protocols (NFS, ftp, sftp, S3)

          - Apps that need bulk storage (e.g., Plex, Immich)

          - Syncthing node

          - SSH support (for some backup tools, for rsync, etc)

          - You're already running a tiny Linux box in your home, so maybe also Pihole / VPN server / host your blog?

          You've got compute attached to storage, and people find lots of ways to use that. Synology even has an app store.

        • SubiculumCode 2 hours ago

I personally don't get what people are serving with a home NAS. Movies/music/family photos are all I can think of, personally... and those don't seem that compelling to me compared to cloud.

        • Uvix 2 hours ago

          I'm hosting a couple of apps in Docker on mine. (Pihole, Jellyfin, Audiobookshelf, and Bitwarden.)

        • ErroneousBosh an hour ago

          I run NFS and Postgres to enable multiple-machine video editing.

      • imiric 7 minutes ago

It's curious that you would choose NixOS for a system that "just works". As much as I like the core ideas of Nix(OS) (reproducibility, declarative configuration, snapshots, and atomic upgrades/rollbacks), having used it for a few years on several machines, I've found it to be the opposite of that. It often requires manual intervention before an upgrade, since packages are frequently renamed and API changes are common. The Nix store caches a lot of data, which is good, but it also requires frequent garbage collection to recover space. The errors when something goes wrong are cryptic, and troubleshooting is an exercise in frustration. The documentation is some variation of confusing, sparse, outdated, or nonexistent. I'm sure that to a Nix veteran these might not be issues, but even after a few years of usage, I find it as hostile and impractical to use as on the first day. Using it for a server would be unthinkable for me.

        For my personal NAS machine, I've used a Debian server with SnapRAID and mergerfs for nearly a decade now, using a combination of old and new HDDs. Debian is rock-solid, and I've gone through a couple of major version upgrades without issues. This setup is flexible, robust, easy/cheap to expand, and requires practically zero maintenance. I could automate the SnapRAID sync and "scrub", but I like doing it manually. Best of all, it's conceptually and technically simple to understand, and doesn't rely on black magic at the filesystem level. All my drives are encrypted with LUKS and use standard ext4. SnapRAID is great, since if one data drive fails, I don't lose access to the entire array. I've yet to experience a drive failure, though, so I haven't actually tested that in practice.
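For anyone curious what that setup looks like concretely, here's a minimal sketch; mount points, drive counts, and paths are illustrative, not the actual config:

```conf
# /etc/snapraid.conf (sketch): parity on a dedicated drive, data on the rest
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
exclude *.tmp

# /etc/fstab: mergerfs pools the data drives into one mount point
/mnt/disk* /mnt/pool fuse.mergerfs defaults,category.create=mfs,allow_other 0 0
```

`snapraid sync` and `snapraid scrub` are then run manually (or from a timer) against that config.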

        So I would recommend this approach if you want something simple, mostly maintenance-free, while remaining fully in control.

    • master_crab 2 hours ago

      Another way to put it - My home lab has production and non-production environments.

Non-production is my Kubernetes cluster running all the various websites, AI workflows, and other cool tools I love playing with.

      Production is everything in between my wife typing in google.com and google; or between my kids and their favorite shows on Jellyfin.

      You can guess which one has the managed solutions, and which one has my admittedly-reliable-but-still-requires-technical-expertise-to-fix-when-down unmanaged solutions.

    • kotaKat 3 hours ago

      > My approach / suggestion - Understand what type are you in relation to any given technology vs what is the author's perspective.

Similarly, when I was looking at private planes, I was once asked "What's your mission?", and that question has stuck with me ever since, even if I'm never gonna buy a plane.

      One person's mission might be backing up their family photos while someone else's mission is a full *arr stack.

    • pessimizer 2 hours ago

I'm not the relentless explorer and experimenter that you're sort of patronizing with this comment. I'm somebody who knows that you can put together a NAS with an old desktop somebody will give you for free, slap Debian Stable on it, RAID5 (4 or fewer drives) or RAID6 (5 or a few more) a bunch of drives together, and throw a Samba share on the network in less than a day (minus drive-clearing time for encryption).

      It is not some sort of learning and growing experience. The entirety of the maintenance on the first one I put together somewhere between 10-15 years ago is to apt-get update and dist-upgrade on it periodically, upgrade the OS to the latest stable whenever I get around to it, and when I log in and get a message that a disk is failing or failed, shut it down until I can buy a replacement. This happens once every 4 or 5 years.

      The trick with big-name NAS is that they go out of business, change their terms, or install spyware on your computer and you end up involved in tons of drama over your own data. This guide is even a bit overblown. Just use MDADM.* It will always be there, it will always work, you can switch OSes or move the drives to another system and the new one will instantly understand your drives - they really become independent of the computer altogether. When it comes to encryption, all of the above goes for LUKS through cryptsetup. The box is really just a dumb box that serves shares, it's the drives that are smart.

I guess MDADM is a (short) learning experience, but it's not one that expires. LUKS through cryptsetup is also very little to learn (remember to write zeros to the drive after encrypting it), but it's something that turnkey solutions are likely to ignore, screw up, or use to lock you into something proprietary. Instead of getting a big SSD for a boot drive, just use one of those tiny PCIe cards, as small and cheap as you can get it. If it dies, just buy another one, slap it in, install Debian, and you'll be running again in an hour.

      With all this I'm not talking about a "homelab" or any sort of social club, just a computer that serves storage. The choice isn't between making it into a lifestyle/personality or subscribing to the managed experience. Somehow people always seem to make it into that.

      tl;dr: use any old desktop, just use Debian Stable, MDADM, and cryptsetup. Put the OS on a 64G PCIe or even a thumb drive (whatever you have laying around.)

      * Please don't use ZFS, you don't need it and you don't understand it (if you do, ignore me), if somebody tells you your NAS needs 64G of RAM they are insane. All it's going to do is turn you into somebody who says that putting together a NAS is too hard and too expensive.
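For the record, the whole stack above boils down to a handful of commands. A sketch only, assuming root, example device names, and a 5-drive RAID6; adapt before running anything:

```shell
# LUKS each raw drive, then zero the mapped device so free space is
# indistinguishable from data (the "write zeros after encrypting" step)
cryptsetup luksFormat /dev/sdb          # repeat for sdc..sdf
cryptsetup open /dev/sdb crypt0
dd if=/dev/zero of=/dev/mapper/crypt0 bs=1M status=progress

# Assemble the encrypted mappings into a RAID6 array with mdadm
mdadm --create /dev/md0 --level=6 --raid-devices=5 \
    /dev/mapper/crypt0 /dev/mapper/crypt1 /dev/mapper/crypt2 \
    /dev/mapper/crypt3 /dev/mapper/crypt4

# Plain filesystem on top; Samba just shares the mount
mkfs.ext4 /dev/md0
mount /dev/md0 /srv/nas
```

Putting LUKS under md matches the "smart drives, dumb box" idea: each disk carries its own encryption, and mdadm reassembles the set on any machine that can open them.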

  • jcalvinowens 16 minutes ago

I've been running a homebuilt NAS for a decade. My advice is going to irritate the purists:

    * Don't use raid5. Use btrfs-raid1 or use mdraid10 with >=2 far-copies.

    * Don't use raid6. Use btrfs-raid1c3 or use mdraid10 with >=3 far-copies.

    * Don't use ZFS on Linux. If you really want ZFS, run FreeBSD.

    The multiple copy formats outperform the parity formats on reads by a healthy margin, both in btrfs and in mdraid. They're also remarkably quieter in operation and when scrubbing, night and day, which matters to me since mine sits in a corner of my living room. When I switched from raid6 to 3-far-copy-mdraid10, the performance boost was nice, but I was completely flabbergasted by the difference in the noise level during scrubs.

    Yes, they're a bit less space efficient, but modern storage is so cheap it doesn't matter, I only store about 10TB of data on it.
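Concretely, the two recommendations map to commands like these (a sketch; device names and drive counts are examples, not my actual array):

```shell
# mdraid10 with 3 "far" copies: every block exists on 3 of the 4 drives
mdadm --create /dev/md0 --level=10 --layout=f3 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# btrfs equivalent: 3 copies of both data and metadata (raid1c3 needs >= 3 drives)
mkfs.btrfs -d raid1c3 -m raid1c3 /dev/sdb /dev/sdc /dev/sdd
```

Usable capacity is roughly total/3 in both cases, which is the space trade-off mentioned above.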

    I use btrfs: it's the most actively tested and developed filesystem in Linux today, by a very wide margin. The "best" filesystem is the one which is the most widely tested and developed, IMHO. If btrfs pissed in your cheerios ten years ago and you can't figure out how to get over it, use ext4 with metadata_csum enabled, I guess.

    I use external USB enclosures, which is something a lot of people will say not to do. I've managed to get away with it for a long time, but btrfs is catching some extremely rare corruption on my current NAS, I suspect it's a firmware bug somehow corrupting USB3 transfer data but I haven't gotten to the bottom of it yet: https://lore.kernel.org/linux-btrfs/20251111170142.635908-1-...

    • vardump 9 minutes ago

      I have had zero issues running ZFS on Linux for the last 10 years. (Not saying there were no issues that have annoyed or even caused data loss.)

  • neogodless 3 hours ago

    I've self-hosted web apps (typically IIS and SQL Server) for over 20 years.

    While using desktops for this has sometimes been nice, the big things I want out of a server are

    - low power usage when running 24/7

    - reliable operation

    - quiet operation

    - performance but they don't need much

So I've had dual-Xeon servers and 8-core Ryzen servers, but my favorites are a Minisforum with a mobile Ryzen quad-core and my UGREEN NAS. They check all the boxes for a server / NAS. Plus both were under $300 before upgrades / storage drives.

    Often my previous gaming desktop sells for a lot more than that ... I just sold my 4 year old video card for $220. Not sure what the rest of the machine will be used for, but it's not a good server because the 12-core CPU simply isn't power efficient enough.

    • hexbin010 3 hours ago

      The UGREEN NAS OS doesn't do encryption right?

      • neogodless 3 hours ago

        Well that isn't on my checklist!

        https://www.reddit.com/r/UgreenNASync/comments/1nr2j39/encry...

        It's possible because you can install a different OS, TrueNAS, etc. but it's not something I personally worry about.

        • kotaKat 3 hours ago

          As a DXP2800 owner with TrueNAS: TrueNAS is so nice on the 2800 for my needs.

It's even relatively straightforward: start it up with a keyboard and monitor attached, enter the BIOS, and turn off the watchdog settings. I'd also recommend turning off the onboard eMMC altogether, for the reason in the FYI below.

          Just FYI: If you blow away the UGREEN OS off the eMMC, restoring it requires opening a support ticket with them, and it's some weird dance to restore it because apparently they've locked down their 'custom' Debian just enough for 'their' hardware.

          As per someone on a Facebook group, "you CANNOT share the file as their system logs once you restore your device and flags it as used. It will fail the hardware test if the firmware has been installed again".

          • whitehexagon 3 hours ago

Thanks, I've been tempted, but wasn't sure if they work 'local only' and without an app, and this sounds like it dials home? Anyway, it seems like the long wait list for suitable HDDs will save my money for now. Plus I was a little more tempted by their Arm offering.

            • kotaKat 3 hours ago

              Ah, no -- the "watchdog" here is basically a system hardware watchdog. The OS 'feeds' the watchdog in the BIOS every X amount of time, if the dog isn't 'fed' in Y time, the computer will fully reboot itself (assuming it crashed).

              Because I've installed something that can't feed the watchdog, I just turn the watchdog off.

As for their OS install crap, I assume they're just trying to make sure you can't put it on your own hardware (sort of like how people pirate Synology DiskStation).

    • newsclues 3 hours ago

      Find a cheap low power CPU to swap in. Or tune it in BIOS to use less power (some CPUs have an eco mode that make this easy).

      Sell the gaming GPU and put in something that does video out, or use a CPU with an iGPU.

      Big gaming cases with quiet fans are quiet.

      Selling the GPU and tuning or swapping the CPU can put money in your pocket to pay for storage.

      • neogodless 3 hours ago

        It is water-cooled and whisper quiet aside from the GPU fans. So yes there are options.. but right now selling the RAM alone might pay for a whole mini-server. I'm going to try to sell it locally to a PC gamer though, get some proper use out of it!

      • Kirby64 3 hours ago

        This is literally impossible with most server grade stuff. It’ll never be as efficient as the low power modern stuff.

  • Havoc 3 hours ago

    Reusing existing hardware is a great gameplan. Really happy with my build and glad I didn't go for out of the box.

    >In general, you want to get the fastest boot drive you can.

Pretty much all NAS-like operating systems run in memory, so in general you're better off running the OS from some shitty 128GB SATA SSD and using the NVMe for data/cache/similar, where it actually matters. Some OSes are even happy to run from a USB stick, but that only works for an OS designed to accommodate it (Unraid does, I think). Something like Proxmox would destroy the stick.

    Also, on HDDs - worth reading up on SMR drives before buying. And these days considering an all flash build if you don't have TBs of content

    • embedding-shape 3 hours ago

      > Something like proxmox would destroy the stick.

      Never used proxmox myself, but is that the common issue of "logs written to flash consuming writes"? Or something else? The former is probably just changing a line in the config to fix, if it's just that.
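If it is just journal churn, one common mitigation on Debian-family systems is keeping the journal in RAM; a sketch (assumed defaults, tune sizes to taste):

```conf
# /etc/systemd/journald.conf — stop persisting logs to the boot medium
[Journal]
Storage=volatile
RuntimeMaxUse=64M
```

Proxmox reportedly also writes cluster state and RRD metrics fairly constantly, so this helps but may not eliminate the wear entirely.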

      > And these days considering an all flash build if you don't have TBs of content

Maybe we're thinking in different scales, but don't almost all NASes have more than 1TB of content? My own personal NAS currently has 16TB in total; I don't want to even imagine what that would cost if I went with SSDs instead of HDDs. I still have an SSD for caching, but the main data store in a NAS should most likely be HDDs, unless you have so much money you just have to spend it.

      • gessha an hour ago

There are enough people out there (not me) that there's a market for all-SSD NASes.

  • iammjm 3 hours ago

    “I repurposed an old gaming PC with a Ryzen 1600x, 24GB of RAM, and an old GTX 1060 for my NAS since I had most of the parts already.”

    Wouldn’t running something like this 24/7 cause a substantial energy consumption? Costs of electricity being one thing, carbon footprint an another. Do we really want such a setup running in each household in addition to X other devices?

    • rainsford 2 hours ago

In addition to energy, the biggest reason I no longer use old desktops as servers is the space they take up. If you live in an apartment or condo and don't have extra rooms, even a desktop tower sitting in a corner is a lot less visually appealing than a small NAS or mini-PC you can stick on a shelf somewhere.

    • zahlman 2 hours ago

      Sheesh. The described "old gaming PC" is much more powerful than the machine I'm using to post this.

    • dijit 3 hours ago

sure, but the CO2 emissions from manufacturing a new machine would take about 10 years of efficiency savings to offset, by which time this thinking has made you replace it.. twice.

      • leobg 3 hours ago

        ? It’s not like the machine would be custom built for him.

        Are you saying it’s fine to drive a huge truck if you’re single and just need to get around the block to buy a pack of eggs, just because the emissions are nothing compared to those required for making that smaller, more efficient car that you could buy instead?

        • onionisafruit 2 hours ago

          If your only use for a vehicle is a weekly or even daily trip around the block to buy a pack of eggs, the best environmental choice is to use a vehicle that is already manufactured. If the only vehicle available to you is a semi truck, that’s the best choice. Even over a lifetime of daily trips, the difference in emissions between the semi truck and a golf cart won’t make up for the emissions of manufacturing the golf cart and transporting it to you.

          Of course this is a contrived example that ignores the used vehicle market or the possibility of walking around the block.

        • Groxx 2 hours ago

          no, they're saying the emissions needed to create that smaller, more efficient car may vastly exceed their car's emissions during its entire lifetime under their use. so it may be a net loss.

        • ahtihn 2 hours ago

          The break-even point for the small car vs truck is much lower, so there it makes a lot more sense to switch.
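The break-even arithmetic behind this subthread is easy to sketch. Every number below is an illustrative assumption (embodied CO2, wattages, grid intensity), not a figure from the thread, and the answer swings from about a year to a decade depending on what you plug in:

```python
def breakeven_years(embodied_kg, old_watts, new_watts, kg_per_kwh=0.4):
    """Years of 24/7 operation until the new device's manufacturing
    emissions are repaid by its lower power draw."""
    saved_kwh_per_year = (old_watts - new_watts) * 24 * 365 / 1000
    return embodied_kg / (saved_kwh_per_year * kg_per_kwh)

# Old tower idling at 100 W replaced by a 30 W mini-NAS,
# assuming ~300 kg embodied CO2 for the new box:
print(round(breakeven_years(300, 100, 30), 1))  # -> 1.2
```

Double the assumed embodied carbon or halve the wattage gap and the break-even point moves out by years, which is why both sides of this argument can be right.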

        • alehlopeh an hour ago

          It’s actually fine to do that because people are allowed to make their own choices no matter how much you disagree with them.

    • embedding-shape 3 hours ago

      > Wouldn’t running something like this 24/7 cause a substantial energy consumption?

Obviously this depends on the actual usage and the parent's specific setup; lots of motherboards/CPUs/GPUs/RAM let you tune frequencies and downclock almost anything. Finally, we have no idea about the energy source in this case; it could be they live in a country with lots of wind and solar power, if we're being charitable.

      • teiferer 3 hours ago

        > could be they live in a country with lots of wind and solar power, if we're being charitable.

Because solar, wind, and hydro have no impact on the environment at all. Or nuclear.

        I wish people would understand that waste is waste. Even less waste is still waste.

        (I don't argue for fossil fuels here, mind you.)

Plus, countries have shared grids. Any kWh you use can't be used by someone else, so theirs may come from coal for all you know. It's a false rationalization.

        • embedding-shape 3 hours ago

          > Because solar wind and hydro have no impact on the environment at all. Or nuclear.

          > I wish people would understand that waste is waste. Even less waste is still waste.

So if I have 10 mining rigs connected to the state power grid, the source of that energy matters nothing for the environment? If I have a contract that guarantees the power comes 100% from solar, it has the same environmental impact as a cheaper contract that guarantees 100% coal power?

          I'm not sure if I misunderstand what you're saying, or you're misunderstanding what I said before, but something along the lines got lost in transmission I think.

          • itishappy an hour ago

Humanity currently generates about 30,000 TWh of electricity a year, with roughly 60% of that from fossil fuels. You connect 10 mining rigs. There are two options for what happens to the world's power generation:

            1. You affect the mix! Your rigs create new solar and decommission coal plants! The world is cleaner!

            2. You claim a "clean slice" of the existing mix. You feel good because you use only solar, but MRI machines still use power, so their mix is now "dirtier" without changing the actual state of the world.

            In real systems, it's probably a combination of the above. I assume our decisions only meaningfully matter by exerting market pressures over longer timescales.

          • plqbfbv 2 hours ago

            > I repurposed an old gaming PC with a Ryzen 1600x, 24GB of RAM, and an old GTX 1060 for my NAS since I had most of the parts already

            > I wish people would understand that waste is waste

            I think the point is that the configuration from the post can easily run as low as maybe 30-40W on idle, but as high as a couple hundred depending on utilization. An off-the-shelf NAS probably spikes at most in the ~35W range, with idle/spindle-off utilization in the 10W range (I'm using my 4-bay Synology DS920+ as a reference). Normally the biggest contributor to NAS energy usage is the number of HDDs, so the more you add, the more it consumes, but in this configuration the CPU, the RAM, and the GPU are all "oversized" for the NAS purpose.

            While reusing parts for longer helps a lot for carbon footprint of the material itself, running that machine 24/7/365 is definitely more CO2-heavy w.r.t. electricity usage than an off-the-shelf NAS. And additional entropy in the environment in the form of heat is still additional entropy, whether it comes from coal or solar panels.

            • intrasight an hour ago

              I will sell my old desktop as a gaming pc and use the funds to offset the cost of a new NAS.

    • happyweasel an hour ago

Does your GTX 1060 help in any way with the NAS use case?

  • powerclue 3 hours ago

Better? No, absolutely not. Capable? Without a doubt. I have a multi-bay NAS and it's like 1/6th the size of my PC case. My NAS also makes removing and replacing drives trivial. There are a million guides online for my particular NAS already, and software written with it in mind. It also draws a lot less power than my gaming PC and runs a lot quieter.

    It's difficult for me to accept it's better given all the above.

  • geor9e 14 minutes ago

    A Synology NAS is very low wattage. In a year, it saves enough electricity to pay for itself, compared to leaving my old PC on 24/7.

  • rini17 3 hours ago

After a reckoning with bitrot, I would very much recommend using something with ECC memory for a NAS. And a checksumming filesystem with periodic scrubbing that won't get silently corrupted on you.

    • mjevans an hour ago

Same, but I also discovered a wonderful bonus in the difference between true ECC DDR5 and the on-chip BS stuff.

ECC DDR5 boots insanely fast, since the BIOS can quickly verify that the memory tune passes. This is even true when doing your initial adjustment / verification against the manufacturer's spec.

    • foobarian 2 hours ago

      > checksumming filesystem with periodic scrubing

      Do you know a system that does this? Looking for this too

      • QuiEgo an hour ago

ZFS, Btrfs, or SnapRAID in a cron job (not a filesystem, but it accomplishes something similar).

        ZFS is the “gold standard” here
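A sketch of what that SnapRAID cron job might look like; paths and schedule are illustrative:

```conf
# /etc/cron.d/snapraid — nightly sync, weekly partial scrub
30 3 * * *  root  /usr/bin/snapraid sync
0  5 * * 0  root  /usr/bin/snapraid -p 8 scrub
```

The sync folds new and changed files into parity; the scrub rereads ~8% of the array each week to catch silent corruption.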

      • Palomides 2 hours ago

        btrfs and zfs

  • waswaswas an hour ago

    At San Francisco electricity prices of ~$0.50/kWh, using an old gaming PC/workstation instead of a lower power platform will cost you hundreds of dollars per year in electricity. The cost of an N100-based NAS gets dwarfed by the electricity cost of reusing old hardware.

    • lelele an hour ago

      But do you really need to keep it on 24/7? What about a wake-on-LAN solution?

      • reddalo an hour ago

You wouldn't want to burn through your hard disks' limited spin-up cycles every day.

  • MisterTea 2 hours ago

I've been building and running various home servers for years. Currently I have an eBay-special FreeBSD quad-core Xeon (based on the desktop socket) with 64GB ECC and a cheap SAS/SATA card running two ZFS arrays.

On a side note: I hate web GUIs. I used to think they were the best thing since sliced bread, but the constant churn, combined with endless menus and config options with zero hints or direct help links, led me to hate them. The best part is that the documentation is always a version or two behind and doesn't match the latest-and-greatest furniture arrangement. Maybe that has improved, but I'd rather understand the tools themselves.

  • nfriedly 2 hours ago

    My home server / NAS is essentially just my old gaming desktop + some extra hard drives. It runs Unraid with Nextcloud, Plex, and a few other services. It's great, and generally pretty low maintenance.

I'll also point out that there are a lot of folks out there who don't have very large demands when it comes to computing, and would be served perfectly well by a 5-10 year old system. Even low-end gaming (Fortnite, GTA V, Minecraft, Roblox, etc.) can run perfectly fine on a computer built with $300-400 of used parts.

  • teiferer 3 hours ago

    > old gaming PC with a Ryzen 1600x, 24GB of RAM

    "Old", right. That old PC I'm about to throw away has 2 GB of RAM.

    • sejje 2 hours ago

      I've yet to own a machine with 24G of RAM in my life.

      I've been a computer geek for 30 years.

  • RandomBacon 2 hours ago

    I'm looking for the quietest 6 bay NAS possible.

I have a be quiet! case and six 30TB HDDs, and I plan to put Ubuntu with a Plex server on an NVMe SSD and run a 4+2 ZFS layout (RAID-Z2).

    Can anyone point me to a better/quieter set-up? Thank you in advance.

  • kentiko 3 hours ago

The first thing you should consider doing with your old devices is selling them or giving them away. This helps lower the need to manufacture more hardware, prevents the hardware from becoming e-waste in a drawer, and puts pressure on the market to lower its prices. Sure, you can reuse one as a NAS, but someone probably needs it more.

    • rini17 3 hours ago

Electronics have gotten so cheap recently that selling to strangers is rarely worth the effort. Then there's the question of what OS you're going to put on an old PC. And even if the new owners are, say, only using a browser and would be okay with Linux, modern browsers need at least 8GB of memory.

      • mvx64 an hour ago

        I know I am in the minority and my uses/needs/requirements are not average, but I am perfectly fine with running Xubuntu on the following hardware: 1) 4GB 2011 Thinkpad with HDD (yeah really) and 2) 4GB 2009 Phenom desktop (was Win10 until a month ago).

By fine I mean running all of these at the same time: Firefox with several tabs, development tools, Blender, and GIMP. All snappy and fast. Even the HDD in the laptop is only an annoyance during/after a cold boot; after that it makes no difference. I've daily driven both for the past 8-15 years. The laptop sits at ~10-15W idle, and the i5 in it is a workhorse if needed.

        Of course there are uses for better hardware, I am not dismissing upgrades. But the whole modern hw/sw situation is a giant shipwreck and a huge waste of resources/energy. I've tried very expensive new laptops for work (look up "embodied energy"), and Windows 11 right-click takes half a second to respond and Unity3D can take several minutes to boot up. It's really sad.

        edit: To be honest I have to add a counter-example: streaming >=1080p60 video from YT is kind of a no-no, but that's related to the first sentence of my post.

      • catlikesshrimp 2 hours ago

        I am running Win 10 LTSC on "HP 205 G3 All-in-One Desktop PC" with 4GB RAM. Not the best experience, but plays youtube and can output to HDMI.

        I am not saying you are wrong in general.

  • xandrius 2 hours ago

Sure, if you're going to reuse something which would otherwise be thrown away or left to gather dust (foolish, but I'd imagine someone does that).

    But don't do this just so you can upgrade your current pc.

I'd vouch more for old laptops, which are generally not upgradeable but come with a built-in UPS, and with the screen removed are as thin as a notebook and can handle low usage. Then you can connect a bunch of disks, either directly or via other interfaces, and you're golden.

  • patja 2 hours ago

    Yes it is a NAS and it is cheap and convenient to repurpose hardware.

    But for anything where your data is important, isn't ECC memory still critical for a NAS in this day and age?

    • mjevans an hour ago

      Yes, and my desktops use ECC too for that reason. I only lack ECC in the places where that tremendous drawback is really difficult to avoid.

      E.g. a Steam Deck or a smartphone are both relegated to toy devices that are not for serious computing.

  • lateforwork 4 hours ago

    Unfortunately PCs have mechanical devices that give out after a few years. I am referring of course to fans. I use a Raspberry Pi 4 running Ubuntu and Samba as my NAS. It is cheap and reliable.
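    For reference, a Pi NAS like this needs only a few lines of Samba config. A minimal sketch (the share name, mount path, and user are illustrative, not from my actual setup):

    ```ini
    # /etc/samba/smb.conf — minimal single-share sketch
    [global]
       server role = standalone server
       map to guest = bad user

    [storage]
       path = /mnt/storage
       read only = no
       valid users = pi
    ```

    Add the user with `sudo smbpasswd -a pi`, restart smbd, and the share shows up as \\<pi-address>\storage.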

    • eloisius 3 hours ago

      I do too, but I’m looking to get a proper solution soon. A Pi is a pretty lousy NAS. It can’t even power two drives, so you can’t have redundancy unless you get a powered USB hub. And even then, I used one of those for a while and the drive connected to it failed prematurely. I think maybe because the power supply wasn’t stable.

      • GlibMonkeyDeath 3 hours ago

        I have a Pi4 running Raid 1 NAS with two SSD drives, and an externally powered USB hub. Unfortunately, it crashes every 6 months or so and needs a power cycle. Haven't been able to track down why, but I also suspect a power supply issue.

        Initially I naively tried to run the two drives right off the USB3 ports on the Pi, and that basically crashed within a day - but that was of course because I was exceeding the available power. An external hub and supply helped, but didn't fully fix the issue.
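        For anyone replicating this, the mirror itself is just a few mdadm commands. A sketch, assuming the two SSDs show up as /dev/sda and /dev/sdb (check with lsblk first — creating the array wipes both disks):

        ```shell
        # mirror two disks into a RAID 1 array (device names are examples)
        sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
        sudo mkfs.ext4 /dev/md0
        sudo mkdir -p /mnt/raid && sudo mount /dev/md0 /mnt/raid
        # persist the array definition and check its health
        sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
        cat /proc/mdstat
        ```

        Worth noting that mdadm over USB is exactly where unstable hub power hurts: a brown-out on one disk degrades the array silently unless you watch /proc/mdstat or set up mdadm monitoring.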

    • xandrius 2 hours ago

      I have fans from the early 00s.

      Also, a fan is like $10?

      Things more vital than that are the disks, the power supply, and the RAM.

    • strken 3 hours ago

      I've had more SD cards die on me than fans. I don't think any have died in the past five years, even.

      • gilrain 3 hours ago

        Is there a particular brand you buy? Mine always fail after about 5 years… and I try a new brand each time. Not cheap fans, either; usually $15-20 per 120mm unit.

        • baq 3 hours ago

          Noctua, but $20 might not be enough for the cheapest one depending where you live.

          I’m not buying anything else and I’m also swapping out any non-noctua fan in my parts when possible (e.g. bought a scythe cooler due to ‘interesting’ dimensional constraints and swapped its fan with a noctua one.)

        • nagisa 2 hours ago

        I always buy the cheapest PWM fans available in a nearby store (so usually Arctic) and I've never had a fan fail on me in my life.

          They almost never run 100%, though, and I have a recurring task set up to clean dust outta my filters, computers and servers.

        • type0 3 hours ago

        I tried using different SD cards with an RPi but kept having issues with a broken filesystem a few months in; it was probably caused by a bad power supply and electrical surges.

          • jhgb 3 hours ago

            You don't HAVE to boot RPi4+ from an SD card. RPi4 and RPi5 can boot from an external SSD just fine. I don't recall the last time I used an SD card in an RPi but it must have been years.
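            On a Pi 4 this is controlled by the EEPROM boot order. A sketch, per Raspberry Pi's bootloader documentation (BOOT_ORDER nibbles are read right to left: 1 = SD, 4 = USB mass storage, f = retry):

            ```shell
            vcgencmd bootloader_config        # show current EEPROM settings
            sudo -E rpi-eeprom-config --edit  # set BOOT_ORDER=0xf14 to try USB first, then SD
            ```

            After a reboot the Pi boots straight from the attached SSD with no card in the slot.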

      • PaulHoule 3 hours ago

        I can’t remember replacing fans because they stopped spinning but I have EOLed them because the bearings went bad and they started to screech.

      • jhgb 3 hours ago

        Don't use an SD card, then. It's that simple.

  • superkuh 4 hours ago

    While there are use cases for a NAS, generally, if you have a desktop PC it's far better to put the hard drives in it rather than setting up a second computer you have to turn on and run too. Putting the storage in the computer where you'll use it means it'll be much faster, much cheaper, incomparably more reliable, with a more natural UI, and it'll use less electricity than running two computers.

    Now if your NAS use case is streaming media files to multiple devices (TV set top boxes, etc), sure, NAS makes sense if the NAS you build is very low idle power. But if you just need the storage for actual computing it is a waste of time and money.

    • ubercow13 3 hours ago

      Why do you think it'd be more reliable? That's one of the main advantages of a NAS