New update: my current setup is a Dell PowerEdge T310 with 6x4TB SAS drives, a Xeon CPU, and 12GB of ECC RAM, all stock parts. No hardware RAID. 2.5Gb network card. Should I just replace the 6 drives with larger capacities? That will probably be more than $10/TB… I haven't bought the 16 drives yet; they are used 4TB SAS drives that turn out to be about $40 each.

Current storage: 8TB used out of 14TB… and lots of cold drives waiting to be copied, probably 10TB+. Is it worth copying all the cold-storage drives to the redundant NAS?

Update: budget is $200-600. The reason for the build is that I found cheap 4TB drives for almost $10/TB, so I want to use as many of them as I can.

I am trying to build my final NAS build as a beginner.

I have a 6x4TB Dell server, but it's not enough.

I am currently trying to build the final boss of my NASes: 4x16TB with TrueNAS and RAID.

I am unsure of what parts to buy as I am a complete beginner.

I found a case that can hold all 14 drives.

I need a motherboard, CPU, RAM, and a PSU.

I am on a budget, kind of.

What motherboard do you recommend? One pulled from a workstation with CPU and RAM? A server board? A normal consumer board with a consumer CPU? The motherboard should have enough PCIe slots for two SATA cards and one 2.5Gb NIC.

What CPU to run all these drives?

What RAM and how much? 16GB? 32GB? ECC or non-ECC? DDR4 or DDR3?

Power supply: 850W or more?

All parts should be able to support the 16 drives with headroom…

I would appreciate any help on this build; I want to build it as soon as possible.

Thanks

  • Shimitar@downonthestreet.eu · 4 days ago

    I wouldn't use more than 4 or 6 disks in a home environment. Especially with mechanical drives, the 24/7 power consumption would get me very worried.

    I run 4x8TB SSDs. Not cheap, but solid, low power AND low heat (even more important).

    Also consider heat dissipation: at home you most likely don't have constant temperature and humidity, so that many spinning disks can suffer from heat, and that will kill them faster.

    Longevity… with so much space I would expect to keep it running a decade or more, so factor in 10×365×24 hours of operation: energy consumed, heat dissipated, and failure rate.

    On top of that, whatever CPU and RAM you throw at it is almost meaningless; anything will work, even an Intel N100 NUC. Having enough cables and ports, on the other hand… well.

    • Something Burger 🍔@jlai.lu · 3 days ago

      20W/drive means 30×24×0.2 kWh each month for 10 drives. At 0.20€/kWh, that's about 28€/month, cheaper than a 20TB Hetzner box. That assumes all drives are always spinning at full draw; an idle drive uses more like 5W.
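
      A quick sketch of that arithmetic (the 20W figure and the €0.20/kWh rate are the assumptions above, not measurements):

      ```python
      # Rough monthly power cost for an always-spinning drive array.
      drives = 10
      watts_per_drive = 20     # active draw; idle is closer to 5 W
      eur_per_kwh = 0.20

      kwh_per_month = drives * watts_per_drive / 1000 * 24 * 30
      print(f"{kwh_per_month:.0f} kWh/month ≈ €{kwh_per_month * eur_per_kwh:.2f}")
      # -> 144 kWh/month ≈ €28.80
      ```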

      • Shimitar@downonthestreet.eu · 3 days ago

        10x4TB = 40TB can be achieved with 4x12TB drives (actually 36TB in RAID5).

        I doubt those 12TB drives use much more power each than the 4TB ones. So the 28€/month probably drops to 14€/month, and that's counting generously.

        Over 120 months (10 years) of uptime, you'd save enough to justify cutting down from 10 drives to 4.
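
        A back-of-the-envelope version of that trade-off (per-drive wattage and the electricity rate are assumptions carried over from the comment above):

        ```python
        # Compare 10x4TB vs 4x12TB power cost over a 10-year horizon.
        def monthly_eur(drives, watts_each, eur_per_kwh=0.20):
            return drives * watts_each / 1000 * 24 * 30 * eur_per_kwh

        many_small = monthly_eur(10, 20)  # ≈ €28.8/month
        few_large = monthly_eur(4, 20)    # ≈ €11.5/month, same per-drive draw
        print(f"≈ €{(many_small - few_large) * 120:.0f} saved over 10 years")
        # -> ≈ €2074 saved over 10 years
        ```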

        • Passerby6497@lemmy.world · 3 days ago

          But going with more, smaller drives gives you higher I/O and the ability to survive more concurrent failures before disaster. Losing a disk during resilvering is horrible when you're only running with 1 redundant drive normally.

          • Shimitar@downonthestreet.eu · 3 days ago

            Yes, more redundancy is good and indeed worth having. Still, five 12TB drives are probably more energy- and heat-efficient than ten 4TB ones.

            Even if I had ten 4TB drives for free I wouldn't use them. Maybe a couple for backups or cold storage, but not active 24/7 in a domestic RAID environment.

            I actually have four 6TB HDDs that I retired in favor of the four 8TB SSDs; I use two for local backup and keep two as spares to replace them when they fail.

            Four 8TB drives in RAID5 provide 24TB of usable space, which is far more than I need, and the risk of a double failure is mitigated by a proper 3-2-1 backup strategy.

            As for the higher I/O, frankly I've never felt the need. A 1Gbps home network is always the bottleneck anyway, and if you need that kind of disk throughput on your network, you're doing something wrong.

            Even many simultaneous 4K video streams would saturate your LAN before saturating your disks, unless you store uncompressed video.
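
            To put numbers on that (the stream bitrate and drive speed below are typical figures, not measurements):

            ```python
            # Sanity check: 1 Gbps LAN vs. disk throughput.
            lan_mb_s = 1000 / 8   # ≈ 125 MB/s theoretical ceiling
            stream_mbps = 80      # a heavy 4K remux runs ~50-100 Mbps
            hdd_mb_s = 180        # one modern HDD, sequential reads
            print(f"LAN fits ~{1000 // stream_mbps} such 4K streams")
            print(f"one HDD ({hdd_mb_s} MB/s) beats the LAN ({lan_mb_s:.0f} MB/s)")
            ```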

  • Q@piefed.social · 4 days ago

    That sounds like a nightmare, tbh. So many failure points, so much heat and power usage, and so many cables.

    I have 6 out of 8 bays filled and still feel like it’s a lot to worry about and manage if something fails.

  • Bloefz@lemmy.world · 4 days ago

    Ehhh, one thing I've learned over the years: it doesn't matter how much storage I buy. Within a few weeks it'll be full.

  • Onomatopoeia@lemmy.cafe · 4 days ago

    Others have mentioned power - you may want to do some math on drive cost vs power consumption. There'll be a drive size at which the higher up-front cost is worth it, because fewer larger drives consume less power than more smaller ones.

    Having built a number of systems, I'm a LOT more conscious of power draw today for things that will run 24/7. My ancient NAS draws about 15 watts at idle with 5 drives (it spins them down).

    More drives will always mean more power, so fewer but larger drives may make sense. You may pay more up front, but monthly power costs never go away.
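
    As a sketch of that break-even math (all prices and wattages below are illustrative assumptions, not quotes):

    ```python
    # Do fewer, larger drives pay for themselves in power savings?
    small = {"count": 16, "price": 40, "watts": 8}   # 16x4TB used
    large = {"count": 4, "price": 280, "watts": 8}   # 4x16TB
    eur_per_kwh = 0.20

    extra_cost = large["count"] * large["price"] - small["count"] * small["price"]
    saved_yearly = ((small["count"] - large["count"]) * small["watts"]
                    / 1000 * 24 * 365 * eur_per_kwh)
    print(f"payback ≈ {extra_cost / saved_yearly:.1f} years")  # ≈ 2.9 years
    ```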

    Also, I've built a 10-drive NAS like this (because I had the drives, the case, the mobo, and RAM). It produced a lot of heat while doing anything, and it was a significant power hog - like 200W when running. And it really didn't idle very well (I've run it with Unraid, TrueNAS, and Proxmox).

  • empireOfLove2@lemmy.dbzer0.com · 4 days ago

    ABSOLUTELY get ECC memory, 32GB or higher if you can afford it these days, as TrueNAS does benefit from a decent amount of cache space, especially with so many drives to spread data slices across.

    Realistically, unless you expect multiple concurrent users, any 4-core or better CPU from 2015 onward will be plenty to manage the array. No need for dedicated server hardware unless the price is right.

    I have a Dell PowerEdge T3 SOHO/small-business server tower that I gutted and turned into a 5x8TB config. It only has a middling 4-core Xeon E3-1225 v5 and I never get above 50% CPU usage when maxing the drives out. More CPU is needed if you're doing filesystem compression or serving multiple concurrent users.

    • Onomatopoeia@lemmy.cafe · 4 days ago

      I've never run into issues running desktop hardware without ECC as servers - and I've been doing that since the '90s.

      I just don’t think the extra cost is worthwhile - I’m not running systems/services that will have catastrophic failures without ECC (or have weird bitflips that would corrupt some transaction).

  • Humanius@lemmy.world · edited · 4 days ago

    There's no real clarification of what the budget is, so I will assume it's tight.
    My advice assumes you are looking for the best bang for the buck.

    The case looks like a good option, assuming those are 3.5-inch bays.
    It should give you plenty of room for future expansion if you want it.

    RAM prices are pretty nuts right now, so I would definitely not go balls to the wall with 128GB of RAM.
    16GB should be more than plenty for a NAS server. Maybe you can even get away with 8GB?
    I'm using 16GB of DDR3 in my own NAS server (which also runs Jellyfin and Nextcloud) and it's running fine.

    Speaking of DDR3… Have you considered buying your CPU, motherboard and RAM second hand?
    From what I hear the prices of DDR3 RAM are not nearly as elevated as those of DDR4 and DDR5 RAM, and DDR3 is plenty sufficient for a simple NAS.

    Be sure not to skimp on the power supply. Most consumer power supplies are not built for running a NAS's worth of HDDs.
    I'm running a Corsair RM550x in my server, which can supply 130W on the 5V rail.
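
    A rough way to sanity-check a 5V rail against a drive stack (the ~0.7A-per-drive logic draw is a typical datasheet figure, an assumption - check your drives' specs):

    ```python
    # Estimate the 5 V rail load for a stack of 3.5" HDDs.
    rail_watts = 130        # RM550x 5 V rail capacity
    amps_per_drive = 0.7    # typical 5 V logic draw per drive (assumption)
    drives = 16
    load = drives * amps_per_drive * 5
    print(f"5V load ≈ {load:.0f} W of {rail_watts} W ({load / rail_watts:.0%})")
    # -> 5V load ≈ 56 W of 130 W (43%)
    ```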

    Good luck with your server build!

    • frongt@lemmy.zip · 4 days ago

      I would seek the best price per terabyte while still allowing redundancy.

      • hesh@quokk.au · 4 days ago

        True, but I would factor in some kind of cost/longevity penalty for an increasing number of drives. Even if 16x4TB is a bit cheaper than 4x16TB today, will it die faster?

        • frongt@lemmy.zip · 4 days ago

          At these scales, I don’t think it’s measurable, if statistically significant at all.

          In any case, you should always be ready to replace a drive that fails. I buy used because they’re significantly cheaper (or at least they used to be) and I’ve never had any major failures.

          • Onomatopoeia@lemmy.cafe · 4 days ago

            And while more drives means more opportunities for failure, it also means that when a failed drive is replaced, it's likely from a different manufacturing period.

            I have a 5-drive NAS in which I've been upgrading a single drive every 6 months. This has the benefit of slowly increasing capacity while also ensuring the drives are of different ages, so they're less likely to fail simultaneously. (Now I'm waiting for prices to come back down, dammit.)

  • vane@lemmy.world · 4 days ago

    It's better to buy 4x16-20TB drives and expand storage later instead of buying 16x4TB drives. Also, 16 3.5-inch HDDs draw around 200W of power on their own.
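
    And that ~200W is steady-state; at power-on, 3.5-inch drives briefly pull far more while spinning up (the per-drive figures below are typical datasheet values, an assumption):

    ```python
    # Steady-state vs. spin-up draw for a 16-drive array (typical values).
    drives = 16
    steady_w = 12.5   # active draw per 3.5" HDD (assumption)
    spinup_w = 30     # ~2 A on the 12 V rail at spin-up (assumption)
    print(f"steady ≈ {drives * steady_w:.0f} W, "
          f"spin-up peak ≈ {drives * spinup_w:.0f} W without staggering")
    # -> steady ≈ 200 W, spin-up peak ≈ 480 W without staggering
    ```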

  • Decronym@lemmy.decronym.xyz [bot] · edited · 4 days ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    NAS: Network-Attached Storage
    PSU: Power Supply Unit
    RAID: Redundant Array of Independent Disks for mass storage
    ZFS: Solaris/Linux filesystem focusing on data integrity


  • Ferrous@lemmy.ml · 4 days ago

    Hey, you basically defined my system.

    TrueNAS SCALE machine running 4x16TB drives. I use a cheap Rosewill 4U server rack case with hot-swap drive bays in front. Big plus.

    The brain is an AMD 5950X on an ASRock X570 Steel Legend with 128GB of the cheapest Crucial DDR4 ECC I could find. Also running an RTX 2080 for Jellyfin transcoding.

    My consumer mobo is the bottleneck. Given that my end goal is a 10Gb NIC and an LSI card for more SATA ports, I'm going to have to get creative with M.2 slots. I might plug a 10Gb NIC into an M.2 slot.

    The PSU is a platinum-rated 1kW Fractal unit. Way overkill, but the high efficiency is key.

    You'll notice my build uses a lot of gaming parts - I simply harvested my old parts when I upgraded my gaming PC. Despite this, it still idles under 200 watts. My point is not that you should seek out gaming parts, but if you happen to have any on hand, they can be effectively leveraged given the price increases on new parts.

    The biggest thing is: use ECC. This is non-negotiable for your setup. ECC saved me a couple of weeks ago when my 5950X started randomly crapping out; so far, no issues after switching to a fixed voltage. ZFS and ECC go together like peas in a pod.

  • Kairos@lemmy.today · 4 days ago

    Just in case you don't know: most drives aren't rated for running with this many in one case (check the datasheet's drives-per-enclosure figure).

  • KairuByte@lemmy.dbzer0.com · 4 days ago

    Honestly, you might want to look into proper server hardware. There are many options out there that support dozens of drives, assuming you're willing to go with a blade. Even if you explicitly want a tower, server hardware is where you're going to get the best support.

    You'll most likely also want to increase the size of your drives. Assuming you're being smart and using RAID, you're going to lose a chunk of that raw storage to redundancy.
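
    For a sense of scale, here's the parity overhead for the two layouts discussed in this thread (RAIDZ2, i.e. two parity drives, is chosen as an example; real usable space is a bit lower after filesystem overhead):

    ```python
    # Approximate usable capacity after two-drive parity (e.g. RAIDZ2).
    def usable_tb(drives, tb_each, parity=2):
        return (drives - parity) * tb_each

    print(usable_tb(16, 4))   # 16x4TB -> 56 TB usable
    print(usable_tb(4, 16))   # 4x16TB -> 32 TB usable
    ```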