EDIT 15/07/2015: New update here. The finished result:

This is a project I've been planning for years, but have only recently had the funds to go ahead (and a pressing need to solve a problem). It is anything but unique - I'm not exactly breaking new ground here! But I do think it's interesting enough to bother sharing, so I decided to write about it in case it might be useful to someone else. While there are lots of people on here who can do this stuff in their sleep, there are still more who are on a learning curve, devouring new information as fast as they can find it. This is in no way a guide to how anyone else should design their own storage solution - it's just what I've done, and hopefully sharing it will be helpful to someone out there. Besides, writing is a good way to clarify my thinking.

I've been running out of storage for a long time, but I haven't had the funds to create (what I consider to be) a good, reliable storage solution until now, so I've been biding my time and reading. Following a recent Windows screwup that damaged the file system, now it's time to upgrade.

Currently I'm running a workstation with a Samsung 850 Pro SSD for the OS and an 8-port Areca RAID controller running eight 1TB HDDs in RAID5. There is no user data stored on the SSD - everything is on the RAID array. I could completely lose the SSD without breaking a sweat - I can have Windows running on a replacement drive in less than an hour if I have to install from scratch (or about 10 minutes restoring from a disk image). The Areca controller scrubs the array against parity data, so I thought I was fairly well protected against common causes of data loss. The modest size of the HDDs means rebuilding the array doesn't take too long, limiting exposure due to a failed drive.
And for a workstation, it's not a bad system (RAID6 would be better, but the original array was set up with five disks and I added three more later - you can't add a second parity disk without trashing the array and starting from scratch).

What I've come to realise, though, is that keeping data on your PC introduces significant risk. As I mentioned, Windows screwed up for an unknown reason late last year - I booted the PC and chkdsk popped up to tell me there were errors on the drives. I didn't think too much of it, so I allowed it to scan them. About 18 hours later I was back up and running, but within a few minutes I noticed a number of 'zero size' files. I told chkdsk to run another thorough scan and left it for another day. When it came back up, most of the files I cared about had been restored, but quite a lot hadn't. Fortunately all my personal data was recently backed up, but I didn't have enough storage for all the movies I had on there; I estimate about 5% was lost. Thorough sector-scanning tools were unable to recover more than a handful of photos - no large files were recoverable.

With client systems I push for full, live data replication between physical boxes (offsite where possible), incremental quarter-hourly backups to internal backup drives and nightly backups to external storage or offsite systems. That's on top of mirrored arrays, of course. I haven't had the luxury of enough disposable funds to afford 7TB of external backup capacity, so I had to live with the tradeoff of backing up the much smaller amount of data that was truly irreplaceable and relying on the array never failing to keep the movies safe (they can always be ripped again anyway, it's just annoying).

The lesson here is that data storage must be separated from the PC - if Windows glitches and it affects the file system, you could suffer significant data loss, no matter how robustly you design your storage subsystem.
Basically I should follow my own advice to clients - pony up the cash to do it right, because the risk of data loss is simply unacceptably high otherwise.

I've been following various NAS trends from the sidelines for a few years and have been impressed with how far FreeNAS has come. I'm a Microsoft technology professional, so I'm very familiar with everything Windows, but ReFS isn't mature enough for my liking (and performance is terrible for 'parity' configurations, so you need 100% redundant mirroring to avoid a performance bottleneck), and I want self-healing features such as those provided by ZFS. (A long time ago I had a faulty drive that caused slow, unnoticeable bit rot which wasn't picked up until long after the drive had been replaced. Backups had copied the bad data too. Fortunately it was nothing I couldn't live without or recreate, but I've never trusted a single HDD with data since.) For similar reasons to ReFS, BTRFS is also off the table due to immaturity.

Now, I'm new to FreeBSD - in fact, I have very little Linux or Unix knowledge or experience generally - what I do know has been garnered from riding shotgun while other professionals do their thing (usually interfacing with Microsoft systems I am somehow involved with). So I am far from an expert in this area - if you're new to FreeNAS yourself, take the following as a basic introduction only. Do lots of reading, and once you've exhausted everything you can figure out on your own, go and ask some stupid questions of people who have lots of experience and knowledge, such as on the FreeNAS forums.

Now, if you're wondering why I'm not buying an off-the-shelf NAS, it's because either the economics don't work or I can't find something that will do the job. Even an old Atom or Celeron system with 8GB of RAM and capacity for just four HDDs can set you back $1000 before you've added storage.
If you want something as powerful as I've designed here, you're talking two to three times my component cost from a storage vendor. You're also at the mercy of the manufacturer for support if things don't work the way you expect - I've supplied a few dozen prebuilt NAS systems over the years, and while some have worked fine, others have had problems from day one and the manufacturer has been utterly incapable of providing (what I would consider to be) reasonable support. Sorry, if I buy your gear, I want you in the ring with me all the way. Tell me I'm 'on my own' when things get hard (and the device is literally incapable of performing functions that are printed on the outside of the box) and you have lost me as a customer and VAR forever.

As to my particular build requirements: I need at least two disks of redundancy, it needs to be highly available and extremely reliable, it needs to be of adequate size for the data I have plus at least three years of growth, and I want it to be reasonably fast for my use, fairly quiet and relatively low power.

For those new to this, ZFS needs lots of RAM. (Note that there's some contention that the amount of RAM recommended by iXsystems and the FreeNAS community is, strictly speaking, significant overkill for a home NAS. If you're running a 'pure' NAS with no extra jails or features, you may be able to cut back on RAM relatively safely. However, if you're running additional services and features, numerous jails and things like Plex, I would recommend erring on the side of throwing more RAM at it rather than less. Given that even 32GB of ECC memory will set you back around the cost of a single 6TB HDD, IMO that is money very well spent.)

Because reliability of data is crucial, you want ECC memory (don't get me started on this - if you want to argue about the merits of ECC memory for reliable data storage, go and do some reading, because you're wrong. ECC is fundamental.
No ECC means you might assume your data is good, but you don't (and can't) know. I need to know).

With the Avoton series of Atom processors, Intel brings server features in a very low power (and relatively inexpensive) package. Most importantly, that means ECC server memory on Mini-ITX. I'm also looking for something with sufficient SATA ports onboard and an Intel NIC. (If you're unsure why an Intel NIC is crucial: for fast, highly reliable networking that won't keep dragging at the CPU, other manufacturers sometimes get it right, but Intel is extremely reliable in this space - and Realtek is a sad joke.)

So the first piece of this puzzle is the ASRock C2750D4I, an eight-core Avoton microserver platform. The C2550D4I would be adequate if I were only going to use this for files, but the box may be required to push out streaming media (I'll see how that goes once it's built and I can run it through its paces). The C2750D4I is up to the task, while the C2550 is questionable on that front; for the extra $150 I'm erring on the side of 'can do'. Being a microserver platform, it has masses of SATA connectors, and a crucial six of them are on Intel SATA controllers. It also includes dual Intel i210 NICs, which support link aggregation if I need it - I have a layer 2 switch, so I can enable this if required. It also has a BMC (baseboard management controller) in the form of a built-in ASPEED AST2300 - most workstations I deploy and maintain have vPro and all servers we manage have BMCs, and I don't ever want to go back to the days of supporting boxes without out-of-band management. I would have lived without it if my 'best build' for this system couldn't include it, but having it is a huge advantage. Building on a server platform like this (that is surprisingly cheap) is hugely rewarding and delivers tremendous confidence.

For drive selection, there is only one name I am really interested in - HGST.
Ideally, if money were no object, I'd use lots of 2.5" disks - power draw is significantly lower (averaged per TB) while throughput is vastly improved with the right SAS backplane. But sadly money is a significant consideration, so I have to work with the far more reasonably priced 3.5" options. HGST's 8TB enterprise drives are very exciting, but needing a minimum of six disks, the enterprise drives utterly blew my budget. More realistically, their new 'NAS' series is very affordable and perfectly adequate for a home NAS, and I managed to squeeze enough out of my funding pool to go to the 6TB model. They're still fast enough and should be very reliable, while 24TB should be more than sufficient for the next three to five years given the rate at which my data has grown over the past decade.

As to why six disks: with two parity drives (RAIDZ2), that leaves the data striped across four disks. Earlier versions of ZFS had 'sweet spot' configurations with a power-of-two number of data disks, so RAIDZ1 performed best with 5 or 9 disks and RAIDZ2 with 6 or 10. Apparently the latest versions of OpenZFS mitigate the issues that required these optimisations, but there's very little downside to erring on the side of caution and sticking with the original recommendations. Besides, six disks worked out quite well when determining the budget.

As I mentioned earlier, ZFS wants lots of RAM to play with (allegedly - if you want to understand why, I direct you to the FreeNAS manual and forums, where some great introductory guides are available). Additional services and features like Plex also come with significant memory overhead. This board allows up to 64GB to be installed, which is fantastic, but 16GB modules blow the budget right now.
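For the curious, the sizing arithmetic above is easy to sanity-check from a command line. This is only a rough sketch - it ignores ZFS metadata and free-space overhead and the TB-vs-TiB difference, and the "1GB of RAM per TB" figure is the oft-quoted community rule of thumb, not a hard requirement:

```shell
# Usable capacity of a RAIDZ vdev: (disks - parity) x disk size.
DISKS=6 DISK_TB=6 PARITY=2
USABLE=$(( (DISKS - PARITY) * DISK_TB ))
echo "usable capacity: ${USABLE} TB"     # 24 TB across four data disks

# Rule-of-thumb RAM sizing: ~1GB per TB of storage on top of an
# 8GB baseline (a guideline only - see the FreeNAS forums).
echo "suggested RAM: $(( 8 + USABLE )) GB"
```

Reassuringly, both numbers land right where this build does: 24TB usable and 32GB of RAM.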
32GB should be perfectly adequate for what I'm doing anyway - this thing won't be handling lots of parallel workloads; it will mostly be reading and writing sequential data for one or two users at a time. I've gone for Crucial here, as it is the most reliable brand on the QVL for this board.

At this point I should probably consider what chassis to house this in. There are surprisingly few Mini-ITX cases designed for NAS duty. Originally I wanted to stick this in a 1RU 19" rackmount chassis with hot-swap HDD bays, but I was put off seconds later when I looked up a price list and realised what I'd be up for. While a nice idea, that was an unnecessary waste of good build money at this point. I have found some 2RU chassis that would do the job, but none with the correct number of drive bays, nor the flexibility to let me customise them (rack chassis and servers aren't designed with the sort of flexibility that PCs have). So far, the only thing I've found that allows me to build precisely what I'd like is a 4RU chassis, and that's just stupidly large. So I'm going to keep looking, and I may very well transplant everything into a 2RU chassis down the track if I can find the right configuration.

(Just a note here about cooling and noise: six 6TB HDDs can get rather warm, and it's important to keep drives as cool as possible for longevity. Stacking them in a 2RU rack chassis means you'll have to run small fans at very high speed to get adequate airflow through it - and if you can't cool them sufficiently, you'll be fighting a losing battle against heat soak. In short, don't build a low-profile rack NAS unless you have a rack in a cooled server room to stick it in. Seriously. I've seen people run repurposed rack server gear in their house, and it sounds like they're jet hobbyists...)

So with a rack chassis off the table, I've settled on the Lian-Li PC-Q35B.
With five 5.25" bays I can fit up to eight 3.5" drives in hot-swap bays. I'll only need six of those, so a five-disk mobile rack plus an additional single-bay hot-swap tray gives me room for the six 3.5" drives, leaving one 5.25" bay spare (which I'll come back to shortly).

For the PSU, I'm in overkill territory, but for the extra $50 over what I was going to spend anyway that's OK - I'm using a SeaSonic SS-660XP2 F3 80Plus Platinum 660W PSU (I would happily have gone down to 450 or 500W, if they made such a thing). As one of the most rock-stable PSU models I've ever seen, this NAS will have nothing but clean, stable power I can bank on. At a bit over half the price of just one of the HDDs, I'm quite happy with the cost-to-confidence ratio there.

Finally, there are the OS drive(s). When installing FreeNAS you can select more than one drive and it will create a mirror on the second drive. This provides significant reliability in case one drive fails - you don't have to reconfigure a fresh FreeNAS install from scratch. Couldn't be easier. As you may know, FreeNAS is usually run from a USB flash drive (or in our case, two). With recent versions of FreeNAS you can now happily select an SSD (or two) instead, which is great, but as FreeNAS doesn't take up much space (16GB is more than sufficient, including future upgrades) it is a bit of an expensive prospect for almost zero performance improvement in a running box. You see, the reason FreeNAS is run from USB flash in the first place is that the OS resides entirely in memory after initial bootup - there is no drive access to speak of unless you make permanent configuration changes or upgrade the OS. So day-to-day performance isn't going to suffer running from USB flash. Upgrades will - instead of a few minutes, they might take twenty minutes to half an hour, apparently.
But if you can live with that, the only other tangible advantage is the greater reliability of SSDs, which, while a substantial difference, still isn't something you're likely to ever need to worry about. It's a difference of maybe $20 for two 16GB flash drives (up to maybe $60 for two top-end 32GB drives) vs $115 to $150 for a pair of entry-level SSDs. That's a heck of a difference. Were we still at parity with the USD, a pair of SSDs would be far more palatable, but prices have gone up 30 to 35%.

Oh, and in case you're wondering, FreeNAS won't boot from USB 3.0 at this point - in fact, USB 3.0 is not considered well enough supported even for writing to external backup drives. USB 3.0 flash drives will work fine in USB 2.0 ports, though (just not the other way around), although there are lots of reports of the installer being finicky on USB 3.0 drives - I personally had an issue with my Kingston HyperX drive, where an Apacer USB 2.0 drive was able to run the installer successfully. So be warned - USB 3.0 gear should be avoided unless you do your research and find a drive that lots of FreeNAS users have had success with.

So now I'm going to go and do the opposite of what I suggested. After evaluating my options for a few days I found a pair of Transcend SSD370 64GB drives for just $112. While that's a hell of a lot more than $30, I was actually leaning towards a pair of Corsair 32GB flash drives at closer to $35 each (for reliability - performance would be identical to pretty much everything else maxing out USB 2.0), so that $80 difference is really more like $40, and in my case it's easily justified by the time saved: FreeNAS was upgraded no less than four times in just one week in February, which would have knocked my NAS offline for two to three hours all up if I was upgrading USB 2.0 flash drives. With SSDs I slash that time dramatically, so I can justify the cost difference.
You may disagree entirely - that's why I put forward both arguments, so you can see why you might prefer to go one way or the other. Neither is wrong - it depends on your own circumstances, choices and budget.

You may be wondering where SSDs come in for caching - if you know about ZFS already, you may be familiar with the terms ZIL and L2ARC. For the interested reader wanting to understand the use of ZFS in a large multi-user environment or as a database store, I direct you to read up on these concepts thoroughly. For the casual reader, all you need to know is that the ZIL (ZFS Intent Log) is effectively a write cache, the L2ARC (Level 2 Adaptive Replacement Cache) is a random read cache, you need a discrete drive for each (i.e. you never put them on the same SSD, or on an SSD that shares functions with anything else), and both will provide precisely zero benefit for my usage (and indeed nearly every home NAS is in the same boat). Yes, for multi-user environments hammering the NAS at the same time, and for database servers, these are indispensable technologies. For a home NAS they are a complete waste of time and money.

One extra disk that has snuck into the build is the backup drive. I have around 3.5TB of non-movie data to back up. Choosing a 6TB drive gives me plenty of room to grow without maxing it out any time soon. It also means the drive isn't going to suffer terrible performance penalties for being too full. As I mentioned, FreeNAS doesn't play nicely with USB 3.0 yet, and I want an easily removable drive for this function. I could have used another hot-swap bay and been satisfied with pulling the bare drive out in an emergency, but once I decided to use the two SSDs instead of USB flash drives I no longer had a free bay to use. eSATA seems the way to go, and is recommended by several people in the FreeNAS community.
An eSATA card and caddy aren't too expensive, so I'll give them a shot and see how they go - if everything works reliably, then great! Hopefully in the not too distant future USB 3.0 will be properly supported and I can fall back on that if necessary.

Because I'm intending to do partial backups to this external drive, I'm not going to replicate snapshots of the whole zpool. Instead, I'm going to script rsync to back up and synchronise specific folders (essentially everything other than the Video folder). The first run will take hours, but after that each run should take mere minutes. I'm only going to schedule it to sync once a day at about 3am - while I could schedule it to sync every hour, which would keep each sync small and quick, it is extremely doubtful that anything I'm doing will be so unrecoverable or sensitive that I need such hyper-paranoid backups. I will also minimise wear on the disk by having it spin up only once a day. (There is research showing that HDD life is extended by reducing the number of times a drive is spun down, so drives that remain online 24/7 can actually last longer than those in PCs that come on and off throughout the day and take a break at night. I don't really want that drive operating constantly, though, so having it come on just once a day to sync and then spin down is the best compromise.)

The final consideration isn't really part of the build at all, but it is fairly critical: the UPS. Any and every data storage device should derive its power from a UPS, particularly those using some sort of RAID with write caching. Ideally you'll have a high-efficiency UPS providing clean power with excellent filtering circuitry. I highly recommend an 'online' or 'double conversion' UPS - these convert AC to DC, feed it through the battery circuit, and then convert the DC back to AC, resulting in a clean waveform that doesn't fluctuate.
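As a footnote to the backup plan above, the rsync job I described can be sketched roughly as follows. The folder names and pool paths are illustrative assumptions rather than my actual layout, so this demo runs against throwaway directories under /tmp instead of the real pool:

```shell
#!/bin/sh
# Demo of the nightly sync: mirror everything except the Video folder.
SRC=/tmp/nas_backup_demo_src
DEST=/tmp/nas_backup_demo_dest
rm -rf "$SRC" "$DEST"
mkdir -p "$SRC/Documents" "$SRC/Photos" "$SRC/Video" "$DEST"
echo "irreplaceable" > "$SRC/Documents/notes.txt"
echo "rippable again" > "$SRC/Video/movie.mkv"

# -a preserves permissions and timestamps, --delete keeps the backup an
# exact mirror of the source, --exclude skips the Video folder.
rsync -a --delete --exclude='Video/' "$SRC"/ "$DEST"/

ls "$DEST"

# On the NAS itself the same command would run as a daily cron job at
# 03:00 against the real paths, e.g. (hypothetical pool names):
#   rsync -a --delete --exclude='Video/' /mnt/tank/ /mnt/backup/
```

FreeNAS can schedule exactly this sort of job from its web GUI (under Tasks), so the cron entry never needs to be edited by hand.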
Online UPSes are also far less damaging to batteries - an 'online' UPS will give you two to three times the battery lifespan that a 'line-interactive' UPS will. So while they are initially quite expensive, they pay off in the long run (well, as long as you stick to entry-level models - you'll never get your money back on a 10,000VA rack model!).

So here's my component list for easy reference:

- ASRock C2750D4I Mini-ITX (Avoton Atom C2750)
- Lian-Li PC-Q35B case
- Crucial 8GB DDR3 ECC 1600MHz 1.35v RAM x4 (32GB)
- HGST Deskstar NAS 6TB HDD x6 (RAIDZ2)
- WD Caviar Green 6TB (external backup drive)
- Transcend SSD370 64GB x2 (OS drives)
- SeaSonic SS-660XP2 F3 80Plus Platinum 660W PSU
- SATA/SAS HDD mobile rack, 5 hot-swap drive bays in 3 x 5.25"
- Welland EZStor ME-751J trayless HDD rack, 5.25" bay for 3.5" SATA
- Welland EZStor ME-240 dual 2.5" SATA mobile rack
- Vantec NST-330SU3-BK NexStar 3.5" eSATA external enclosure
- Astrotek eSATA PCI-Express card
- 5.25" to 2.5" bay adapter

And now for some photos (or it didn't happen, right?). Yes, my photos are lame. I'm only putting them here so you can see what it ended up looking like. This is definitely a build that works perfectly well without photos, but I anticipate that if I didn't include them it would be the first question asked. So, photos provided. Don't flame me over them - I don't care.

Here's the collection of parts:

And stripped bare of their packaging:

Assembled, but prior to PSU installation:

With PSU (not much to see here):

Drives ready to go:

All locked up and ready for action!

Happy to answer any questions / receive feedback etc., although I'll readily admit that questions about FreeNAS will most likely be referred to their support community - I'm not an expert on the topic!