New FreeNAS Build

Discussion in 'Storage & Backup' started by SiliconAngel, Apr 11, 2015.

  1. SiliconAngel

    SiliconAngel Member

    Joined:
    Jun 27, 2001
    Messages:
    627
    Location:
    Perth, Western Australia
    EDIT 15/07/2015: New update here.

    The finished result:
    [​IMG]

    This is a project I've been planning for years, but have only recently had the funds to go ahead (and a pressing need to solve a problem). It is anything but unique - I'm not exactly breaking new ground here! But I do think it's interesting enough to bother sharing, so I decided to write about this in case it might be useful to someone else. While there are lots of people on here who can do this stuff in their sleep, there are still more who are on a learning curve, devouring new information as fast as they can find it. This is in no way a guide to how anyone else should design their own storage solution - it's just what I've done, and hopefully sharing it will be helpful to someone out there. Besides, writing is a good way to clarify my thinking ;)

    I've been running out of storage for a long time, but I haven't had the funds to create (what I consider to be) a good, reliable storage solution until now, so I've been biding my time and reading. Following a recent Windows screwup that damaged the file system, it's now time to upgrade. Currently I'm running a workstation with a Samsung 850 Pro SSD for the OS and an 8-port Areca RAID controller running eight 1TB HDDs in RAID5. There is no user data stored on the SSD - everything is on the RAID array. I could completely lose the SSD without breaking a sweat - I can have Windows running on a replacement drive in less than an hour if I have to install from scratch (or about 10 minutes restoring from a disk image). The Areca controller scrubs the array against parity data, so I thought I was fairly well protected against common causes of data loss. The size of the HDDs means rebuilding the array doesn't take too long, limiting exposure from a failed drive. And for a workstation, it's not a bad system (RAID6 would be better, but the original array was set up with five disks and I added three more later - you can't add a second parity disk without trashing the array and starting from scratch).

    What I've come to realise, though, is that keeping data on your PC introduces significant risk. As I mentioned, Windows screwed up for an unknown reason late last year - I booted the PC up and chkdsk popped up to tell me there were errors on the drives. I didn't think too much of it, so allowed it to scan them. About 18 hours later, I was back up and running, but within a few minutes I had noticed a number of 'zero size' files. I told chkdsk to run another thorough scan and left it for another day. When it came back up, most of the files I cared about had been restored, but quite a lot hadn't. Fortunately all my personal data was recently backed up, but I didn't have enough storage for all the movies I had on there; I estimate about 5% was lost. Thorough sector scanning tools were unable to recover more than a handful of photos - no large files were recoverable.

    With client systems I push for full, live data replication between physical boxes (offsite where possible), incremental quarter-hourly backups to internal backup drives and nightly backups to external storage or offsite systems. That's on top of mirrored arrays, of course. I haven't had the luxury of adequate disposable funds to afford 7TB of external backup capacity, so I had to live with the tradeoff of backing up the much smaller amount of data that was truly irreplaceable and relying on the array never failing to keep the movies secure (which can always be ripped again anyway, it's just annoying). The lesson I've had here is that data storage must be separated from the PC - if Windows glitches and it affects the file system, you could have significant data loss, no matter how robustly you design your storage subsystem. Basically I should follow my own advice to clients - pony up the cash to do it right because the risk of data loss is simply unacceptably high otherwise.

    I've been following various NAS trends for a few years from the sidelines and have been impressed with how far FreeNAS has come. I'm a Microsoft technology professional, so I'm very familiar with everything Windows, but ReFS isn't mature enough for my liking (and performance is terrible for 'parity' configurations, so you need 100% redundant mirroring to avoid a performance bottleneck), and I want self-healing features such as those provided by ZFS (a long time ago I had a faulty drive that caused slow, unnoticeable bit rot which wasn't picked up until long after the drive had been replaced. The backups had copied the bad data too. Fortunately it was nothing I couldn't live without or recreate, but I've never trusted a single HDD with data since). For similar reasons to ReFS, BTRFS is also off the table due to immaturity.

    Now, I'm new to FreeBSD - in fact, I have very little Linux knowledge or experience generally - what I do know has been garnered from riding shotgun while other professionals do their thing (usually interfacing with Microsoft systems I am somehow involved with). So I am far from an expert in this area - if you're new to FreeNAS yourself, take the following as a general guide to give you a basic introduction only. Do lots of reading, and once you've exhausted everything you can figure out on your own, go and ask questions of people who have lots of experience and knowledge, such as on the FreeNAS forums.

    Now, if you're wondering why I'm not buying an off-the-shelf NAS, it's because either the economics don't work or I can't find something that will do the job. Even an old Atom or Celeron system with 8GB of RAM and just four HDD bays can set you back $1000 before you've added storage. If you want something as powerful as I've designed here, you're talking two to three times my component cost from a storage vendor. You're also at the mercy of the manufacturer for support if things don't work the way you expect - I've supplied a few dozen prebuilt NAS systems over the years, and while some have worked fine, others have had problems from day one and the manufacturer has been utterly incapable of providing (what I would consider to be) reasonable support. Sorry, if I buy your gear, I want you in the ring with me all the way. Tell me I'm 'on my own' when things get hard (and the device is literally incapable of performing functions that are printed on the outside of the box) and you've lost me as a customer and VAR forever.

    As to my particular build requirements: I need at least two disks of redundancy; it needs to be highly available and extremely reliable; it needs to be of adequate size for the data I have plus at least three years of growth; and I want it to be reasonably fast for the use I will put it to, fairly quiet and relatively low power.

    For those new to this, ZFS needs lots of RAM (note that there's some contention that the amount of RAM recommended by iXsystems and the FreeNAS community is, strictly speaking, significant overkill for a home NAS. If you're running a 'pure' NAS with no extra jails or features, you may be able to cut back on your RAM requirements relatively safely. However, if you're running additional services and features, numerous jails and things like Plex, I would recommend erring on the side of more RAM rather than less. Given that even 32GB of ECC memory will set you back about the same as a single 6TB HDD, IMO that is money very well spent). Because reliability of data is crucial, you want ECC memory (don't get me started on this - if you want to argue about the merits of ECC memory for reliable data storage, go and do some reading, because you're wrong. ECC is fundamental. No ECC means you might assume your data is good, but you don't (and can't) know. I need to know).

    With the Avoton series of Atom server processors, Intel brings server features in a very low power (and relatively inexpensive) package. Most importantly, that means ECC server memory available on Mini-ITX. I'm also looking for something with sufficient SATA ports onboard and an Intel NIC (if you're unsure why an Intel NIC is crucial: for fast, highly reliable networking that won't keep dragging at the CPU, other manufacturers sometimes get it right, but Intel is extremely consistent in this space, and Realtek is a sad joke).

    So the first piece of this puzzle is the Asrock C2750D4I, an eight-core Avoton microserver platform. The C2550D4I would be adequate if I was only going to use this for files, but it may be required to push out streaming media (I'll see how that goes once it's built and I can run it through its paces). The C2750D4I is up to the task, while the '2550' is questionable on that front. For the extra $150 I'm erring on the side of 'can do'.

    Being an Intel microserver platform, it has masses of SATA connectors, and a crucial six of them are on Intel SATA controllers. It also includes dual Intel i210 NIC ports that support link aggregation if I need it - I have a layer 2 switch, so I can enable this if required. It also has a BMC (Baseboard Management Controller) in the form of a built-in ASPEED AST2300 - most workstations I deploy and maintain have vPro and all servers we manage have a BMC, and I don't ever want to go back to the days of supporting boxes without out-of-band management. I would have lived without it if my 'best build' for this system couldn't include it, but having it is a huge advantage. Building on a server platform like this (that is surprisingly cheap) is hugely rewarding and delivers tremendous confidence.
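
    If I do end up enabling aggregation, FreeNAS drives it through the web GUI, but for the curious it's FreeBSD's lagg(4) underneath. A minimal rc.conf sketch (interface names assumed, and the switch ports need to be configured for LACP as well) looks something like this:

    Code:
    # LACP aggregation of the two onboard Intel ports (igb0/igb1 assumed names;
    # FreeNAS manages this via the GUI rather than rc.conf)
    ifconfig_igb0="up"
    ifconfig_igb1="up"
    cloned_interfaces="lagg0"
    ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 DHCP"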

    For drive selection, there is only one name I am really interested in - HGST. Ideally, if money was no object, I'd use lots of 2.5" disks - power draw is significantly lower (averaged per TB) while throughput is vastly improved with the right SAS backplane. But sadly money is a significant consideration, so I have to work with the far more reasonably priced 3.5" disk options.

    Hitachi's 8TB enterprise drives are very exciting, but needing a minimum of six disks, the enterprise drives utterly blew my budget. More realistically, their new 'NAS' series is very affordable and perfectly adequate for my home NAS and I managed to squeeze enough out of my funding pool to go to the 6TB model. They're still fast enough and should be very reliable, while 24TB should be more than sufficient for the next three to five years given the rate at which my data has grown over the past decade.

    As to why six disks: with two parity drives (RAIDZ2), that leaves the data striped across four disks. Earlier versions of ZFS had 'sweet spot' configurations where the number of data disks was a power of two, so RAIDZ1 should have had 5 or 9 disks and RAIDZ2 should have had 6 or 10. Apparently the latest versions of OpenZFS mitigate the issues that required these optimisations, but there's very little downside to erring on the side of caution and sticking with the original configuration recommendations. Besides, six disks worked out quite well when determining the budget. ;)
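
    To make the layout concrete, here's roughly what the pool looks like from the command line. This is just a sketch - FreeNAS builds the pool from the GUI (using gptid labels rather than raw device names), so the names below are placeholders:

    Code:
    # Six disks, any two of which can fail without losing the pool
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5

    # Usable space is roughly (6 - 2) x 6TB = 24TB raw, less ZFS overhead
    # and the usual advice to keep the pool under ~80% full
    zpool list tank
    zpool status tank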

    As I mentioned earlier, ZFS wants lots of RAM to play with (allegedly, although again, if you want to understand why, I direct you to the FreeNAS manual and forums, where some great introductory guides are available). Adding additional services and features like PLEX also come with significant memory overhead. This board allows up to 64GB to be installed, which is fantastic, but 16GB modules blow the budget right now. 32GB should be perfectly adequate for what I'm doing anyway - this thing won't be handling lots of parallel workloads, it will mostly be reading and writing sequential data from one or two users at a time. I've gone for Crucial here, as it is the most reliable brand on the QVL for that board.

    At this point I should probably consider what chassis to house this in. There are surprisingly few Mini-ITX cases designed for NAS duty. Originally I wanted to stick this in a 1RU 19" rackmount server chassis with hot-swap HDD bays, but I was quickly put off when I looked up a pricelist and realised what I'd be up for. While a nice idea, that was an unnecessary waste of good build money for me at this point. I have found some 2RU chassis that would do the job, but none with the correct number of drive bays, nor the flexibility to allow me to customise them (rack chassis and servers aren't designed with the sort of flexibility that PCs have). So far, the only thing I've found that allows me to build precisely what I'd like is a 4RU chassis, and that's just stupidly large. So I'm going to keep looking, and I may very well transplant everything into a 2RU chassis a little down the track if I can find the right configuration. (Just a note here about cooling and noise - six 6TB HDDs can get rather warm, and it's important to keep drives as cool as possible for longevity. Stacking them in a 2RU rack chassis means you're going to have to run small fans at very high speed to get adequate airflow through it (and if you can't cool them sufficiently you'll be fighting a losing battle against heat soak). In short, don't build a low-profile rack NAS unless you have a rack in a cooled server room to stick it in. Seriously. I've seen people run repurposed rack server gear in their house and it sounds like they're a jet hobbyist...)

    So with a rack chassis off the table, I've settled on the Lian-Li PC-Q35B. With 5 5.25" bays I can get up to 8 3.5" drives in hot-swap bays. I'll only need six of those, so a five disk mobile rack with an additional single-bay hot-swap tray gives me room for the six 3.5" drives, leaving one 5.25" bay spare (which I'll come back to shortly).

    For the PSU, I'm in overkill territory, but for the extra $50 over what I was going to spend anyway that's OK - I'm using a SeaSonic SS-660XP2 F3 80Plus Platinum 660W PSU (I would have happily gone down to 450 or 500W, if they made such a thing). It's one of the most rock-stable PSU models I've ever seen, so this NAS will have nothing but clean, stable power that I can bank on. At a bit over half the price of just one of the HDDs, I'm quite happy with the cost-to-confidence ratio there ;)

    Finally, there are the OS drive(s). When installing FreeNAS you can select more than one drive and it will create a mirror on the second drive. This provides significant reliability in case one drive fails - you don't have to reconfigure a fresh FreeNAS install from scratch. Couldn't be easier.

    As you may know, FreeNAS is usually run from a USB flash drive (or in our case, two). With recent versions of FreeNAS you can now happily select an SSD (or two) instead, which is great, but as FreeNAS doesn't take up much space (16GB is more than sufficient, including future upgrades) it is a bit of an expensive prospect for almost zero performance improvement in a running box.

    You see, the reason FreeNAS is run from USB flash in the first place is because the OS resides entirely in memory after initial bootup - there is no drive access to speak of unless you make permanent configuration changes or upgrade the OS. So performance isn't going to suffer running from USB Flash. Upgrades will - instead of a few minutes, they might take twenty minutes to half an hour, apparently. But if you can live with that, the only other tangible advantage is the greater reliability of SSDs, which while a substantial difference, still isn't something you're likely to ever need to worry about. It's a difference of maybe $20 for two 16GB flash drives, to maybe $60 if you want to get two top-end 32GB drives, vs $115 to $150 for a pair of entry-level SSDs. That's a heck of a difference. Were we still at parity with the USD, a pair of SSDs would be far more palatable, but prices have gone up 30 to 35%.

    Oh, and in case you're wondering, FreeNAS won't boot from USB 3.0 at this point - in fact, USB 3.0 is not considered well enough supported even for writing to external backup drives. USB 3.0 flash drives will work fine in USB 2.0 ports, though (just not the other way around). There are also lots of reports of the installer being finicky with USB 3.0 drives - I personally had an issue with my Kingston HyperX drive, where an Apacer USB 2.0 drive was able to run the installer successfully. So be warned - USB 3.0 gear should be avoided unless you do your research and find a drive that lots of people using FreeNAS have had success with.

    So now I'm going to go and do the opposite of what I suggested - after evaluating my options for a few days I found a couple of Transcend SSD370 64GB drives, which are just $112 for the pair. While a hell of a lot more than $30, I was actually leaning towards a pair of Corsair 32GB drives which were closer to $35 each (for reliability, as performance would be identical to pretty much everything else maxing out on USB 2.0), so that $80 difference is really more like $40, and can be easily justified in my case for time saving - FreeNAS was upgraded no less than four times in just one week in February, which would have knocked my NAS offline for two to three hours all up if I was upgrading USB 2.0 drives. If I use SSDs I slash that time dramatically, so I can justify that cost difference. You may disagree entirely - that's why I put forward both arguments, so you can see why you might prefer to go one way or the other. Neither is wrong - it depends on your own circumstances and choices (and budget).

    You may be wondering where SSDs are used for caching - if you know about ZFS already you may be familiar with the terms ZIL and L2ARC. For the interested reader wanting to understand the use of ZFS in a large multi-user environment or as a database store, I direct you to read and understand these concepts thoroughly. For the casual reader, all you need to know is that the ZIL (ZFS Intent Log) is a log for synchronous writes (often loosely described as a write cache) and the L2ARC (Level 2 Adaptive Replacement Cache) is a second-level read cache, that you need a discrete drive for each (i.e. you never put them on the same SSD, or on an SSD that shares functions with anything else), and that both will provide precisely zero benefit for my usage (and indeed nearly every home NAS is in the same boat). Yes, for multi-user environments hammering the NAS at the same time, and for database servers, these are indispensable technologies. For a home NAS they are a complete waste of time and money.
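
    For the curious, if you did have a workload that justified them, the cache devices get attached to an existing pool along these lines (device names are placeholders, and a SLOG device should really be an SSD with power-loss protection):

    Code:
    # Dedicated SLOG device to hold the ZIL for synchronous writes
    zpool add tank log ada6

    # Separate SSD as L2ARC (second-level read cache)
    zpool add tank cache ada7

    # Confirm the vdev layout
    zpool status tank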

    One extra disk that has snuck into the build is the backup drive. I have around 3.5TB of non-movie data to backup. Choosing a 6TB drive gives me plenty of room to grow without maxing it out any time soon. It also means the drive isn't going to be suffering terrible performance penalties for being too full. As I mentioned, FreeNAS doesn't play nicely with USB 3.0 yet, and I want an easily removable drive for this function. I could have used another hot-swap bay and just been satisfied with pulling the bare drive out in case of emergency, but once I decided to use the two SSDs instead of USB flash drives I no longer had a free bay to use. eSATA seems the way to go, and is recommended by several people in the FreeNAS community. An eSATA card and caddy aren't too expensive, so I'll give them a shot and see how they go - if everything works reliably then great! Hopefully in the not too distant future USB 3.0 will be properly supported and I can then fall back on that if necessary.

    Because I'm intending to do partial backups to this external drive, I'm not intending to use snapshots of the whole zpool. Instead, I'm going to script rsync to back up and synchronise specific folders (essentially everything other than the Video folder). The first time this runs it will take hours, but after that it should be mere minutes. I'm only going to schedule it to sync once a day at about 3am - while I could schedule it to sync every hour, which would keep each sync short, it is extremely doubtful that anything I'm doing will be so unrecoverable or sensitive that I need such hyper-paranoid backups. I will also minimise wear on the disk by having it only spin up once a day. (There is research showing that HDD life is extended by reducing the number of times a drive is spun down, so drives that remain online 24/7 can actually have longer lives than those in PCs that come on and off throughout the day and take a break at night. I don't really want that drive operating constantly though, so having it come on just once a day to sync and then spin down is the best compromise.)
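
    The script itself will be nothing fancy - something along these lines (paths, pool and folder names are made up for illustration), kicked off by a FreeNAS cron job at 3am:

    Code:
    #!/bin/sh
    # nightly-backup.sh - sync everything except Video to the eSATA drive
    SRC="/mnt/tank"
    DST="/mnt/backup"

    for folder in Documents Photos Music Software; do
        rsync -a --delete "${SRC}/${folder}/" "${DST}/${folder}/"
    done

    # crontab entry (or add it as a cron job in the FreeNAS GUI):
    # 0 3 * * * /root/nightly-backup.sh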

    The final consideration isn't really part of the build at all, but it is fairly critical: the UPS. Any and every data storage device should derive its power from a UPS, particularly those using some sort of RAID with write caching. Ideally you'll have a high-efficiency UPS providing clean power with excellent filtering circuitry. I highly recommend an 'online' or 'double conversion' UPS - these rectify the incoming AC to DC (keeping the batteries charged from the same DC bus) and then invert it back to AC, resulting in a clean waveform that doesn't fluctuate. They are also far less damaging to batteries - an 'online' UPS will give you two to three times the battery lifespan that a 'line-interactive' UPS will. So while they are initially quite expensive, they pay off in the long run (well, as long as you stick to entry-level models - you'll never get your money back on a 10,000VA rack model!).
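
    As an aside, FreeNAS can monitor the UPS through its built-in UPS service (NUT under the hood) and shut the box down cleanly before the battery runs flat. A minimal sketch of the underlying NUT config, assuming a USB-connected unit (the GUI exposes the same settings, so you wouldn't normally edit this by hand):

    Code:
    # ups.conf (FreeNAS generates this from the UPS service settings in the GUI)
    [homeups]
        driver = usbhid-ups
        port = auto
        desc = "Double-conversion UPS feeding the NAS"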

    So here's my component list for easy reference:

    ASRock C2750D4I Mini-ITX (Avoton Atom C2750)
    Lian-Li PC-Q35B case
    Crucial 8GB DDR3 ECC 1600MHz 1.35v RAM x4 (32GB)
    HGST Deskstar NAS 6TB HDD x6 (RAIDZ2)
    WD Caviar Green 6TB (external backup drive)
    Transcend SSD370 64GB x2 (OS drives)
    Seasonic SS-660XP2 F3 80Plus Platinum 660W PSU
    HDD Mobile Rack SATA/SAS 5 x hot-swap drive bays in 3 x 5.25"
    Welland EZStor ME-751J Trayless HDD Rack 5.25" Bay for 3.5" SATA
    Welland EZStor ME-240 Dual 2.5" SATA mobile rack
    Vantec NST-330SU3-BK NexStar 3.5" eSATA external enclosure
    Astrotek eSATA PCI-Express card
    5.25" to 2.5" bay adapter

    And now for some photos (or it didn't happen, right?) Yes, my photos are lame. I'm only putting them here so you can see what it ended up looking like. This is definitely a build that works perfectly well without photos, but I anticipate that if I didn't include them it would be the first question asked. So photos provided. Don't flame me over them - I don't care.

    Here's the collection of parts:
    [​IMG]

    And stripped bare of their packaging:
    [​IMG]

    Assembled, but prior to PSU installation:
    [​IMG]

    With PSU (not much to see here):
    [​IMG]

    Drives ready to go:
    [​IMG]

    All locked up and ready for action!
    [​IMG]

    Happy to answer any questions / receive feedback etc. although I'll readily admit that questions about FreeNAS will most likely be referred to their support community - I happily admit I'm not an expert on the topic!
     
    Last edited: Jul 27, 2015
  2. BigDave

    BigDave Member

    Joined:
    Nov 12, 2007
    Messages:
    3,701
    Location:
    ADELAIDE/5018
    That's a cracker little set up mate :thumbup:

    Where did you get the mobo from?
     
  3. KudOZ

    KudOZ Member

    Joined:
    Jun 28, 2001
    Messages:
    170
    Location:
    Brisbane, Australia
    Looks beautiful mate, got an estimate for the overall cost?
     
  4. OP
    OP
    SiliconAngel

    SiliconAngel Member

    Joined:
    Jun 27, 2001
    Messages:
    627
    Location:
    Perth, Western Australia
    Thanks guys :)

    I purchased mine from Altech Computers, an ASRock distributor. But you can find them available in Australia easily enough.

    Around $4.5k all up. There's $550 in memory, nearly $3k in drives, a $200 PSU, a $500 mainboard, and the rest in the case, hot-swap bays etc. It's about $5k at retail pricing, although if you shop around you can get close to what I paid.
     
  5. jtir

    jtir Member

    Joined:
    Oct 1, 2003
    Messages:
    1,167
    Location:
    Sydney
    Very well thought out build.

    Thanks for the tip on the motherboard, will look at getting one as well.
     
  6. Sarsippius

    Sarsippius Member

    Joined:
    Apr 14, 2003
    Messages:
    846
    Location:
    Darwin
    I use ZOL on my NAS, mine is similar to your NAS in a number of ways but is dialled back quite a lot from your full on, cover all bases approach :)

    I think for a home NAS the RAM requirements for ZFS are far removed from what is required for a full-blown server. Mine is fine with a single 8GB stick & I'd be surprised if it even uses half of that.

    Where you talk about your backup drive, assuming you wanted to use zfs snapshots couldn't you simply make the video folder a second dataset and snapshot the main dataset?
     
  7. someon3

    someon3 Member

    Joined:
    May 30, 2004
    Messages:
    137
    Great setup, and I like the way you planned your data storage, from clean power to battery backup, then to ECC RAM and a server-class board and CPU, then redundant parity in a zpool, all on a low-power platform. I'd love to have one like it but probably don't really need one right now.

    The only thing you will really need to keep an eye on is drive temperature - log it across a month or so. Under busy conditions and in an Australian summer, sandwiching drives back to back like that with no air gap in between is really asking for trouble. I've seen toasted drives on a few occasions.

    Good luck with the build.
     
  8. OP
    OP
    SiliconAngel

    SiliconAngel Member

    Joined:
    Jun 27, 2001
    Messages:
    627
    Location:
    Perth, Western Australia
    Thanks for the suggestion :) I probably wasn't clear enough with my original statement though - I don't want rollback, I want a complete backup. Snapshots are reliant on the original dataset existing in a healthy state. What I want is a complete, independent backup, which snapshots only give you if you're also doing replication to a second storage box :)
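
    For what it's worth, if I did go the replication route down the track, splitting Video into its own dataset and replicating the rest would look roughly like this (pool and dataset names invented for the example):

    Code:
    # With Video split off into its own dataset, snapshot just the rest
    zfs snapshot -r tank/data@nightly-20150421
    zfs send -R tank/data@nightly-20150421 | zfs receive -F backup/data

    # Later runs only send what changed since the previous snapshot
    zfs send -R -i tank/data@nightly-20150421 tank/data@nightly-20150422 | zfs receive -F backup/data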
     
  9. OP
    OP
    SiliconAngel

    SiliconAngel Member

    Joined:
    Jun 27, 2001
    Messages:
    627
    Location:
    Perth, Western Australia
    Thanks very much :) Yes, that middle 80mm fan is something I'm looking at. I'm going to get temperature monitoring and logging working as a priority, then throw some sustained activity at the array and see how it holds up. Also looking at replacing the rear fan shroud to adapt it to 120mm. I might need to take some tools to the internal rear plate of the hotswap chassis in the process, too - the ventilation holes are really very restrictive.
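
    As a starting point I'll probably just poll the drives with smartctl from a cron job and log the results - something like this (device names assumed):

    Code:
    #!/bin/sh
    # log-temps.sh - append each drive's temperature to a log file
    LOG="/mnt/tank/drive-temps.log"
    date >> "$LOG"
    for disk in ada0 ada1 ada2 ada3 ada4 ada5; do
        temp=$(smartctl -A /dev/$disk | awk '/Temperature_Celsius/ {print $10}')
        echo "$disk: ${temp}C" >> "$LOG"
    done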
     
  10. Multiplexer

    Multiplexer Member

    Joined:
    Feb 26, 2002
    Messages:
    2,050
    Location:
    Home
    I really like your build and will probably get the same parts. Saves me the trouble of researching.

    Which model is the RAM? Crucial 8GB DDR3 ECC 1600MHz 1.35v RAM? I cannot find these online
     
  11. OP
    OP
    SiliconAngel

    SiliconAngel Member

    Joined:
    Jun 27, 2001
    Messages:
    627
    Location:
    Perth, Western Australia
    Thanks :) Yeah someone on the FreeNAS forums said the same yesterday - I'm flattered there has been so much interest.

    That RAM is CT102472BD160B. EYO have it for a reasonable price, but they're out of stock.
     
  12. ae00711

    ae00711 Member

    Joined:
    Apr 9, 2013
    Messages:
    1,317
    beautiful build :thumbup:

    but don't delude yourself - there is no way I read that fudging wall of text! :p:lol:
     
  13. HAVIC

    HAVIC Member

    Joined:
    Jun 16, 2002
    Messages:
    359
    Location:
    /dev/nul
    Nicely done! :thumbup:
     
  14. OCMunkee

    OCMunkee Member

    Joined:
    Jun 28, 2004
    Messages:
    1,340
    Location:
    Melbourne, VIC
    Killer build, mate! Unlike ae00711 I actually read the entire wall of text. It's something I've always had a keen interest in, and have been following the development of FreeNAS, ZFS and ReFS keenly.

    You now have me worried that my personal NAS/Server/Plex box could be backing up rotten data. However I feel (rather optimistically, perhaps) that ReFS will handle the bulk of the data healing itself.

    I opted to go for a 15W AMD machine with 16GB DDR3 (non-ECC) and 6x4TB drives, with 4x4TB mirrored for most usage and 2x4TB as the "backup" drive. Each is a "RAID1-style" ReFS virtual drive.

    I'm curious if there's an easy comparison we can do as we've gone for similar purposes but (very) different budgets, and different use-cases.

    Love to chat about it at some point.

    --
    Nathan
     
  15. Skitza

    Skitza Member

    Joined:
    Jun 28, 2001
    Messages:
    3,746
    Location:
    In your street
    Nice build.:thumbup:

    When you say the economics don't add up... you've actually spent slightly more on your system than an off-the-shelf pre-built one that fits your bill, with lower power usage as well. I get that you may not like their OSes and that's your choice, but what you've outlined in your requirements is already available.

    Which systems did you cross evaluate? Just curious.
     
    Last edited: Apr 21, 2015
  16. thormania

    thormania Member

    Joined:
    Jun 12, 2004
    Messages:
    839
    Location:
    Brisbane
    Great thread. I'm in the process of migrating from an Ubuntu server set up as an HTPC, server, backup etc. machine, and it did everything badly.

    So I'm switching to a Chromecast (and later maybe a compute stick if I need more grunt) and turning the HTPC into a FreeNAS machine.

    My budget isn't where yours is at, however I'm totally nicking some of your ideas. Thanks for listing part numbers etc. since I'm totally getting that case and drive mount.

    FreeNAS is terrific already. I've got many server functions, proper ACLs set up and all of the data shares working within about 30 mins - including a Time Machine backup. Time Machine setup on Ubuntu took me the best part of 2 hours following a guide; I did this one on intuition. The GUI is great and the thought process seems similar to the Windows Server admin duties I've done in the past, so it made sense to me.

    My setup is only five 2TB drives in RAIDZ1. I only have about 4TB of data at the moment and it's backed up to an external 4TB drive. I still have a lot of room to grow, and by the time I outgrow this I'll just build a new machine and copy the data across on whatever storage medium is cost effective in a few years. Mind you, I've had this running in some form or another for about 1.5 years now and I've still got years of space to go, I guess.
     
  17. lench

    lench Member

    Joined:
    Dec 9, 2002
    Messages:
    2,136
    Location:
    Sydney, inner-outer- west
    A good example of buy once, buy right :eek:
    Any word on how much power this thing draws?
    What's the heat output like? If it's anything like my old Windows JBOD/RAID10 setup, it'll be like a space heater in summer and as loud as a washing machine on spin cycle :weirdo:
     
  18. Claymen

    Claymen Member

    Joined:
    Jan 27, 2002
    Messages:
    98
    Location:
    Perth, WA
    I had considered building a dedicated NAS like this but the power consumption vs performance has always put me off. Would be interested to see what this setup draws when idle and also under load.

    I'm currently using an 8-bay eSATA chassis filled with disks and a dual-core + HT i7 laptop. The laptop pulls about 12-16W measured at the socket under high load. Currently loaded with 16GB of RAM, it also runs as a Hyper-V host with 2x 256GB SSDs inside. The biggest power draw for me is the drives, which are all 2-3TB.

    Combined with FlexRAID I can take advantage of many of the power saving features that typically break block-based RAID.
     
  19. Multiplexer

    Multiplexer Member

    Joined:
    Feb 26, 2002
    Messages:
    2,050
    Location:
    Home
    Tried both IJK and SkyComp and neither has the ASRock C2750D4I in stock. Guess I will have to wait.
     
  20. OP
    OP
    SiliconAngel

    SiliconAngel Member

    Joined:
    Jun 27, 2001
    Messages:
    627
    Location:
    Perth, Western Australia
    No file system, whether ReFS, ZFS or BTRFS, can heal damage caused to files when they are written, because as far as the file system is concerned that is what you wanted to write. Scrubs against parity data protect against bad sectors on a single drive. The only way to protect against memory faults corrupting data on the fly as it is written to the drives is to use ECC memory. The only way you'll be able to tell if you have written bad data is if you read those files back after you've written them.

    Now, the likelihood of memory errors as a result of things like solar radiation is relatively low. But anything from faulty memory to power fluctuations or a faulty (or just not very good) PSU can cause memory corruption leading to silent data corruption. You could easily have a bad memory chip that isn't bad enough to affect system stability, but is bad enough to corrupt, say, 4% of data writes. How can you be sure this isn't happening? In a PC you are likely to notice eventually as stability will suffer, but in a NAS? That could go on for months to years before you figure it out.

    Hence my earlier statement - if you care about the reliability of your data, you need ECC RAM.
     
