My aim was to migrate storage, download services (torrent, sabnzbd, sickbeard, etc.), TV server, Plex and other server duties from my current HTPC to my newly installed rack in the storeroom under my house. I also wanted to eventually set up a VM lab and a Windows Server 2012 R2 server, and transfer the router function from my existing wifi router to a VM on the server. Due to an unpublished change in the specifications for the rack, it ended up being only 540mm deep instead of the 600mm I had anticipated, restricting my options in terms of the hardware it could accommodate.

My priority was to expand my storage as my HTPC was nearing its capacity. I was after a minimum of 4 HDD bays to accommodate my future storage requirements. I considered building an all-in-one custom PC based on Server 2012 running FlexRAID in a VM; however, upon further investigation I discovered it isn’t recommended to run FlexRAID in a VM, particularly if you don’t really know what you’re doing when it comes to virtualisation (i.e. me). I didn’t really like the other software storage solutions available for Server 2012 and didn’t want to invest in a RAID controller, so I decided to separate the storage and server roles into separate machines.

Storage

For the storage I considered a custom-built PC based on FlexRAID. Due to the short-depth restriction of my rack I was limited to a maximum case depth of 390mm, which left the Norco RPC-230, RPC-231 and RPC-430 as the three rackmount options available. I also investigated some SFF tower and desktop cases. When I priced up the build, the cost was getting pretty close to a pre-built NAS, and the height of some of the cases started to become an issue as they took up too much of my rack space. Using a NAS avoided the hassle of troubleshooting a custom build and learning FlexRAID. It also offered front-mount hot-swap drive bays, which were absent on the other available case options.
I therefore decided to go with a NAS. Unfortunately the shortest-depth rackmount NAS I could find was still 430mm deep and couldn’t be accommodated by my rack. After some research I settled on a Synology DS414; however, at the last minute I changed my mind and bought a Synology DS1513+.

The DS1513+ appealed to me for a number of reasons. First, it has four LAN ports compared to two on the DS414. This means I can keep one LAN port as a management LAN and run a dedicated VLAN, with Jumbo Frames enabled, on the three other ports for iSCSI MPIO back to my server, whereas on the DS414 I would only have one port for my iSCSI VLAN. Secondly, I was initially purchasing only 3 x 3TB HDDs to run in RAID 5, with the intention of adding more drives later as my storage needs increase. With a 4-bay NAS I could only add a single 3TB drive; to increase storage beyond that I would need to start swapping out the 3TB drives for larger ones. With a 5-bay NAS (and using Synology Hybrid RAID) I can add 2 larger-capacity drives and gain the full capacity of those drives without swapping out any of the existing drives. My logic was that by the time I’m looking to increase storage capacity, the cost of 4TB or 5TB drives will have dropped to where 3TB drives are at the moment, so I would be buying the larger capacity and would want to use it.

For the hard drives I settled on the Hitachi HGST 3.5” NAS 3TB 0S03662 drives. They were only $10/drive more than the WD Red 3TB drives, are faster and have a good reputation for reliability. The downside is that they consume a little more power than the WD Reds and are a little noisier.

Server

I intended to run the NAS as an interim measure until I was ready for a server, but found it had a couple of limitations.
Setting up users, groups and file sharing was a little more painful than I was used to on my HTPC, and the share couldn’t be indexed by Windows (I like the ability to search for things relatively quickly). Also, the SynoCommunity package repository was down for an unknown time and I couldn’t get the add-on packages I needed. I therefore decided to bring forward my server build.

I required a short-depth case less than 390mm deep to fit in my rack. It also had to accommodate an mATX board, as I needed to install my existing TV tuner card and a multi-port LAN adapter. I looked at a 1RU HP Proliant DL320e Gen8 v2 server which uMart had on special for $499. However, only the 4 x SFF HDD version was available and I wanted to relocate some existing 3.5” drives into the server. I also wasn’t sure whether I could put two PCIe cards on the riser. I couldn’t find a pre-built server under 390mm deep, so I settled on a custom-built 2RU server.

Case: Norco RPC-230 – This is a 2RU-high x 387mm-deep rackmount case, one of only three rackmount cases under 390mm deep that I could find available in Australia. The Norco RPC-430 is a 4RU case and would have occupied more of the rack; while it has 6 x 3.5” + 2 x 5.25” bays compared to 4 x 3.5” and 1 x 5.25” on the RPC-230, my primary storage was in the NAS so I didn’t need the extra capacity. The RPC-231 is almost identical to the RPC-230 except that it has 2 x 3.5” and 2 x 5.25” bays; as I wasn’t installing a DVD drive in the server I preferred having more internal 3.5” drive bays.

The RPC-230 is a nice solid rackmount case which has been able to accommodate the 2 x SSDs and 1 x 3.5” drive I’ve put in it. It can easily accommodate 2 x 3.5” drives in the two bays above the motherboard tray, although they only just clear the stock Intel CPU cooler (good cable management is critical to avoid contact with the fan).
A third drive can be placed in the other middle bay, but in my case it slightly clashes with the USB3 header feeding my front USB3 port. I could get around this by purchasing a low-profile internal USB3 adapter cable. The fourth bay, in front of the PSU, can accommodate a 3.5” drive, but the cables will be squashed up against the cables from the PSU.

The individual drive trays are held by two screws and are quite easy to release. They only have holes for 3.5” drives, and then only for bottom mounting, not side mounting. There are no anti-vibration mounts, but the case has been solid enough that this hasn’t been an issue so far. I initially mounted my SSDs using just one of the bottom screw holes, which was sufficient to hold them; I found that the drive screws supplied with my case were a slightly different thread to my SSDs. I’ve since changed over to an SSD adaptor bracket which can accommodate 2 x SSDs in one 3.5” bay. The bracket doesn’t have any bottom screw holes and therefore sits loose in the drive tray. This is secure enough for me given that the drives don’t spin and the server won’t be getting moved around.

In order to install the motherboard or change out the case fans (which you will want to do), the whole hard drive shelf has to be removed. This also involves removing the rackmount ears so that the screws for the shelf can be accessed. The two 80mm case fans are very noisy and only have molex connectors, so they can’t be connected to the motherboard fan headers without an adapter. I ended up replacing them with two Noctua NF-R8 redux-1800 PWM fans, which are much better. The coating on the outside of the case does chip and scratch easily, and you need to be careful when tightening the screws as they are quite soft. The screws are also quite small, so you need to be careful not to lose them. The case doesn’t come with a manual, so you need to figure out how to take everything apart and where all of the screws go on your own.
It’s pretty straightforward though. One thing to be aware of is that the case can only accommodate a PSU with a front-to-back airflow path (i.e. air is drawn in from the front of the PSU and exhausted out the back). The vast majority of PSUs now have a perpendicular airflow path (i.e. air is drawn in from the top or bottom of the PSU and exhausted out the back), but the case has no air intake for the PSU on the bottom or top, as most server cases are designed for front-to-back airflow. This severely limits PSU choice. Rackmount rails are available for the case, but at 20 and 26 inches in length they are too long for my rack, and kind of defeat the purpose of a short-depth case. The case also only has USB2 ports on the front and no USB3 ports.

Motherboard: Intel Server Board S1200V3RPL – I required a mATX board to accommodate my TV tuner card and a multi-port LAN card. As the server was going to be located in a storeroom under my house, I wanted something with remote management and KVM capabilities; I didn’t fancy having to carry a monitor and keyboard downstairs and work in the cold if I had to access the BIOS or boot options. I also wanted an onboard Intel NIC. This led me away from consumer boards to a server board. The S1200V3RPL was the cheapest socket 1150 mATX server board I could find and still has plenty of features: 4 x PCIe slots, 6 x SATA3 ports, 2 x 1Gb LAN ports, 1 x USB3 header, 1 x USB2 header, an internal USB2 Type A port which allows you to plug in a USB key or other device, 2 x external USB2 ports and 2 x external USB3 ports. It supports remote management and KVM through the onboard LAN; however, I added the AXXRMM4 Remote Management Module, which uses its own dedicated LAN port. The motherboard can accommodate up to 32GB of RAM. The BIOS has several fan speed control options, and controls the CPU and system fan headers to keep fan noise to a minimum.
CPU: Intel Xeon Processor E3-1240 v3 – As I was intending to run a VM lab, I wanted a CPU with virtualisation and hyper-threading support. I went for a Xeon over a Core i7, as the Xeon was cheaper, and chose the E3-1240 v3 over the E3-1230 v3 as it was only $30 more.

RAM: 2 x Kingston 8GB PC3-12800 1600MHz ECC DDR3L RAM – 11-11-11 – Intel Validated ValueRAM – I was pretty much restricted to this RAM for compatibility with the motherboard if I wanted to use ECC RAM.

PSU: Zippy 400W PS-5400HG2 – I was pretty much limited to this PSU as I required a front-to-back airflow path (i.e. air is drawn in from the front of the PSU and exhausted out the back), since the case has no PSU intake on the bottom. The only other PSU with this airflow configuration I could find in Australia was the Antec 350W Basiq ATX; however, it didn’t have an 8-pin CPU motherboard power plug. The Zippy is a quiet and efficient PSU and, from what information I could find online, appears to be a reliable brand. Unfortunately it isn’t modular, so the spare cables take up quite a bit of room and slightly restrict the airflow at the intake. It comes with 5 x SATA plugs, 6 x molex plugs and 2 x GPU plugs.

LAN Card: Intel Ethernet Server Adapter I350-T4 – This is a great quad-port 1Gb LAN adapter. I wanted to run 2 teamed LAN ports for client access to the server, 2 LAN ports for the iSCSI VLAN, and 1 WAN port from my cable modem. It was easily accommodated in the case and comes with a low-profile bracket.

Drive Bay: Thermaltake Max 5 Duo SATA HDD Rack – One thing missing from the Norco case was a front USB3 port so that I could easily connect an external drive to the front of the server. I also wanted a hot-swap drive bay so that I could plug a 3.5” or 2.5” drive into the server without needing to open the chassis. The Max 5 Duo combines both the drive bay and USB3 ports. I haven’t tested it so far, but it fits very nicely into the case.
The red release tabs on the drive bay doors do stand out though, and may not fit with your colour scheme.

Case Fans: 2 x Noctua NF-R8 redux-1800 PWM fans – The original case fans supplied with the Norco RPC-230 chassis were extremely loud and could not be connected to the motherboard fan headers, so I decided to replace them with the Noctuas. I considered the NF-R8 PWM fans but went with the redux version as they were cheaper and I didn’t need any of the adaptors or accessories. In addition, the redux fans are dark grey and blend in with the front of the case, compared to the traditional Noctua beige and brown. The difference in noise is night and day, and the server is now almost silent. The fans have also allowed me to use the fan control options in the motherboard BIOS.

SSD: 2 x OCZ Vertex 3 120GB SSDs – I had two of these in my main gaming rig in RAID0 and recently upgraded it to a single Samsung 840 EVO. I’ve now repurposed the Vertex 3s to the server. I’m currently running them in RAID1 and use them for the host OS and guest VMs.

HDD: Samsung 500GB – I had this drive left over from an external drive and have installed it as a scratch and backup drive for the VMs.

HDD: WD Caviar Green 2TB WD20EZRX – I currently have this drive in my HTPC and will repurpose it into the server for use as a backup drive.

The Build

I wimped out on the build and had it done through TechBuy, where I purchased the parts. While I have built many PCs in the past, I was concerned that with the case being so tight everything might not fit together. There was no information on the maximum card length supported by the case, and I was also unsure whether there would be enough room behind the PSU to accommodate the spare cables and the Thermaltake Max 5 Duo. I figured it was less risk to let TechBuy build it, so that if there was a problem they could simply cancel the order or make changes prior to dispatch.
If I built it myself and found an issue during the build, I risked being stuck with parts I couldn’t use but had already purchased. Their build price was quite cheap at $55, so it was a no-brainer.

The service from TechBuy was great, from the initial enquiries through to the after-sales support. The rep I dealt with helped confirm that the components should be compatible prior to ordering, and did his best to price match. While not matching pricing from other sources, he was able to offer discounts of $5–$10 off most items. The build was shipped within 7 days of placing the order, despite some parts having to be obtained from their distributors, and was received overnight. The server was double-boxed and well packed. The overall build quality and cable management was very neat and of a high standard. A small folder was included in the shipment containing all of the spares, manuals and driver discs. The build team omitted a couple of case screws and the spare full-height bracket for the LAN card; however, when I contacted them these were posted straight away, at no cost and without question.

The Setup

I’ve been gradually installing and configuring the host and guest operating systems over the past few weeks. I have the three drives in the NAS configured in Synology Hybrid RAID, giving me 6TB of storage. I’ve created a single disk group and then a single storage volume over the drives. I’m using file-based LUNs for the iSCSI targets with thin provisioning enabled: a main storage LUN, a backup LUN, a WSUS store LUN and a Hyper-V LUN. I’m using one of the LAN ports as a management port, and the remaining three are on my iSCSI VLAN.

On the server I’m using Windows Server 2012 R2 as the main Hyper-V host. I considered running just the Core version, but I find the GUI easier to use during setup and configuration. I’ll probably convert it to Core once everything is set up and maintenance is minimised.
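As a rough sanity check on the storage layout, the numbers can be sketched out. This assumes the common SHR-1 rule of thumb (usable capacity is roughly the total of all drives minus the largest one); the 5TB expansion drives and the individual LUN sizes are made-up figures for illustration, not my actual configuration, and filesystem overhead is ignored.

```python
def shr1_usable_tb(drives_tb):
    """Rough usable capacity of a Synology SHR-1 array: the sum of all
    drives minus the largest (single-drive redundancy). Ignores
    filesystem overhead and TB vs TiB differences."""
    return sum(drives_tb) - max(drives_tb)

# Current array: 3 x 3TB, equivalent to RAID 5
print(shr1_usable_tb([3, 3, 3]))        # 6 -- matches the 6TB volume

# Possible future expansion (hypothetical 5TB drives): add two without
# swapping anything out. The extra 2TB on each new drive is usable
# because SHR can mirror it across the two new drives.
print(shr1_usable_tb([3, 3, 3, 5, 5]))  # 14

# Thin provisioning: (provisioned, written) per LUN in TB.
# Illustrative figures only -- the real LUN sizes aren't listed above.
luns = {"storage": (4.0, 2.5), "backup": (2.0, 0.8),
        "wsus": (0.5, 0.1), "hyper-v": (1.0, 0.4)}
provisioned = sum(p for p, _ in luns.values())
written = sum(w for _, w in luns.values())

print(provisioned)        # 7.5 -- over-committed against the 6TB volume
print(round(written, 1))  # 3.8 -- what the volume actually holds today
```

The over-commit is the point of thin provisioning: the LUNs can be sized for future growth while only consuming what has actually been written, at the cost of having to monitor real usage so the volume never fills.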
I have each of the LUNs on the NAS mounted by the host OS and directed to the relevant VM guest. I’ve teamed one LAN port from each of the onboard and I350-T4 NICs to be the main connection for the server. I have another three LAN ports connected to my iSCSI VLAN and have configured MPIO. The last LAN port I’m leaving spare for my firewall WAN port.

My VMs are:

VM1 – Windows Server 2012 R2 – This is my Active Directory server. It also runs the DNS, DHCP and WINS servers.

VM2 – Windows Server 2012 R2 – This is my main file server and shares the storage LUN from my NAS. It also runs my AD CS server. I intend to add the WSUS and Server Essentials roles to this VM, for backup to the NAS. I also intend to add a RADIUS server for use with my WiFi for WPA2 Enterprise authentication.

I’m finding that having the AD, DNS and DHCP servers within a VM causes issues for me when I restart the host server. The Network Location Awareness service on the host OS doesn’t detect the domain when the server boots, and configures the teamed NIC as a public LAN. I’ve been able to change it to a private LAN, but it still causes issues.

I’m planning to set up several other VMs:

VM3 – Windows Server 2012 R2 – I want to put my Argus TV server on a separate VM so that if there is a problem I can restart the VM without impacting other services.

VM4 – CentOS 7 – This will be my download VM for torrents, sabnzbd, sickbeard, etc. I’m also planning on installing Plex on this VM.

VM5 – Sophos UTM Firewall Home Edition – This will be my router.

I do have some pictures of the server and rack with everything installed, and will upload them when I have an opportunity to grab them off my camera. Thanks for reading this far.