Rackmount Home Server & VM Lab Build

Discussion in 'PC Build Logs' started by kesawi, Jul 15, 2014.

  1. kesawi

    kesawi Member

    Joined:
    Jul 3, 2012
    Messages:
    1,629
    Location:
    Brisbane
    My aim was to migrate storage, download services (torrent, sabnzbd, sickbeard, etc.), TV server, Plex and other server duties from my current HTPC to my newly installed rack in the storeroom under my house. I also wanted to eventually set up a VM lab and a Windows Server 2012 R2 server, and transfer the router function from my existing wifi router to a VM on the server.

    [IMG]

    Due to an unpublished change in the specifications for the rack, it ended up being only 540mm deep instead of 600mm deep as I had anticipated, restricting my options in terms of hardware it could accommodate.

    My priority was to expand my storage as my HTPC was nearing its capacity. I was after a minimum of 4 HDD bays to accommodate my future storage requirements. I considered building an all-in-one custom PC based on Server 2012 running FlexRAID in a VM, however upon further investigation I discovered it isn’t recommended to run FlexRAID in a VM, particularly if you don’t really know what you’re doing when it comes to virtualisation (ie me). I didn’t really like the other software storage solutions that were available for Server 2012 and didn’t want to invest in a RAID controller. I decided to separate the storage and server roles into separate machines.

    Storage
    For the storage I considered a custom-built PC based on FlexRAID. Due to the short depth of my rack I was limited to a maximum case depth of 390mm. This left the Norco RPC-230, RPC-231 or RPC-430 as the three rackmount options available. I also investigated some SFF tower and desktop cases. When I priced up the build, the cost was getting pretty close to a pre-built NAS, and the height of some of the cases became an issue as they took up too much of my rack space. Using a NAS offered less hassle by avoiding troubleshooting a custom build and learning FlexRAID. In addition it offered front-mount hot-swap drive bays, which would have been absent on the other available case options. I therefore decided to go with a NAS. Unfortunately the shortest rackmount NAS I could find was still 430mm deep and couldn't be accommodated by my rack.

    After some research I settled on a Synology DS414, however at the last minute I changed my mind and bought a Synology DS1513+. The DS1513+ appealed to me for a number of reasons. First, it has four LAN ports compared to two on the DS414. This means I can keep one LAN port as a management LAN and run a dedicated VLAN, with jumbo frames enabled, on the other three ports for iSCSI MPIO back to my server, whereas on the DS414 I would only have one port for my iSCSI VLAN. Secondly, I was initially purchasing only 3 x 3TB HDDs to run in RAID 5, with the intention of adding more drives later as my storage needs increase. With a 4-bay NAS I could only add a single 3TB drive; to increase storage beyond that I would need to start swapping out the 3TB drives for larger ones. With a 5-bay NAS (and using Synology Hybrid RAID) I can add two larger-capacity drives and gain their full capacity without swapping out any of the existing drives. My logic was that by the time I'm looking to increase storage capacity, the cost of 4TB or 5TB drives will have dropped to where 3TB drives are at the moment, so I'd be buying the larger capacity and would want to use it.
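
    For reference, enabling jumbo frames on the Windows end of that iSCSI VLAN is only a couple of lines of PowerShell. This is just a sketch: the adapter aliases are made up, and the "Jumbo Packet" display name/value strings vary by driver (these suit Intel NICs).

    Code:
    # Sketch: enable jumbo frames on the adapters dedicated to the iSCSI VLAN.
    # "iSCSI1"/"iSCSI2" are placeholder aliases - check yours with Get-NetAdapter.
    foreach ($nic in "iSCSI1", "iSCSI2") {
        Set-NetAdapterAdvancedProperty -Name $nic `
            -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
    }
    # Confirm the setting took
    Get-NetAdapterAdvancedProperty -Name "iSCSI*" -DisplayName "Jumbo Packet"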

    For the hard drives I settled on the HGST 3.5" NAS 3TB 0S03662 drives. They were only $10/drive more than the WD Red 3TB drives, are faster and have a good reputation for reliability. The downside is they consume a little more power than the WD Reds and are a little noisier.

    Server

    I intended to run the NAS as an interim measure for a while until I was ready for a server, but found it had a couple of limitations. Setting up users, groups and file sharing was a little more painful than I was used to on my HTPC, and the share couldn't be indexed by Windows (I like the ability to search for things relatively quickly). Also, the SynoCommunity package repository was down for an unknown time and I couldn't get the add-on packages I needed. I therefore decided to bring forward my server build.

    I required a short-depth case less than 390mm deep to fit in my rack. It also had to be able to accommodate an mATX board as I needed to install my existing TV tuner card and a multi-port LAN adapter. I looked at a 1RU HP ProLiant DL320e Gen8 v2 server which uMart had on special for $499. However, only the 4 x SFF HDD version was available and I wanted to relocate some existing 3.5" drives into the server. I also wasn't sure whether I could put two PCIe cards on the riser.

    I couldn't find a pre-built server under 390mm deep, so I settled on a custom-built 2RU server.

    Case: Norco RPC-230 – This is a 2RU-high x 387mm-deep rackmount case, and the only one of the three rackmount cases under 390mm deep that I could find available in Australia. The Norco RPC-430 is a 4RU case and would have occupied more of the rack; while it has 6 x 3.5" + 2 x 5.25" bays compared to 4 x 3.5" and 1 x 5.25" on the RPC-230, my primary storage was in the NAS so I didn't need the extra capacity. The RPC-231 is almost identical to the RPC-230 except that it has 2 x 3.5" and 2 x 5.25" bays. As I wasn't installing a DVD drive in the server I preferred having more internal 3.5" drive bays.

    The RPC-230 is a nice solid rackmount case which has been able to accommodate the 2 x SSDs and 1 x 3.5" drive I've put in it. It can easily accommodate 2 x 3.5" drives in the two bays above the motherboard tray, although they only just clear the stock Intel CPU cooler (good cable management is critical to avoid contact with the fan). A third drive can be placed in the other middle bay, but in my case it slightly clashes with the USB3 header feeding my front USB3 port. I could get around this by purchasing a low-profile internal USB3 adapter cable. The fourth bay in front of the PSU can accommodate a 3.5" drive, but the cables will be squashed up against the cables from the PSU.

    The individual drive trays are held by two screws and are quite easy to release. They only have holes for 3.5" drives, and then only for the bottom mounts and not the side mounts. There are no anti-vibration mounts, but the case has been solid enough that this hasn't been an issue so far. I initially mounted my SSDs using just one of the bottom screw holes, which was sufficient to hold them. I found that the drive screws supplied with my case were a slightly different thread to my SSDs. I've ended up changing over to an SSD adaptor bracket which can accommodate 2 x SSDs in one 3.5" bay. The bracket doesn't have any bottom screw holes and therefore sits loose in the drive tray. This is secure enough for me given that the drives don't spin and the server won't be getting moved around.

    In order to install the motherboard or change out the case fans (which you will want to do), the whole hard drive shelf has to be removed. This also involves removing the rackmount ears so that the screws for the shelf can be accessed. The two 80mm case fans are very noisy and only have molex connectors, so they can't be connected to the motherboard fan headers without an adapter. I ended up replacing them with two Noctua NF-R8 redux-1800 PWM fans, which are much better.

    The coating on the outside of the case does chip and scratch easily, and you need to be careful when tightening the screws as they are quite soft. The screws are also quite small, so you need to be careful not to lose them. The case doesn't come with a manual, so you need to figure out how to take everything apart and where all of the screws go on your own. It's pretty straightforward though.

    One thing to be aware of is that the case can only accommodate a PSU with a front-to-back airflow path (ie air is drawn in from the front of the PSU and exhausted out the back). The vast majority of PSUs now have a perpendicular airflow path (ie air is drawn in from the top or bottom of the PSU and exhausted out the back). The case has no air intake for the PSU on the bottom or top, as most server cases are designed for front-to-back airflow. This severely limits PSU choice.

    Rackmount rails are available for the case but at 20 and 26 inches in length, they are too long for my rack, and kind of defeat the purpose of a short depth case. It also only has USB2 ports on the front of the case and no USB3 ports.

    Motherboard: Intel Server Board S1200V3RPL – I required an mATX board to accommodate my TV tuner card and a multi-port LAN card. As the server was going to be located in a storeroom under my house, I wanted something with remote management and KVM capabilities; I didn't fancy having to carry a monitor and keyboard downstairs and work in the cold if I had to access the BIOS or boot options. I also wanted an onboard Intel NIC. This led me away from consumer boards to a server board. The S1200V3RPL was the cheapest socket 1150 mATX server board I could find and still has plenty of features: 4 x PCIe slots, 6 x SATA3 ports, 2 x 1Gb LAN ports, 1 x USB3 header, 1 x USB2 header, an internal USB2 Type A port which allows you to plug in a USB key or other device, 2 x external USB2 ports and 2 x external USB3 ports. It supports remote management and KVM through the onboard LAN, however I added the AXXRMM4 Remote Management Module which uses its own dedicated LAN port. The motherboard can accommodate up to 32GB of RAM. The BIOS has several fan speed control options, and controls the CPU and system fan headers to keep fan noise to a minimum.

    CPU: Intel Xeon® Processor E3-1240 v3 – As I was intending to run a VM lab, I wanted a CPU with virtualisation and hyper-threading support. I went for a Xeon over a Core i7 CPU, as the Xeon was cheaper. I chose the E3-1240 v3 over the E3-1230 v3 as it was only $30 more.

    RAM: 2x Kingston 8GB PC3-12800 1600MHz ECC DDR3L RAM - 11-11-11 - Intel Validated ValueRAM – I was pretty much restricted to this RAM for compatibility with the motherboard if I wanted to use ECC RAM.

    PSU: Zippy 400W PS-5400HG2 – I was pretty much limited to this PSU as I required a front-to-back airflow path (ie air is drawn in from the front of the PSU and exhausted out the back), as the case has no PSU intake on the bottom. The only other PSU with this airflow configuration I could find in Australia was the Antec 350W Basiq ATX, however it didn't have an 8-pin CPU motherboard power plug. The Zippy is a quiet and efficient PSU and, from what information I could find online, appears to be a reliable brand. Unfortunately it isn't modular, so the spare cables take up quite a bit of room and do restrict the airflow at the intake slightly. It comes with 5 x SATA plugs, 6 x molex plugs and 2 x GPU plugs.

    LAN Card: Intel Ethernet Server Adapter I350-T4 – This is a great quad-port 1Gb LAN adapter. I wanted to run two teamed LAN ports for client access to the server, two LAN ports for the iSCSI VLAN, and one WAN port from my cable modem. It was easily accommodated in the case and comes with a low-profile bracket.

    Drive Bay: Thermaltake Max 5 Duo SATA HDD Rack – One thing missing from the Norco case was a front USB3 port so that I could easily connect an external drive to the front of the server. I also wanted a hot-swap drive bay so that I could plug a 3.5" or 2.5" drive into the server without needing to open the chassis. The Max 5 Duo combines both the drive bay and USB3 ports. I haven't tested it so far, but it fits very nicely into the case. The red release tabs on the drive bay doors do stand out though and may not fit with your colour scheme.

    Case Fans: 2 x Noctua NF-R8 redux-1800 PWM fans – The original case fans supplied with the Norco RPC-230 chassis were extremely loud and could not be connected to motherboard fan headers, so I decided to replace them with the Noctuas. I considered the NF-R8 PWM fans but went with the redux version as they were cheaper and I didn't need any of the adaptors or accessories. In addition, the redux fans are dark grey and blend in with the front of the case, compared to the traditional Noctua beige and brown. The difference in noise is night and day, and the server is now almost silent. The fans have allowed me to utilise the fan control options in the motherboard BIOS.

    SSD: 2 x OCZ Vertex 3 120GB SSDs – I had two of these in my main gaming rig in RAID0 and recently upgraded it to a single Samsung 840 EVO. I've now repurposed the Vertex 3s to the server. I'm currently running them in RAID1 and use them for the host OS and guest VMs.

    HDD: Samsung 500GB – I had this drive left over from an external drive and have installed it as a scratch drive and backup drive for the VMs.

    HDD: WD Caviar Green 2TB WD20EZRX – I currently have this drive in my HTPC and will repurpose it into the server for use as a backup drive.

    The Build
    I wimped out on the build and had it done through TechBuy, where I purchased the parts. While I have built many PCs in the past, I was concerned that with the case being so tight everything might not fit together. There was no information on the maximum card length supported by the case, and I was also concerned whether there would be enough room behind the PSU to accommodate the spare cables and the Thermaltake Max 5 Duo. I figured it was less risky to let TechBuy build it, so if there was a problem they could simply cancel the order or make changes prior to dispatch. If I built it myself and found an issue during the build, I risked being stuck with parts I couldn't use but had already purchased. Their build price was quite cheap at $55 so it was a no-brainer.

    The service from TechBuy was great, from the initial enquiries through to the after-sales support. The rep I dealt with helped confirm that the components should be compatible prior to ordering, and did his best to price match. While he couldn't match pricing from other sources, he was able to offer discounts of $5 to $10 off most items. The build was shipped within 7 days of placing the order, despite some parts having to be obtained from their distributors, and was received overnight. The server was double boxed and well packed. The overall build quality and cable management was very neat and of a high standard. A small folder was included in the shipment containing all of the spares, manuals and driver discs. The build team omitted a couple of case screws and the spare full-height bracket for the LAN card, however when I contacted them these were posted straight away at no cost and without question.

    The Setup

    I've been gradually installing and configuring the host and guest operating systems over the past few weeks. I have the three drives in the NAS configured in Synology Hybrid RAID, giving me 6TB of storage. I've created a single drive group and then a single storage volume over the drives. I'm using file-based LUNs for the iSCSI targets with thin provisioning enabled. I have a main storage LUN, a backup LUN, a WSUS store LUN and a Hyper-V LUN. I'm using one of the LAN ports as a management port, and the remaining three are on my iSCSI VLAN.
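
    For anyone copying the iSCSI side, the host end can be scripted in PowerShell. A minimal sketch only; the portal address and IQN below are placeholders, not my actual values:

    Code:
    # Sketch: connect the Hyper-V host to the Synology target with MPIO.
    Install-WindowsFeature Multipath-IO
    Enable-MSDSMAutomaticClaim -BusType iSCSI    # let MPIO claim iSCSI disks

    New-IscsiTargetPortal -TargetPortalAddress "192.168.10.10"
    # Repeat per initiator port on the iSCSI VLAN so MPIO has multiple paths
    Connect-IscsiTarget -NodeAddress "iqn.2000-01.com.synology:nas.example" `
        -IsPersistent $true -IsMultipathEnabled $true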

    On the server I'm using Windows Server 2012 R2 as the main Hyper-V host. I considered running just the Core version but find the GUI easier to use during setup and configuration; I'll probably convert it to Core once everything is set up and maintenance is minimised. I have each of the LUNs on the NAS mounted by the host OS and directed to the relevant VM guest. I've teamed one LAN port from each of the onboard and I350-T4 NICs to be the main connection for the server. I have another three LAN ports connected to my iSCSI VLAN and have configured MPIO. The last LAN port I'm leaving spare for my firewall WAN port.
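
    The teaming step itself is a one-liner on 2012 R2. A sketch, with made-up adapter aliases:

    Code:
    # Sketch: team one onboard port with one I350-T4 port for client access.
    # "Onboard1" and "I350-1" are assumed aliases - list yours with Get-NetAdapter.
    New-NetLbfoTeam -Name "ClientTeam" -TeamMembers "Onboard1", "I350-1" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic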

    My VMs are (a quick role-install sketch follows the list):
    • VM1 – Windows Server 2012 R2 – This is my Active Directory server. It also runs the DNS, DHCP and WINS servers.
    • VM2 – Windows Server 2012 R2 – This is my main file server and shares the storage LUN from my NAS. It also runs my AD CS server. I intend to add the WSUS and Server Essentials roles to this VM, for backup to the NAS. I also intend to add a RADIUS server for use with my WiFi for WPA2 Enterprise Authentication.
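
    In case it's useful, the roles themselves are a one-liner to add from inside the guest; a sketch using the Windows feature names:

    Code:
    # Sketch: add the VM1 roles from inside the guest.
    # (Promotion to a domain controller is a separate step, e.g. Install-ADDSForest.)
    Install-WindowsFeature AD-Domain-Services, DNS, DHCP, WINS -IncludeManagementTools
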
    I'm finding that having the AD, DNS and DHCP servers within a VM causes issues for me when I restart the host server. The Network Location Awareness service on the host OS doesn't detect the domain when the server boots and configures the teamed NIC as a Public network. I've been able to change it to Private, but it still causes issues.
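
    For anyone who hits the same thing, changing the profile over is one line, and delaying the NLA service so the DC VM is up before the network gets classified is the other workaround I've seen suggested. A sketch; the interface alias is an assumption:

    Code:
    # Sketch: force the teamed NIC's profile to Private ("ClientTeam" is assumed).
    Set-NetConnectionProfile -InterfaceAlias "ClientTeam" -NetworkCategory Private

    # Suggested alternative: delay NLA so the DC VM boots before it classifies
    sc.exe config NlaSvc start= delayed-auto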

    I'm planning to set up several other VMs (a rough creation sketch follows the list):
    • VM3 – Windows Server 2012 R2 – I want to put my Argus TV server on a separate VM so that if there is a problem I can restart the VM without impacting other services.
    • VM4 – CentOS 7 – This will be my download VM for torrents, sabnzbd, sickbeard, etc. I’m also planning on installing Plex on this VM as well.
    • VM5 – Sophos UTM Firewall Home Edition – This will be my router.
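
    Spinning each of these up is only a few lines of PowerShell. A rough sketch; the VM name, paths, sizes and switch name are made up for illustration:

    Code:
    # Sketch: create one of the planned guests on the Hyper-V LUN.
    New-VM -Name "VM3-ArgusTV" -MemoryStartupBytes 2GB -Generation 1 `
        -NewVHDPath "V:\Hyper-V\VM3\VM3.vhdx" -NewVHDSizeBytes 60GB `
        -SwitchName "External vSwitch"
    Set-VMProcessor -VMName "VM3-ArgusTV" -Count 2
    Start-VM -Name "VM3-ArgusTV"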

    I do have some pictures of the server and rack with everything installed, and will upload them when I have an opportunity to grab them off my camera.

    Thanks for reading this far.
     
    Last edited: Jul 22, 2014
  2. OP
    OP
    kesawi

    kesawi Member

    Joined:
    Jul 3, 2012
    Messages:
    1,629
    Location:
    Brisbane
    Server prior to installing the hard drives. As can be seen, it's quite a tight build, particularly with the spare cables from the PSU, as there isn't really anywhere else to put them.

    [IMG]

    The server in its new home in my rack. The LEDs on the front of the Norco case are quite bright.
    [IMG]
     
  3. mad_mic3

    mad_mic3 Member

    Joined:
    Jan 18, 2009
    Messages:
    2,265
    Location:
    Nulkaba
    Nice write-up, plenty of interesting stuff :thumbup:
     
  4. davros123

    davros123 Member

    Joined:
    Jun 18, 2008
    Messages:
    2,913
    Good writeup, and it sounds like a well thought out journey!

    I suggest you keep an eye out for some more RAM, as I expect you'll find you'll need more than 16GB as you add VMs.

    I am in the process of moving my pfSense firewall/router out to another dedicated box, as it is a bit annoying having it in an all-in-one when the all-in-one is down for maintenance.
     
  5. Rezin

    Rezin Member

    Joined:
    Oct 27, 2002
    Messages:
    9,488
    Read about CARP. :)
     
  6. OP
    OP
    kesawi

    kesawi Member

    Joined:
    Jul 3, 2012
    Messages:
    1,629
    Location:
    Brisbane
    Thanks for the feedback. I had always planned that I might need some more RAM, but decided to start with the minimum I thought I could get away with and then add more later when required. I'm currently within my limits, but don't have the Argus TV VM, Sophos UTM VM or RAM drive running yet.

    You make a good point about having the firewall/router in the main box. It could be quite inconvenient if I need to take down the main box and still maintain internet access. All of this added complexity is great when it works, but when it doesn't, it's not as simple as just telling my partner over the phone to try power-cycling the router.
     
  7. OP
    OP
    kesawi

    kesawi Member

    Joined:
    Jul 3, 2012
    Messages:
    1,629
    Location:
    Brisbane
    Managed to get my Argus TV VM up and running. I had to install the Recorder component on the host Server 2012 OS, as Argus needs direct access to the physical hardware, and installed the Scheduler along with MySQL on VM2. It's maybe a fraction slower to start playing live TV and switch channels compared to the previous single-seat arrangement on my HTPC. It took a little bit of stuffing around to find the BDA drivers for Server 2012, but the tuner card drivers installed without issue and worked first time. I was able to successfully export my previous recordings and schedules from my HTPC and import them into the new server. UNC streaming works fine, but RTSP streaming doesn't work at all. I've installed a RAM disk on the host for the timeshift buffer, which doesn't thrash my hard disk as much.

    I've given up on running a CentOS VM under Hyper-V for my Plex server and download applications. Plex was crashing quite a lot, I had difficulty getting my Samba shares accessible, and the Hyper-V integration services were always producing errors. I don't have the time or patience to learn Linux and troubleshoot it. I've changed this over to a Server 2012 VM and now Plex is running fast and stable. I've managed to get Plex running as a service so I don't need to leave a user logged in. I just need to do the same for SickRage, sabnzbd, etc., which I have running on my NAS for now.
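
    For anyone wanting to do the service part, a service wrapper such as NSSM (nssm.cc) is one common way to do it. A sketch only, assuming nssm.exe is on your PATH and Plex is in its default install location:

    Code:
    # Sketch: wrap Plex in a Windows service with NSSM.
    nssm install PlexService "C:\Program Files (x86)\Plex\Plex Media Server\Plex Media Server.exe"
    nssm set PlexService Start SERVICE_AUTO_START
    nssm start PlexService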

    Have reformatted my HTPC and upgraded it to Windows 8.1. Now that it's only doing the job of media playback, it's so much more stable and snappier than before. I've had to resize my screen due to the overscan on the TV and discovered that the Metro apps don't like this, and will only work at the standard screen sizes. No real loss as I don't use any of them.

    Next step is to get WSUS up and running.
     
  8. Opticon

    Opticon Member

    Joined:
    Apr 12, 2009
    Messages:
    254
    Location:
    Perth, WA
    How does Plex go with transcoding videos as a VM? What resources do you have assigned to the VM?
     
  9. Ninja_Harbinger

    Ninja_Harbinger Member

    Joined:
    Jun 2, 2011
    Messages:
    1,032
    Location:
    A warp pipe near you
    There should be scaling options in the Nvidia software. Then you can run a standard resolution and it'll most likely look sharper.
     
  10. OP
    OP
    kesawi

    kesawi Member

    Joined:
    Jul 3, 2012
    Messages:
    1,629
    Location:
    Brisbane
    When running it in a CentOS VM it seemed to just suck up as much RAM as I could throw at it, and was quite unstable. Running in a Server 2012 VM it's been quite stable. I've given it 8 virtual CPU cores, startup RAM of 4GB and a dynamic RAM limit of 8GB (I added another 16GB of RAM to the server over the weekend). All other RAM and CPU resource settings were left at their defaults.
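
    In PowerShell terms that allocation looks like the following (a sketch; "VM-Plex" is an assumed name):

    Code:
    # Sketch: the Plex VM's CPU and dynamic memory allocation.
    Set-VMProcessor -VMName "VM-Plex" -Count 8
    Set-VMMemory -VMName "VM-Plex" -DynamicMemoryEnabled $true `
        -StartupBytes 4GB -MaximumBytes 8GB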

    Ran a quick test this morning with a 1080p stream to an iPod at the same time as a transcode for a sync of a 480i movie. The VM was using around 3.2GB of RAM (with 3.6GB allocated) and overall CPU for the host was sitting at around 40%, with Task Manager in the VM showing around the same. I'm not sure how this compares to running the same process on the host. There was a 50% improvement in the transcoding time for the sync compared to the CentOS VM (ie twice as fast). CPU usage did peak at 95% overnight when Plex was generating the media index files, but only during a brief period when some other maintenance activities were scheduled to run. Playback was fine with no skipping or stuttering.

    I've used the scaling option in the nVidia drivers. What the nVidia drivers appear to do is create a new custom screen resolution rather than using the standard screen size and putting black borders around it.
     
  11. evilasdeath

    evilasdeath Member

    Joined:
    Jul 24, 2004
    Messages:
    4,766
    Hey thanks for linking me this thread in my thread kesawi,

    I have been trying to work out whether the RMM module is required for IPMI or whether there is some functionality without it. It seems to differ between vendors what is provided by default, and they don't go into much detail about it.
    I honestly don't care whether it uses up one of the existing two GbE ports or goes through the active one.
     
  12. OP
    OP
    kesawi

    kesawi Member

    Joined:
    Jul 3, 2012
    Messages:
    1,629
    Location:
    Brisbane
    No worries. I think the board has an integrated BMC but requires at least the RMM4 Lite module (approx $50) to activate KVM and media redirection. It uses one of the integrated NICs to communicate (see section 7.1 of the detailed tech specs: http://download.intel.com/support/motherboards/server/sb/g84364004_s1200v3rp_tps_r1_3.pdf). The BIOS mentions being able to configure the BMC without the module (see page 120 of http://download.intel.com/support/motherboards/server/sb/g87275003_r1000rp_sg_r1_4.pdf), and this lets you start up/shut down the machine and check a number of the operating stats. Section 2.2 of the RMM4 User Guide (http://download.intel.com/support/motherboards/server/sb/intel_rmm4_ibwc_userguide_r2_72.pdf) confirms the additional functions available.
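
    Those baseline BMC functions (power control and sensor stats) should be reachable with standard IPMI tooling once the BMC LAN channel is configured; for example with ipmitool, where the address and credentials below are placeholders:

    Code:
    # Sketch: what the base BMC offers without the RMM4, via IPMI over LAN.
    ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret chassis power status
    ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret sdr list    # sensors
    ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret chassis power cycle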

    I couldn't live without the RMM4 module as I don't have a USB crash cart adapter for my laptop (they're quite expensive) or a spare monitor, mouse and keyboard to leave downstairs.
     
  13. BluBoy

    BluBoy Member

    Joined:
    Jan 20, 2006
    Messages:
    1,899
    Location:
    Melbourne
    Hey mate,

    Thank you for all the info, it looks like a great little build!
    I'd like to use the Norco 230 as a 2RU NAS unit... Do you think it could fit more than 4 x 3.5" HDDs? Ideally I'm trying to find space for two more drives (perhaps in the 5.25" bay?).

    Also, with the new fans, is it quiet enough to use the case in a living area?

    Cheers
     
  14. OP
    OP
    kesawi

    kesawi Member

    Joined:
    Jul 3, 2012
    Messages:
    1,629
    Location:
    Brisbane
    Fitting 4 x 3.5" drives in the case is possible, but it will be tight. Looking at the photo of my case, the left two drive bays easily accommodate drives provided the CPU isn't too far forward. The second bay from the right conflicts with the USB3 header on my motherboard; I could fit a drive in this bay if I purchased a short flat-ribbon extension cable for this port. The rightmost bay is pretty tight with the excess cable from the PSU. You probably could squeeze a drive in there, and it would be easier if the 5.25" bay was unoccupied. Unfortunately the only PSU I could find in Australia with front-to-rear airflow was non-modular. I'm not sure if you can fit two 3.5" drives in a single 5.25" bay.

    The Noctua fans are great and I can hardly hear them. So far the temps haven't got high enough for the motherboard BIOS to spin them up. The PSU is reasonably quiet as well, although I can't say exactly how loud it is; I did try to check, but the fans in the NAS in my rack drowned out any noise the server PSU was making.
     
  15. decryption

    decryption Member

    Joined:
    Jun 27, 2001
    Messages:
    2,807
    Location:
    Melbourne
    Thanks for this post kesawi! I'm looking to buy the same Norco case and was just wondering how you went mounting it in your rack? Is just using the front screws enough or do you need the rails?
     
  16. OP
    OP
    kesawi

    kesawi Member

    Joined:
    Jul 3, 2012
    Messages:
    1,629
    Location:
    Brisbane
    It is reasonably heavy and I definitely think it needs something to support the rear. My rack came with a pair of full-length L brackets which I've used to support the case. The Norco rails for the case are 22 inches long from memory and therefore won't fit in a 600mm rack. I have also used these angle brackets to support the rear of my UPS: http://www.bunnings.com.au/zenith-100mm-zinc-plated-angle-bracket_p2762268. They work quite well and are much cheaper than buying the equivalent rack accessory.
     
  17. decryption

    decryption Member

    Joined:
    Jun 27, 2001
    Messages:
    2,807
    Location:
    Melbourne
    Thanks mate - the Norco 22" rails won't fit into my rack either (550mm inside, 22" is just a few mm too big). Now to devise a way to secure the rear :)
     
  18. OP
    OP
    kesawi

    kesawi Member

    Joined:
    Jul 3, 2012
    Messages:
    1,629
    Location:
    Brisbane
    Went and took some photos of my rack to show you. I've actually used the Zenith 63mm Zinc Plated Brackets. To fit them in 1RU I had to cut the top half of the bracket off. For 2RU gear I haven't had to touch the brackets. The L rails came with the rack.

    [IMG]

    [IMG]
     
    Last edited: Mar 1, 2015
  19. Turbine

    Turbine Member

    Joined:
    Jan 10, 2004
    Messages:
    450
    Location:
    WA
    Just read through your setup, amazingly well thought out and executed!
     
  20. OP
    OP
    kesawi

    kesawi Member

    Joined:
    Jul 3, 2012
    Messages:
    1,629
    Location:
    Brisbane
    Thanks. I've never actually ended up running all of the VMs I expected to. Currently running:

    Main Server: Hyper-V Host and ArgusTV Recorder
    • VM1: Backup Domain Controller, DNS, DHCP & WINS
    • VM2: File Server, Domain CA, WSUS, ArgusTV Scheduler
    • VM3: Plex
    Intel NUC: Primary Domain Controller, DNS, DHCP & WINS

    NAS: iSCSI and download services (SABnzbD, etc)

    The firewall is still running through my original WiFi router. I'm looking to do a separate build with enough grunt to run all of my traffic through a VPN, but I'm running out of space, and 1RU cases and PSUs are so damn expensive. Bought a 1RU server real cheap but forgot to double-check the specs and it's too long for my rack :upset:

    Current photo, taken just before I lifted the shelf up to make room for a 1RU firewall above the main server. I really should have gone for an 18RU rack :lol:

    [IMG]

    Surprisingly air temps inside the rack only tend to be 3 degrees above ambient in the room. I've run some flexible ducting from the exhaust fans to the outside of the room which has made a little difference.
     
