Hardware Virtualization for home server worth it?

Discussion in 'Business & Enterprise Computing' started by v81, Aug 31, 2007.

  1. v81

    v81 Member

    Joined:
    Jan 31, 2005
    Messages:
    642
    Location:
    SE Vic
    I'm putting together a bitbucket for home, and intend to run 1 virtual machine on a regular basis, with others occasionally as experiments, on what will likely be a 64-bit Server 2k3 host.

    Intel E2140 / Intel E4400 / Intel E6320
    2 gig DDR2
    Gigabyte GA-G33M-DS2R
    1 x 300 gig SATA OS / SWAP
    4 x 500 gig SATA RAID5 on ICH9 IntelMatrix
    Antec TruePower2 480w

    I'll likely use VMware at this point for familiarity's sake.

    My question is whether or not it is worth getting a CPU with hardware virtualization, and how significant a role cache will play.

    The machine will infrequently be serving a large volume of data, downloading and folding, whilst occasionally playing host to experiments in virtual machines.

    Is it worth the extra $70 for 2 meg cache over 1 meg?
    Is it worth an extra $140 for 4 meg cache and VT?

    Appreciate any knowledge and experience you people are willing to share.
     
  2. sonyx

    sonyx Member

    Joined:
    May 10, 2003
    Messages:
    1,232
    The E2140 does not have virtualisation, nor does the E4400.


    VT doesn't seem to make a difference in VMware in 32-bit anyhoo
     
  3. Chemix

    Chemix Member

    Joined:
    Dec 5, 2002
    Messages:
    348
    Location:
    17/F/bandcamp
    I find the biggest issue with virtualisation is generally IO. It's fine if you have a SAN and it is correctly set up. I've generally set up a lot of the ESX boxes with local storage as RAID 10.
     
  4. oli

    oli Member

    Joined:
    Jun 29, 2002
    Messages:
    7,266
    Location:
    The Internet
    In my opinion it is. You might not see yourself using VMs much right now, but down the track if you want to test another OS (Linux or Windows), having the VT features makes a big difference in my experience...

    Intel's lack of cheap CPUs with VT is what made me go down the AMD route when I built a new server for home, since I wanted a box I could run multiple VMs in on a 24/7 basis.
     
  5. wilsontc

    wilsontc Member

    Joined:
    Jan 1, 2004
    Messages:
    334
    Location:
    Melbourne
    Personally, I'd go for a low-end AMD setup, and a RAID 10 for the .vhd. Make sure you don't do mobo fake-RAID - if it goes haywire, you'll find it difficult to get your data back. I don't know if Windows can do software RAID 10, but Unix can :)
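    For what it's worth, on Linux it's basically one mdadm command. Rough sketch below (purely illustrative: the device names /dev/sdb through /dev/sde are placeholders, and it only prints the command rather than running it, since the real thing needs root and spare disks):

        import subprocess

        # Hypothetical 4-disk software RAID 10 - adjust devices to suit your box.
        devices = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]
        cmd = ["mdadm", "--create", "/dev/md0",
               "--level=10", "--raid-devices=4"] + devices

        print(" ".join(cmd))                 # look before you leap
        # subprocess.run(cmd, check=True)    # uncomment to actually build the array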
     
  6. lavi

    lavi Member

    Joined:
    Dec 20, 2002
    Messages:
    4,003
    Location:
    Brisbane
    I've set up VI3 infrastructure from scratch and we are running it on dual Xeon 3.6GHz, soon to be replaced with blades, and I can tell you this much: for home it doesn't matter much! What does matter is storage speed. At home, as soon as you pass about 3-4 VMs on SATA it will bog down too much.

    Also memory... you need lots of it, and it's pointless using more than 3GB without a "serverworks chipset" as it will get wasted, unless you go to 6GB and are prepared to lose 1GB.

    Get a few NICs or dual-port NICs, or a switch that supports VLANs; this way you can learn more.

    If you only do home use then anything goes really :)

    Remember, virtualization is the next big thing along with VoIP; get to grips with it now and collect the cash later.
     
  7. Chemix

    Chemix Member

    Joined:
    Dec 5, 2002
    Messages:
    348
    Location:
    17/F/bandcamp
    Yup, IO is probably the biggest thing that plagues virtualisation. One of the greatest things about ESX is raw device mapping. Set it up with blades and a SAN and you generally don't have to worry about IO issues anymore.
     
  8. lavi

    lavi Member

    Joined:
    Dec 20, 2002
    Messages:
    4,003
    Location:
    Brisbane
    I even SAN boot ESX and it's putting no stress on the SAN; the QLA4050Cs are very nice but expensive for home :)
     
  9. OP
    OP
    v81

    v81 Member

    Joined:
    Jan 31, 2005
    Messages:
    642
    Location:
    SE Vic
    Thanks guys.

    The deal is done and I got some surprising results from the Intel RAID.

    [IMG: RAID benchmark results]

    Sequential read looks great, seek isn't fantastic, but not a bad result from desktop drives on a home network dealing mostly in large read ops (disk images and media).

    I ended up skipping out on the VT and sticking with the cheaper E2140.
    Very happy, yet to fine tune but we'll get there.
     
  10. sonyx

    sonyx Member

    Joined:
    May 10, 2003
    Messages:
    1,232
    Read/write speed won't really make a difference on VMware if it's higher than, say, 60MB/sec.

    What will kill you is hosting 2 VMware images on that single raid array... it will have to seek between each image to write data, and at 13ms random seek... it's going to kill performance big time.


    2x RAID 1 arrays, or hell... 4x separate disks with a smarty-pants backup system, will wipe the floor with the current setup.
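    Rough back-of-envelope on that 13ms figure, just to put numbers on it (assumes every access is a full random seek, which is roughly what two images fighting over one array looks like):

        seek_ms = 13.0              # average random seek from the bench above
        iops = 1000.0 / seek_ms     # ~77 random operations per second for the whole array
        vms = 2                     # images sharing the one array
        print(round(iops), "random IOs/sec total,",
              round(iops / vms), "each if both VMs are hitting it at once")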
     
  11. lavi

    lavi Member

    Joined:
    Dec 20, 2002
    Messages:
    4,003
    Location:
    Brisbane
    That depends on the number of spindles; a 10x 15krpm RAID 6 array won't break a sweat.
     
  12. mintlin

    mintlin Member

    Joined:
    Jan 28, 2005
    Messages:
    76
    Location:
    Brisbane, QLD
    I'm using VMware Workstation on my laptop (with support for VT) for work and the guest operating system runs heaps smoother and faster than if I had VT disabled (through the BIOS).
    So, yup, if you can afford it, get the processor with VT support.
     
  13. OP
    OP
    v81

    v81 Member

    Joined:
    Jan 31, 2005
    Messages:
    642
    Location:
    SE Vic
    Thanks guys, but it's already a done deal.

    The focus for this machine is on redundant storage for my home media.
    This machine will also store backups for my desktop, my partner's notebook and, short term, for friends and clients.

    The virtualization is for a Linux webserver full-time, and an occasional second or third machine (random projects).

    I have a second partition on the primary OS drive where I intend to put the first virtual machine, and during idle I expect that disk won't be overly busy.
    Neither the host nor the guest OSes will be in an enterprise environment; it's just a home server in my garage.

    I have definitely enjoyed the replies, all of which are informative.

    BTW, I've been playing with the onboard RAID a bit more.
    Managed to get ~350MB/sec read on a 4-disk RAID 0 array.
    Not bad for onboard.
    Also found the ICH9R is capable of online expansion and RAID level migration.
    Does take near 20 hours to add a disk, but still handy nonetheless.
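    For the curious, the ~20 hours lines up with the controller having to re-stripe every block on the members; rough arithmetic (500 gig disks from the specs above, the rest is just division):

        disk_gb = 500                                   # each array member
        hours = 20                                      # observed time to add a disk
        mb_per_sec = disk_gb * 1024 / (hours * 3600.0)
        print(round(mb_per_sec), "MB/s effective re-stripe rate")   # roughly 7 MB/s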
     
  14. Aetherone

    Aetherone Member

    Joined:
    Jan 15, 2002
    Messages:
    8,525
    Location:
    Adelaide, SA
    Well bugger, because based on experience with a few hundred of these,
    you've got a 1 in 3 chance of RMA'ing the bastard within the year for either horrible cap squeal or outright death :(
    However, the odds of it killing anything inside are thankfully quite low :rolleyes:. Now that you've done it, I wouldn't sweat it too much; just keep an eye on it and perhaps keep a spare Corsair HX in the cupboard for quick changeovers :thumbup:
     
  15. Payload

    Payload Member

    Joined:
    Aug 31, 2007
    Messages:
    341
    Location:
    Perth
    Except that hardware-based, i.e. mobo RAID, is faster than software RAID. Once again you have to decide between speed and reliability. It's funny how you often have to make that choice with everything in computing and motoring.
     
  16. oli

    oli Member

    Joined:
    Jun 29, 2002
    Messages:
    7,266
    Location:
    The Internet
    That is definitely not always the case, in particular if you're using software RAID under Linux. There are plenty of tests that have been done to show this.

    Also, mobo RAID is really just software RAID, implemented in a driver. It's a lot closer to software RAID than it is to true hardware RAID (which costs a lot but has a dedicated processor to do the "hard work").
     
  17. Aetherone

    Aetherone Member

    Joined:
    Jan 15, 2002
    Messages:
    8,525
    Location:
    Adelaide, SA
    In 10 years I've never seen a desktop motherboard with proper hardware RAID. Certainly a few $1k+ server boards with SCSI RAID, and countless desktops with pretend (soft) RAID implemented in various add-in chips and chipsets, but never desktop and true co-proc-assisted hardware RAID in the one bundle.
     
  18. Snoops

    Snoops Member

    Joined:
    Jan 17, 2004
    Messages:
    1,458
    Location:
    Brisbane
    My home VMware server runs 2 x Raptors in RAID 0 for the base VHDs. A larger RAID-Z array is mapped via NFS & SMB, though that is its own server.

    The Raptors keep pace fine; though, that said, this is a home server with only 3-4 'guests' running.
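    If anyone wants to copy the storage-box side, the RAID-Z-shared-over-NFS bit is only a few commands; rough sketch (pool name, dataset and device paths are all made up, and it just prints the commands since the real ones need root on the ZFS box):

        cmds = [
            ["zpool", "create", "tank", "raidz", "/dev/sdb", "/dev/sdc", "/dev/sdd"],
            ["zfs", "create", "tank/vmstore"],
            ["zfs", "set", "sharenfs=on", "tank/vmstore"],
        ]
        for cmd in cmds:
            print(" ".join(cmd))    # swap print for subprocess.run(cmd, check=True) to apply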
     
  19. OP
    OP
    v81

    v81 Member

    Joined:
    Jan 31, 2005
    Messages:
    642
    Location:
    SE Vic
    Hey Aetherone,

    I'm with you on that.
    This is an older TP2, from well before I was aware of their issues.
    It was recycled; I wouldn't be caught dead buying one now.
    It's a pity Antec have slipped with their PSUs.
    I've RMA'd 3 out of 4 NeoHE 550s I've had.

    The case, PSU, 120 gig disk and RAM were all recycled.
     
  20. koopz

    koopz Member

    Joined:
    Dec 27, 2001
    Messages:
    2,008
    Location:
    Qld
    It's a damn pity Antec advertise 'a solid 3 yr warranty' on both the PSU and PC case boxes, yet no one here in Qld does more than a crappy 12-month warranty on current Antec stuff =(

    not even my fav Qld OCAU sponsor =(
     
    Last edited: Sep 8, 2007
