OCAU VMware Virtualisation Group!

Discussion in 'Business & Enterprise Computing' started by NIP007, Apr 16, 2008.

  1. lavi

    lavi Member

    Joined:
    Dec 20, 2002
    Messages:
    4,004
    Location:
    Brisbane
    And that's what I do: the vmdk gets compacted on its way in. I can then mount the vmdk and it appears as a folder if I really want to, but I don't.

    And the vmdk's fly in via the HBA :) About 10 min per VM, so 50 VMs are done very quickly depending on how loaded the consolidated backup machine is. It also pulls data from other sources around the place.

    No, best practice is to snapshot the VM and copy the vmdk and config files across; that way you can restore them very quickly. I only do file-level backups for the Exchange DB and a few SQL dumps, the rest is all done with Consolidated Backup.
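    For anyone who wants to picture that, here's a minimal Python sketch of the snapshot-then-copy step. The datastore paths are hypothetical and the vmware-cmd snapshot call is an assumption about the ESX 3.x service-console tooling, not the exact setup described above.

    #!/usr/bin/env python
    # Rough sketch of "snapshot the VM, then copy the vmdk and config files".
    # Paths and the snapshot command below are illustrative assumptions.
    import os
    import shutil
    import subprocess

    VM_DIR = "/vmfs/volumes/datastore1/mailserver"    # hypothetical VM folder
    DEST_DIR = "/vmfs/volumes/backup_lun/mailserver"  # hypothetical backup datastore
    VMX_PATH = os.path.join(VM_DIR, "mailserver.vmx")

    def snapshot_vm(vmx_path):
        # Take a quiesced snapshot so the base vmdk stops changing while we copy.
        # vmware-cmd is the ESX 3.x service-console tool; the argument order here
        # (name, description, quiesce, include-memory) is an assumption.
        subprocess.check_call(
            ["vmware-cmd", vmx_path, "createsnapshot", "backup", "pre-copy", "1", "0"])

    def copy_vm_files(src_dir, dest_dir):
        # Copy the vmdk descriptors/extents and the config files across.
        if not os.path.isdir(dest_dir):
            os.makedirs(dest_dir)
        for name in os.listdir(src_dir):
            if name.endswith((".vmdk", ".vmx", ".vmxf", ".nvram")):
                shutil.copy2(os.path.join(src_dir, name), os.path.join(dest_dir, name))

    if __name__ == "__main__":
        snapshot_vm(VMX_PATH)
        copy_vm_files(VM_DIR, DEST_DIR)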
     
  2. FrankGrimes

    FrankGrimes Member

    Joined:
    Jun 27, 2001
    Messages:
    818
    Location:
    Sydney
    What software are you guys using to replicate your VMs to a DR site, for those not using SAN/LUN-based replication?

    I've been reading about vReplicator and Double-Take so far. Anything else I should look at? Ideally I want continuous replication, or overnight at worst. The link to the DR site will probably be 10Mb/10Mb for around 8-10 VMs.

    Being able to do physical to virtual would be a bonus, but not essential.

    Unix boxes will be done with rsync.
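    As a rough picture of that rsync leg, here's a minimal sketch - the hostnames, paths and flags are illustrative assumptions, not a recommendation for any particular box.

    #!/usr/bin/env python
    # Minimal sketch of rsync-over-SSH replication of a few Unix boxes to a DR site.
    # Hostnames and paths are hypothetical examples.
    import subprocess

    SOURCE_DIRS = ["/etc", "/var/www", "/srv/data"]  # what to replicate
    DR_TARGET = "drhost.example.com:/replica"        # hypothetical DR box

    def replicate(src_dirs, target):
        for src in src_dirs:
            # -a preserves permissions/owners/timestamps, -z compresses over the
            # slow link, --delete keeps the DR copy an exact mirror of the source.
            subprocess.check_call(
                ["rsync", "-az", "--delete", "-e", "ssh", src, target])

    if __name__ == "__main__":
        replicate(SOURCE_DIRS, DR_TARGET)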
     
    Last edited: May 21, 2008
  3. yanman

    yanman Member

    Joined:
    Jun 4, 2002
    Messages:
    6,587
    Location:
    Hobart
    Any of you guys booting ESX from SAN? Or booting ESX 3i from flash media?
     
  4. OP
    NIP007

    NIP007 Member

    Joined:
    Aug 27, 2001
    Messages:
    1,690
    Location:
    Sydney
    It depends on what type of SAN you're using, I guess. We're going to set up SAN-to-SAN replication on our NetApp filers, so we'll be using NetApp's replication technology for this. There are quite a few options though, depending on what hardware you have. You can do SAN-to-NAS replication quite easily/cost-effectively as well.
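    For what it's worth, here's a very rough Python sketch of driving that kind of filer-to-filer replication over SSH. The filer names, volume names and the 7-mode style snapmirror commands are assumptions for illustration only, not our actual config.

    #!/usr/bin/env python
    # Rough sketch of initialising and updating a volume mirror between two
    # NetApp filers. Names and command syntax below are assumptions.
    import subprocess

    SRC = "filer1:vm_datastore"        # hypothetical source filer:volume
    DST = "drfiler1:vm_datastore_dr"   # hypothetical destination filer:volume

    def run_on_filer(filer, args):
        # Assumes admin SSH access to the destination filer.
        subprocess.check_call(["ssh", "root@" + filer] + args)

    def initialize_mirror():
        # One-off baseline transfer from the source volume to the destination volume.
        run_on_filer("drfiler1", ["snapmirror", "initialize", "-S", SRC, DST])

    def update_mirror():
        # Incremental update; schedule as often as the link can sustain.
        run_on_filer("drfiler1", ["snapmirror", "update", DST])

    if __name__ == "__main__":
        update_mirror()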

    Our servers all have 2 x 72GB 15K SAS drives in them with ESX Server installed, and the VMs boot up from the SAN. I believe a lot of hardware vendors will be releasing servers with inbuilt flash drives with ESX Server already installed... not far off now. I haven't experimented with just using a USB flash drive to boot off though... would save some $$$.
     
  5. yanman

    yanman Member

    Joined:
    Jun 4, 2002
    Messages:
    6,587
    Location:
    Hobart
    We're about to place our order for new SANs and a new blade chassis and servers. The server admins are looking at 3i and 3.5, but as far as I can see they'll need to use 3.5 for boot-from-SAN (the blade servers have QLogic HBAs). They were talking about booting 3i from a USB drive, but I think we lose some features by going with 3i instead of 3.5?

    Once our NDA lifts I'll discuss the details of the new toys :p
     
  6. mr.ilford

    mr.ilford Member

    Joined:
    Dec 26, 2007
    Messages:
    101
    Location:
    At work
    You don't lose features between 3i and 3.5, in a certain sense.

    The only thing you lose is the service console... We can't move to 3i yet, as a lot of our legacy scripts and checks still run out of the service console, but apart from that, it's identical.
     
  7. JohnnyDrama

    JohnnyDrama Member

    Joined:
    Feb 2, 2007
    Messages:
    111
    Location:
    Adelaide
    I've used:
    vReplicator - quite good and offers a very quick turnaround in a failover setting, by creating a separate VC datacenter and exact replicas of every VM, so all you have to do is power them on. However, the way it does its partial passes leaves a bit to be desired (full disk scan, then copy changed blocks - see the sketch at the end of this post), so you can't realistically squeeze the replication passes too close together without affecting the performance of your VM/host - I think that's getting resolved in a newer release. With nightly scans it works fine if you stagger them before/after the nightly backups.

    vRanger (not really replication, but works to a point) - can do a raw backup to a remote host (i.e. a single host at DR with internal or better storage, backing up the entire VM - the VM would need to be re-registered before it can be powered on at the remote location). It would probably be too slow in raw backup mode over a 10Mb link for nightly replication.

    Both products would struggle to get anything better than nightly replication IMO without impacting performance during the day - having said that, in a small environment it might not be unbearable. This would be true for any software of this type that uses VM snapshots.

    Just keep in mind that these types of products - while extremely good value for money - are far from true SAN replication. They do a good job for what they cost, but they are not set-and-forget. They'll normally require close monitoring and a bit of troubleshooting every now and then.
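    To picture why those partial passes hurt, here's a conceptual Python sketch of the "full scan, copy changed blocks" approach - not vReplicator's actual code, just the idea that every pass still reads and hashes the whole disk even when only a handful of blocks have changed.

    #!/usr/bin/env python
    # Conceptual sketch of a scan-everything, copy-what-changed partial pass.
    # Block size and file handling are illustrative; assumes the replica file
    # already exists and is the same size as the source.
    import hashlib

    BLOCK_SIZE = 1024 * 1024  # 1 MB blocks, an arbitrary choice for illustration

    def partial_pass(source_path, replica_path, previous_hashes):
        # Read every block of the source, but only rewrite blocks whose hash
        # differs from the previous pass. Returns the new hash map.
        new_hashes = {}
        with open(source_path, "rb") as src, open(replica_path, "r+b") as dst:
            index = 0
            while True:
                block = src.read(BLOCK_SIZE)
                if not block:
                    break
                digest = hashlib.sha1(block).hexdigest()
                new_hashes[index] = digest
                if previous_hashes.get(index) != digest:
                    dst.seek(index * BLOCK_SIZE)
                    dst.write(block)  # only changed blocks cross the wire
                index += 1
        return new_hashes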
     
  8. lavi

    lavi Member

    Joined:
    Dec 20, 2002
    Messages:
    4,004
    Location:
    Brisbane
    I do both, no issues apart from remembering to mask the boot LUN to ID 0 - most if not all HBAs will only boot off LUN ID 0.
     
  9. lavi

    lavi Member

    Joined:
    Dec 20, 2002
    Messages:
    4,004
    Location:
    Brisbane
    Last time I tried HP blades, the enclosures had serious problems with bandwidth - only around 10GbE per enclosure! That sucked, so we sent them back and got DL360s; no more problems.

    For one, I don't like blades, but that could be just me. To me they are limited, and the only reason they still sell is because people got brainwashed into thinking you need blades for virtualisation, when if you think about it, blades were introduced before virtualisation was really up there, and only to save space. Like, how many mezzanine cards can you put in one? I use 2x HBAs per ESX host as I have seen HBAs fail.
     
  10. Ninsei

    Ninsei Member

    Joined:
    Sep 3, 2001
    Messages:
    1,564
    Location:
    Melbourne
    Hey folks, probably an easy one.

    I have an HP DL385 G2 (with the P400 SAS controller) at home that I'm using for ESX (either 3.5 or ESX 3i, depending on what license I can legitimately get). I'm probably going to end up hosting this box in a DC somewhere, so I want to spec it up as far as possible before then.

    I've currently got 4x 72GB SAS drives, but I want to add some extra storage. I don't really need the speed of 10K or 15K SAS drives for these secondary drives, so I'm considering getting some 2.5" SATA drives. The controller supports them OK, and the P400 is specifically listed on VMware's list of controllers supported for running SATA drives. What I'm not 100% sure on is whether running both SAS and SATA on the one controller at the same time is supported. They will obviously be in separate RAID groups.


    My dastardly plan is to buy the cheapest drives possible just for the caddies (or even just caddies from fleabay) and fill them with some 200-250GB 2.5" SATA drives. That means ideally I'd end up with two RAID 1 mirrors for the "important" VMs and a single large RAID 5 for ISO storage, backups and testing VMs.
     
  11. lavi

    lavi Member

    Joined:
    Dec 20, 2002
    Messages:
    4,004
    Location:
    Brisbane
    We use DL380 G5s and it's possible - from memory I tried it on the 2nd port with SATA drives.

    Sad how none of the tier-1 vendors put lots of slots in their servers to take 3.5" SATA drives ;) methinks it's a conspiracy to make you buy their SANs lol.

    There is a server from HP with an "85" model number (read: Opteron) that will take 10 SATA drives in 2U, but good luck ordering it...
     
  12. FrankGrimes

    FrankGrimes Member

    Joined:
    Jun 27, 2001
    Messages:
    818
    Location:
    Sydney
    Thanks for the information - I will speak to EMC and see what options they have.
     
  13. joe_sixpack

    joe_sixpack Member

    Joined:
    Jan 21, 2002
    Messages:
    2,850
    Location:
    Brisbane
    Just be aware that if you do add external storage and want to put it in a DC, it will need to be rack mountable and you'll pay good money for the extra RU each month.
     
  14. yanman

    yanman Member

    Joined:
    Jun 4, 2002
    Messages:
    6,587
    Location:
    Hobart
    I don't know about previous versions, but the c7000 that we've specced has huge bandwidth into the aggregation points and from there to the blades themselves. Each BL460c has 3 slots; we've configured them with an additional dual-port GigE card and a dual-port 4Gbps FC HBA. You can put up to 8 I/O modules in (which need to match the mezzanine cards, obviously), however one option is a Cisco Ethernet module with StackWise (32Gbps) ports that allow it to stack with some 3750s, for instance. There are also the Virtual Connect options, or even InfiniBand QDR if you're into HPC.

    I've never seen an HBA fail personally, but even in our case it's not an issue thanks to VMware with HA, DRS and Site Recovery Manager. Using the Virtual Connect modules also means the blade server is basically a completely anonymous processing unit - the MAC addresses and WWNs can just belong to the slot.

    The space savings are important for sure, maybe not so much for people with loads of rack real estate, but the other practicalities that don't seem so important to begin with really do make a difference - cable management, equipment failures and maintenance, and energy efficiency all benefit from blades.
     
  15. Ninsei

    Ninsei Member

    Joined:
    Sep 3, 2001
    Messages:
    1,564
    Location:
    Melbourne
    Yeah, the DL385 G2 is Opteron, 2RU, with 8x 2.5" slots. Kinda restrictive (from a cost point of view) when you're trying to set up a kick-arse lab for home/testing and don't want to spend a packet!

    SANs sell themselves anyway when you want to do a "big" virtualisation setup, but for stuff like this where DAS is fine, HP's drive pricing does suck the big one.

    Very good point, but no external storage plans mate. The main reason for looking at DC hosting is just to get it out of the house. I don't plan to set up a rack full of gear at this stage (I look at virtualisation as basically my virtual rack), so paying someone to feed and water the box (UPS, redundant links etc) is a no-brainer. As above, the box can take 8 drives, so even if I only used 72GB drives that'd be heaps for what I'm planning! :) The idea is 4x72GB SAS drives and something like 4x200GB SATA drives, all internal, just to really overkill it.
     
  16. JohnnyDrama

    JohnnyDrama Member

    Joined:
    Feb 2, 2007
    Messages:
    111
    Location:
    Adelaide
    I agree with yanman - the c-Class HP blades are great kit. I spoke to an HP sales engineer about the bandwidth issue once, as a competitor had raised it as a concern. The actual throughput was a number so high I immediately dismissed it as a concern (and promptly forgot the number too - convenient, I realise).

    I had good results with ESX on BL685c blades with 6 NICs and 2 HBAs, with 32GB RAM (that was a solid year and a bit ago). During some testing I was able to put 60-70 VMs (a mix of prod and test) on them with no problems at all - though I never would have loaded them that heavily for standard prod purposes.

    ... and I highly recommend standing behind one that has all the optional fan kits fitted when you power it on :D

    However - blades don't make a lot of economic sense if you only need 4-5 hosts.
     
  17. lavi

    lavi Member

    Joined:
    Dec 20, 2002
    Messages:
    4,004
    Location:
    Brisbane
    I used to work for HP when they paid better, so I know what I'm talking about, mate. What the guy did was... never mind, good that you're happy!
     
  18. JohnnyDrama

    JohnnyDrama Member

    Joined:
    Feb 2, 2007
    Messages:
    111
    Location:
    Adelaide
  19. PsyKo-Billy

    PsyKo-Billy Member

    Joined:
    Jan 6, 2002
    Messages:
    2,712
    Location:
    Townsville
    VMware Overheads

    So how much overhead does VMware actually have?

    I went for a google and managed to dig up all kinds of forum posts and rubbish spouting numbers anywhere from 3% (naturally, from VMware themselves) to about 50% at some dodgy-looking Linux site.

    I figured I'd ask here. How much of a performance hit have you guys noticed? Specifically with VMware Server, as apparently ESX is faster.

    Ta.
     
  20. stalin

    stalin (Taking a Break)

    Joined:
    Jun 26, 2001
    Messages:
    4,581
    Location:
    On the move
    There was a Computerworld or CIO mag article I read a few months ago... performance of a VM was 90% of physical, with most of the drop in disk and network IO (as you would expect), but that was VMware Server, not ESX. I have yet to see a reliable benchmark source for ESX, Xen, XenSource etc.
     
