OCAU VMware Virtualisation Group!

Discussion in 'Business & Enterprise Computing' started by NIP007, Apr 16, 2008.

  1. neotheo

    neotheo Member

    Joined:
    Jan 26, 2005
    Messages:
    278
    Location:
    ~
    After doing a week-long (work-forced) VMware VCP course, my opinion of the product has changed.

    The management GUI, licensing and extra features (patch management, etc.) just overcomplicate something that is actually very simple. Am I the only one who thinks this?

    It does, however, win some points for the integrated PowerShell-based CLI. PowerShell makes (unfortunate) Windows admins like me a little bit happier on the inside.
     
  2. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    43,112
    Location:
    Brisbane
    I've had VMWare staff in my office saying exactly that, word for word. I've had them say flat out "this is the company motto".

    I'm not going to suddenly change my mind on it because you (someone who is not a VMWare employee or representative, AFAIK) are defending them. VMWare said it to me (and my employer), and they lied. And yes, I'm still pissed off about it. Call me strange, but I don't like being lied to. Boiling down all the other crap, that is the single biggest thing that is pissing me off about this whole ordeal.

    In short, no.
     
    Last edited: Jun 30, 2011
  3. Schnooper

    Schnooper Member

    Joined:
    Apr 20, 2007
    Messages:
    796
    I hate to tell you, but that is not their company motto. It never has been in the time I have been doing virtualisation (since ESX 1.52). I'm not going to say that won't change in the future - but who knows.
    I don't expect you to change your mind for me; I wouldn't for you. :) And correct, I don't work for VMware, nor am I a representative. Not calling you strange - I don't like being lied to either. I strongly disagree with some of your statements, and I don't like seeing a product that I think is great and works very well being shit-canned because of what was effectively a sales guy lying, plus possible misconfiguration (not saying the latter applies to you). That gives a bad rap to what could be a very good tool for some people.
    Some sales guys will continue to talk crap/lie/cheat for a sale at any cost, of that I am certain.
     
  4. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    43,112
    Location:
    Brisbane
    Keen to hear your results on the side by side tests.

    My objective analysis puts Xen at the front for performance, followed by KVM and VMWare a distant third.
     
  5. username_taken

    username_taken Member

    Joined:
    Oct 19, 2004
    Messages:
    1,352
    Location:
    Austin, TX
    I think part of the reason you get into arguments on here over it is because you tend to generalize the argument (or at least that's how it reads) to the point where it sounds like you're saying that Xen is a better choice than VMWare in all instances.

    Those of us who use VMWare successfully, with all the enterprise features like distributed switch, HA, DRS, FT, etc. for the roles we've given it, will always argue the point, because quite frankly Xen isn't mature enough to do those things easily and well.

    For instance (last time I tried), for workload balancing (Xen's equivalent of DRS) you need an external Windows server and 2x FC/iSCSI LUNs - one for heartbeat, one for metadata. For the distributed switch you need an external Windows server. Whereas with VMWare all of this is managed by your vCenter server, which can happily be a VM on the cluster.

    Hell, until Xen 5.6, something as simple as expanding an FC LUN was a difficult procedure; VMWare has a plugin for my SAN and can do the entire operation from start to finish.

    Oh, and I'm talking about the paid-for Citrix Xen, not the free Xen on Linux. I'm not sure what state that's in, but it's probably somewhat further along than their enterprise offerings.
     
  6. one4spl

    one4spl Member

    Joined:
    Dec 9, 2005
    Messages:
    428
    Location:
    Jamboree Hts, Brisbane
    So, like any good nerd debate, everyone talks like their product of choice (Linux/Unix, iPhone/Android, Xen/VMware) should magically destroy all other comers and be the only valid choice for everyone.

    But that's clearly horse-exhaust.

    Profit comes from owning your corner. VMware own the SME, but they'd best look out for Microsoft. Xen own the hack-it-to-almost-be-part-of-my-app megascale crowd. KVM own the Linux paravirt crowd. The dozens of other smaller players fill in lots of other gaps.

    To my mind, http://code.google.com/p/ganeti/ has a good shot at becoming an excellent SME platform where you just buy compute power modularly, rather than separate processors and storage. I'm probably wrong, however.

    They all have their place in the market, even if that place is a very tiny island of happy customers.
     
  7. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    43,112
    Location:
    Brisbane
    I quite clearly stated from the beginning that VMWare is a poor choice for customers who want I/O performance.

    If you're reading that to mean *all instances*, then the fault is not mine. I've made it very clear what I'm arguing, and what my beef with the product (and more importantly, the vendor) is.

    FT, HA, DRS etc. are all of no concern to me. I've got apps that do that themselves (and much better, I might add), and I don't require it in my hypervisor.

    It probably comes as no surprise that I subscribe to the UNIX design philosophy - do one thing, and do it well. I don't want my hypervisor to be a jack of all trades. I just want it to be a bloody good hypervisor. All the other features you mention I've had working faultlessly for decades, long before one company added them to one product. (Ironic, too, that you speak of "maturity" of the product in that regard.)

    one4spl hits the nail on the head - target market is a pretty crucial factor here. If you're an SME shop with feature-poor applications and systems, then VMWare is going to answer a lot of your questions. I've been working with Enterprise Linux for a long time now, and the features you mention above have all been a standard for me long before commodity x86 hypervisors existed. VMWare sure as hell didn't invent the concept of geo-scale clustering and internet-wide instant failover.

    So again, I make my issue very clear: I need a hypervisor that delivers performance as close as possible to bare metal. I've already got high availability, fault tolerance, clustered file systems and instant recovery sorted with a massive suite of tools that were in production back when VMWare was still writing slow desktop sandboxes for OS developers. The issue is that (in a scary parallel to Microsoft) VMWare have taken that old, slow, clunky desktop product and jammed it into the Enterprise, wrapped up in layers of marketing. Sure, it has a sexy interface and some one-click wizards to set up features real enterprise admins have been using for decades. But the downside is it's as slow as a cold turd running uphill. And that last bit means the types of businesses I work for can't and won't use it. (And again in parallel, it's why many of them don't use server-side Microsoft products either.)
     
    Last edited: Jul 4, 2011
  8. Schnooper

    Schnooper Member

    Joined:
    Apr 20, 2007
    Messages:
    796
    VMware is not an SME-only product. Just because Elvis had a bad experience and couldn't get the IOPS out of a VMware environment doesn't mean that VMware is not suited to large enterprise. Hell, I have done many large-scale enterprise rollouts for the big end of town with very little in the way of performance issues. I have had lots of feedback stating that the servers run better as virtual machines than the bare metal servers they came from.

    Just because Elvis's application didn't virtualise well, it doesn't mean that others won't. I can think of a number of large corporates that run Exchange 2010 in a virtual environment on VMware products with thousands of mailboxes, and it works fine. It's about planning and understanding what the workload will look like. Elvis has stated himself that there was very little planning for their VMware implementation.

    I am sure we all know of legacy applications that various clients need for day-to-day business. Large enterprises also have a lot of this stuff in their environments. Sometimes the best way to make this highly available is to use technologies such as VMware HA and FT, when the application can't support high availability natively. And sometimes it's not - I would much rather let the application do the high availability, but sometimes there isn't an option within the app for that, so VMware handles it for its clients very well. That doesn't mean it's not suited to enterprise clients; it goes to show that it's very suitable for enterprise clients. But again, planning is the key.

    As I have said before, virtualisation isn't just about performance. It offers a lot more than that.
     
  9. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    43,112
    Location:
    Brisbane
    What do you consider "large scale"? (Number of hosts, number of guests?). I'm keen to know real numbers.

    Why do people use Exchange as a benchmark? I ran quite literally four-thousand-plus mailboxes off dual Pentium 200 boxes back in the day (back when I was a junior, and still cared about email servers). Mail has to be one of the most low-end, pedestrian systems around. I mean, yeah, Exchange is a bloated pig of a mail system, but it's not amazing to virtualise it (even at "thousands of users" scale). In fact, it's pretty mundane stuff (stuff I don't waste my senior guys' time on, that's for sure).

    The same VMWare rollout I was talking about earlier ran Exchange 2010 with 2000 user mailboxes without a hiccup. But who gives a shit about that low end stuff? A real enterprise app (not common as mud email) killed it in seconds.

    I have to say, if people are going to use something as banal as Exchange as an example of anything exciting, large, or performance-hungry, it says a lot.

    The example I've been speaking about to date was running finance systems for 1.7 MILLION users. VMWare said flat out their stuff could handle it. It could not. When we called them out on it, they continued to say it could be done despite the evidence facing them. They lied to us. End of story.
     
    Last edited: Jul 4, 2011
  10. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    43,112
    Location:
    Brisbane
    Spot on, bro. :)

    I got close to convincing the powers-that-were to go z/VM, but they got cold feet at the last minute. I think it had a lot more to do with their rather incestuous/cronyistic relationship with HP, and less to do with any of the other reasons they gave.

    In the end the app went back to bare metal HP-UX on IA64; it has been on HP-UX for 10+ years now (on PA-RISC before that, since moving off the mainframe 10 years ago).
     
    Last edited: Jul 4, 2011
  11. ewok85

    ewok85 Member

    Joined:
    Jul 4, 2002
    Messages:
    8,105
    Location:
    Tokyo, Japan
    So I have an SQL VM on ESXi that runs like crap, so what should I be using? Xen? z/VM?
     
  12. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    43,112
    Location:
    Brisbane
    z/VM would be slight overkill. :)

    First I'd look at the delta between bare metal and VMWare. Then, once you've followed all of the VMWare best practice guides in their KB, I'd put in a support ticket with VMWare to make sure you're doing everything right from their end (and if they tell you you're full of it, use the bare metal evidence to prove otherwise). If after that you still aren't getting any love, definitely try out Xen and see how it compares to both bare metal and VMWare.
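
    As a rough sketch of that kind of bare-metal-versus-VM check (the device name and test path below are placeholders, and hdparm/dd are blunt instruments next to a proper benchmark tool or the real workload), the same commands can be run on the physical box and again inside the guest against the same storage:

    Code:
    # sequential read throughput, cached and uncached
    hdparm -tT /dev/sda
    
    # sequential write with a sync so cached writes are counted
    time sh -c "dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 && sync"
    rm -f /tmp/ddtest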

    (I'm assuming you're paying for VMWare support here - if you're not and you're running free stuff, then you don't have much to lose by jumping ship).
     
  13. ewok85

    ewok85 Member

    Joined:
    Jul 4, 2002
    Messages:
    8,105
    Location:
    Tokyo, Japan
    :Pirate: Yarrr, when I actually have spare time to do this I'll give it a go. Would be hilarious if I can get better performance on older hardware...
     
  14. Schnooper

    Schnooper Member

    Joined:
    Apr 20, 2007
    Messages:
    796
    Number of hosts isn't really a good way to measure the size of an environment. We pretty much worked on scale-up rather than scale-out. Most hosts (customer dependent, of course) would be 4-way Xeons with 256GB or 512GB of RAM, with cluster sizes around 8 hosts. Some customers would have single clusters, some multiple (more than just two). I have also done designs on 2-way boxes with 96GB per host that started quite small; the customer loved it so much it was up to around 50 hosts last time I looked, and still growing. Of course these figures varied depending on the customer requirements. dVS/host profiles make adding hosts a very simple task. You would be surprised how much of the stuff you use on a day-to-day basis runs on VMware. I have to be careful here - I can't go into too much detail, because of NDAs and keeping my job.

    We generally found the amount of RAM to be the limitation first, then what the SAN could handle in IOPS - latency would start to rise and the controllers on the SAN would start to max out. I have also found that fibre channel SANs produce better results than IP-based storage. We never found the hypervisor to be the bottleneck for storage IO.
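
    For anyone wanting to see where that ceiling sits on a given host, the usual starting point is esxtop's disk views; a minimal sketch (run on the ESXi host itself, or remotely via resxtop):

    Code:
    # interactive resource monitor on the ESXi host
    esxtop
    # press 'd' for disk adapter, 'u' for disk device, 'v' for per-VM disk
    # DAVG = latency at the array, KAVG = latency added by the hypervisor,
    # GAVG = total latency the guest sees; sustained high DAVG points at the
    # SAN/controllers, high KAVG at the hypervisor or queue depths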

    With Exchange, we would be at around 8k to 10k users per host, spanning several hosts, running within SLA and with a good user experience. Depending on the users and their usage patterns, it can be a very heavy application. There is a lot more in the product these days than when Pentium 200s were around.

    Just on SQL: depending on the code in the database and how lazy the programmer was, it can be very IO heavy. Make sure the backend storage is up to the task.
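
    A quick way to check that from inside a guest is to watch per-device latency and queue depth while the database is under load; a sketch assuming a Linux guest with sysstat installed (a Windows SQL box would use the equivalent PhysicalDisk counters in perfmon instead):

    Code:
    # extended device stats sampled every 5 seconds
    iostat -x 5
    # r/s + w/s give IOPS, await gives latency in ms, avgqu-sz the queue
    # depth; await sitting well above what the array is rated for suggests
    # the backend storage, not the hypervisor, is the limit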

    Also, interesting point about some vendors and their relationships. I hear your pain…….
     
  15. bsbozzy

    bsbozzy Member

    Joined:
    Nov 11, 2003
    Messages:
    3,925
    Location:
    Sydney
    What storage are you using?
     
  16. ewok85

    ewok85 Member

    Joined:
    Jul 4, 2002
    Messages:
    8,105
    Location:
    Tokyo, Japan
    Local storage - 6x 136GB 10k SAS in RAID5. Wasn't my doing :(

    Given a weekend I could probably move it off and reconfigure the storage to something like RAID10 across the drives, and drop in two spare 73GB drives I have and use them in RAID1 for ESXi?

    The entire thing, including Windows, the production and test DBs, and 2 days of backups, is less than 80GB. Maybe I should buy 5x 15K SAS drives and run them in RAID10 plus a hot spare, then use the other drives in my fake-SAN as other VM datastores?
     
    Last edited: Jul 4, 2011
  17. username_taken

    username_taken Member

    Joined:
    Oct 19, 2004
    Messages:
    1,352
    Location:
    Austin, TX
    Gah, RAID5 for a DB sucks balls - worse if you're under a VM hypervisor with your OS drive also on that same RAID5. I would throw an SD card in there to boot ESXi from and rebuild your RAID to 10. If you had the drives to spare you could also throw in a RAID1 and host your OS drive on that, to separate your OS IO from your DB IO.

    What RAID card are you using? I assume you've got a decent vendor card with at least 256MB of cache? Got a SAN you can link it over to? I always cringe at virtualized servers without virtualized storage, especially for important stuff (if it's a dev box then maybe it's not such an issue).
     
  18. username_taken

    username_taken Member

    Joined:
    Oct 19, 2004
    Messages:
    1,352
    Location:
    Austin, TX
    Benchmarks... I know some vendors have issues with people posting benchmarks, so here are some benchmarks from a 'non-Xen' virtualization platform...

    centos 5.5 64bit
    1024 MB RAM
    2 CPU
    Single 50GB Virtual HD (FC attached SAN) @ /dev/sda
    io scheduler set to noop (set as sketched below)
    virtualization vendor tools installed
    everything else as default
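
    For reference, the noop scheduler would typically be set per-device along these lines (a sketch - sda is assumed, and making it persistent varies by distro):

    Code:
    # check which scheduler the virtual disk is currently using
    cat /sys/block/sda/queue/scheduler
    # switch to noop for this boot; add elevator=noop to the kernel command
    # line (or use a udev rule) to make it stick across reboots
    echo noop > /sys/block/sda/queue/scheduler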

    benchmarks done quickly with hdparm, dd, and bonnie++.

    Code:
    # hdparm -tT /dev/sda
    
    /dev/sda:
     Timing cached reads:   30176 MB in  2.00 seconds = 15118.31 MB/sec
     Timing buffered disk reads:  2012 MB in  3.00 seconds = 670.63 MB/sec
    
    
    # time sh -c "dd if=/dev/zero of=out bs=8k count=100000 && sync"
    100000+0 records in
    100000+0 records out
    819200000 bytes (819 MB) copied, 1.95472 seconds, 419 MB/s
    real    0m4.115s
    user    0m0.026s
    sys     0m1.217s
    
    
    
    # time sh -c "dd if=/dev/zero of=out bs=8k count=1000 && sync"
    1000+0 records in
    1000+0 records out
    8192000 bytes (8.2 MB) copied, 0.007879 seconds, 1.0 GB/s
    real    0m0.156s
    user    0m0.000s
    sys     0m0.101s
    
    
    # time sh -c "dd if=/dev/zero of=out bs=64k count=20000 && sync"
    20000+0 records in
    20000+0 records out
    1310720000 bytes (1.3 GB) copied, 4.42014 seconds, 297 MB/s
    real    0m6.514s
    user    0m0.015s
    sys     0m1.586s
    
    
    # time sh -c "dd if=/dev/zero of=out bs=1024k count=1300 && sync"
    1300+0 records in
    1300+0 records out
    1363148800 bytes (1.4 GB) copied, 4.4939 seconds, 303 MB/s
    real    0m6.857s
    user    0m0.002s
    sys     0m1.523s
    
    and then bonnie++

    Code:
    # bonnie++ -d /tmp/test -n 512
    Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    localhost.locald 2G 108717  99 246273  27 84193   7 64992  57 127806   4  7399   9
                        ------Sequential Create------ --------Random Create--------
                        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                    512 98438  86 689422 100 20952  15 84056  73 930325 100  8737   7
    localhost.localdomain,2G,108717,99,246273,27,84193,7,64992,57,127806,4,7398.6,9,512,98438,86,689422,100,20952,15,84056,73,930325,100,8737,7
    
    and a bonnie++ HTML report screenshot (image not reproduced here)
     
  19. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    43,112
    Location:
    Brisbane
    While I appreciate the effort, benchmarks of just one set of software don't show much. If you've got the time to do other products on identical hardware, it would be greatly appreciated.

    I have some benchmarks on my site, but they are quite old now (ESXi 3.51 versus Xen 2.3, from memory). I ran Windows Server 2003 R2 64-bit and RHEL5 64-bit with just some OpenSSL benchmarks (some memory I/O, but mostly CPU-bound - zero disk). At the time, Xen was a minimum of 4% and a maximum of 14% faster than VMWare ESXi on average across all of OpenSSL's algorithms (interestingly enough there were one or two that VMWare won, but amusingly they were algorithms like "IDEA", which I've never seen in practical use). Both hypervisors have improved greatly since then, but the end result is much the same these days.
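
    For anyone wanting to repeat that kind of CPU-bound comparison, it's easy to reproduce; a sketch only (the algorithm list here is illustrative - run the identical command on bare metal and on each hypervisor with the same vCPU count):

    Code:
    # single-threaded crypto throughput across a few representative algorithms
    openssl speed aes-256-cbc sha256 rsa2048
    # or load every vCPU with parallel copies
    openssl speed -multi 4 aes-256-cbc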

    Sadly I can't divulge the benchmarks of the particular app I was using that demonstrated the disk I/O bottlenecks for a stack of reasons.

    And speaking to anyone who wants to join in (I see username_taken has already done this): in the name of fairness, ensure you've got all of the optimised virtual hardware and up-to-date drivers for said virtualised hardware on all systems. VMWare struggles greatly with generic virtualised hardware and no "VMWare Tools" installed, and it wouldn't be fair to benchmark without these set up.
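
    A quick way to confirm a Linux guest really is on the optimised virtual hardware rather than the generic emulated devices (a sketch - it assumes the vmxnet3 NIC and PVSCSI controller have been selected in the VM's settings):

    Code:
    # confirm the tools daemon is installed and report its version
    vmware-toolbox-cmd -v
    # confirm the paravirtual NIC and SCSI drivers are actually loaded
    lsmod | grep -E 'vmxnet3|pvscsi'
    # generic e1000/LSI devices showing up here instead means the VM is
    # still presenting emulated hardware
    lspci | grep -iE 'vmware|ethernet|scsi'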

    Yup, this is common for VMWare. I've found the same too. It really doesn't like IP storage at all. NFS is slightly better than iSCSI within ESXi for some bizarre reason (4.0 and 4.1 fixed some iSCSI issues, but not nearly enough to make it worthwhile yet), but it's still all pants in ESXi if you're not on FC.

    Interestingly enough, KVM doesn't appear to mind. Using 10Gbit/s iSCSI gave me largely identical speeds to 8Gbit/s FC on that setup, with only a very slight (sub-10% at worst) latency jump for iSCSI (due more to the inherent design of iSCSI itself). Given the ever-plummeting price of 10GbE kit, that's great news for me. And by the time FC goes 16Gbit/s, I'll hit 40GbE. :)
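
    For context, the iSCSI side of a KVM host like that is typically just the stock open-iscsi initiator pointed at the array; a minimal sketch (the portal address below is a placeholder):

    Code:
    # discover targets on the array and log in (open-iscsi)
    iscsiadm -m discovery -t sendtargets -p 192.168.10.50
    iscsiadm -m node --login
    # the LUN then shows up as an ordinary block device on the host and can
    # be handed to guests as a virtio disk for near-native throughput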
     
    Last edited: Jul 5, 2011
  20. Schnooper

    Schnooper Member

    Joined:
    Apr 20, 2007
    Messages:
    796
    Well actually, that was in reply to a question that Elvis asked, not you. But I appreciate you thinking it's all about you. Just to make you feel special: it depends on the workload and the hosts - in particular, the amount of memory is the limiting factor. I have seen everything from 3:1 through to 40:1 (and sometimes higher), again depending on workload. There is no definite number on this; it varies greatly.

    I think I have been qualifying my numbers by giving as much detail as I think is suitable for a public forum. IT in Australia is pretty small, and I don't think it's appropriate to spill all the beans here.

    As I said, there is a range of customers, all with different needs and therefore different designs. I also said that the cluster size is generally around 8 hosts, with multiple clusters. If those hosts are quad-proc Xeons with 512GB per host, it's hardly a tiny environment as you state.

    My last post stated a lot of numbers, perhaps more than a lot of others making large claims. Maybe you want to have a look at what you have been claiming over the past week or so.
     
