OCAU VMware Virtualisation Group!

Discussion in 'Business & Enterprise Computing' started by NIP007, Apr 16, 2008.

  1. NIP007

    NIP007 Member

    Joined:
    Aug 27, 2001
    Messages:
    1,690
    Location:
    Sydney
    Hi guys,

    So many of us have, are in the process of, or are thinking of going down the VMware virtualisation path.. I thought it'd be a good idea to set up a group :thumbup: Here we can learn from each other, share ideas, and even post up our setups (for those who are game enough) :D I'm at the implementation stage of our VMware virtualisation project and it has been a lengthy process... researching all the products, software, hardware, getting quotes from different hardware and software vendors, organising training, speaking to VMware techs and consultants.. if we all share our experiences we can make the process a lot easier for others. Feel free to join, and good luck to everyone who is new to virtualisation. Just copy the signature and assign yourself a member number :)

    Note: This is more for the enterprise range of products such as ESX, but VMware Fusion etc users feel free to join :)

    I will be posting up the solution I'm implementing soon.
     
    Last edited: Apr 16, 2008
  2. Tony

    Tony New Member

    Joined:
    Jun 26, 2001
    Messages:
    9,987
    Location:
    Sydney, NSW, Australia
    good good

    interested in a comparison of the free versions

    i'm leaning more towards Xen given its polish

    also downloaded Virtual Iron but it's like the black sheep no one knows about
     
  3. OP
    OP
    NIP007

    NIP007 Member

    Joined:
    Aug 27, 2001
    Messages:
    1,690
    Location:
    Sydney
    The Xen solution looks VERY interesting.. I might be going to the Citrix roadshow later on this year just to see what it has to offer.

    I'll be posting up my 'profile' in the near future which will include stuff like:

    - Number of users.
    - Number of offices and location.
    - Number of physical servers before virtualisation.
    - Major applications/systems.
    - Reason for virtualisation.
    - Outline of infrastructure.. i.e. servers, SAN, backup procedure etc.

    Of course some people will not feel comfortable disclosing this info, so that's understandable :thumbup:
     
  4. mr.ilford

    mr.ilford Member

    Joined:
    Dec 26, 2007
    Messages:
    101
    Location:
    At work
    I'll be in

    We run 6 clusters, largest is at 22 hosts (which we're about to split in half). Brisbane, Melbourne, Sydney, Hong Kong, Phoenix and installing a cluster somewhere in Sweden at the moment (remote hands).

    All using ESX 3.5.0.

    All IBM kit (LS41 and HS21 blades). IBM ES and DS storage. Backed up via Commvault (VCB or iDataAgent depending on the VM).

    Virtualising over the last two years has allowed us to clear out two datacenters in Brisbane, mostly old legacy systems. Consolidated down to one cluster with 8 hosts.

    It's saving us time, money, and power.

    Wow, I should work for VMware marketing ;)
     
  5. OP
    OP
    NIP007

    NIP007 Member

    Joined:
    Aug 27, 2001
    Messages:
    1,690
    Location:
    Sydney
    - Number of users: 70
    - Number of offices and location: 4 across Australia, head office in Sydney
    - Number of physical servers before virtualisation: 12
    - Major applications/systems: Exchange, ERP system, SQL databases, web server which is hosted internally, Citrix servers
    - Reason for virtualisation: server consolidation, ease of management, quicker response times for setting up test environments etc, high availability, cost savings, plus it all ties into our disaster recovery plan
    - Outline of infrastructure: 3 x HP ProLiant DL380 G5s, dual quad-core 3.16GHz processors, 32GB of memory and an additional quad-port gigabit NIC in each, all HP ProCurve switches. Two of the servers will reside in the head office; the 3rd will be set up in our second largest site as a backup. The second site will be our DR site (we will have data synching over the WAN). NetApp FAS3040 SAN (iSCSI setup, not FC), VMware ESX Server 3.5, VirtualCenter 2.5. Backup solutions: a mixture of NetApp, Backup Exec and other third-party backup solutions for snapshots/replication etc which we haven't decided on as yet.
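    For anyone sketching a similar setup, a rough back-of-envelope memory check can help confirm the consolidation will fit with HA headroom. This is only a sketch: the per-VM memory figure and hypervisor overhead below are assumptions, not measurements from any real cluster.

```python
# Rough HA capacity check for a small ESX cluster (illustrative only).
# Assumes a flat average VM memory size and a fixed per-host hypervisor
# overhead; real sizing would account for reservations, CPU, and storage.

def vms_per_cluster(hosts, host_mem_gb, avg_vm_mem_gb,
                    ha_spare_hosts=1, hypervisor_overhead_gb=2):
    """How many VMs fit if the cluster must survive losing `ha_spare_hosts` hosts."""
    usable_hosts = hosts - ha_spare_hosts
    usable_mem = usable_hosts * (host_mem_gb - hypervisor_overhead_gb)
    return usable_mem // avg_vm_mem_gb

# Example: three 32GB hosts with N+1 headroom, assuming ~4GB per VM --
# comfortably covers consolidating a dozen physical servers.
print(vms_per_cluster(hosts=3, host_mem_gb=32, avg_vm_mem_gb=4))  # → 15
```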
     
    Last edited: Apr 22, 2008
  6. Doc-of-FC

    Doc-of-FC Member

    Joined:
    Aug 30, 2001
    Messages:
    3,296
    Location:
    Canberra
    nice template:

    - Number of users:
    ~200 desktops & ~ 20 public websites
    - Number of offices and location:
    ~63 sites across Australia, network hosting in Ultimo / Canberra
    - Number of physical servers before virtualisation:
    29
    - Major applications/systems:
    Exchange, JBoss app servers, Oracle 10g DB, Oracle BI, Windows AD, Windows file server (now CIFS on our NetApp SAN), WSUS, Sophos, MIMEsweeper PMM, Squid proxy, plus test and dev environments for our app servers and DB.
    - Reason for virtualisation:
    consolidate, HA + DR, reduce long-term operating costs (power consumption, hardware maintenance, server rebuilding due to hardware EOL); power consumption was 7kW plus 2 coolers running 24x7
    - Outline of Server infrastructure:
    4 x Dell 6850s, each server with 2 x PCIe iSCSI HBAs, running ESX 3.5
    - Outline of Network infrastructure:
    2 x 3750 in a switch stack feeding the NetApp controllers in an etherchannel setup. The NetApp SAN connects via 2 x 1Gbit etherchannel links carrying two VLANs: one for iSCSI <-> ESX traffic at 9000-byte MTU, the second at standard 1500-byte MTU (to serve punters CIFS and the like)
    - Outline of SAN infrastructure:
    FAS2050A with SnapMirror, NearStore + A-SIS, which allows very nice space consolidation from dedupe technology. 20 x 300GB SAS disks on netapp1, 14 x 750GB SATA disks on netapp2; SnapMirror lets us archive virtual machines and other things from the SAS disks to the SATA disks.
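    As a rough illustration of why a dedicated 9000-byte-MTU VLAN for iSCSI helps (ignoring iSCSI's own PDU headers): per-frame Ethernet/IP/TCP overhead is fixed, so larger frames carry a bigger fraction of payload. The overhead constants below assume plain IPv4/TCP with no options.

```python
# Wire-efficiency estimate for iSCSI-over-TCP at different MTUs.
# Jumbo frames amortise the fixed per-frame overhead over more payload.

ETH_WIRE_OVERHEAD = 38   # preamble 8 + Ethernet header 14 + FCS 4 + inter-frame gap 12
IP_TCP_HEADERS = 40      # IPv4 header (20) + TCP header (20), no options

def wire_efficiency(mtu):
    """Fraction of on-the-wire bits that are TCP payload at a given MTU."""
    payload = mtu - IP_TCP_HEADERS
    return payload / (mtu + ETH_WIRE_OVERHEAD)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {wire_efficiency(mtu):.1%} payload")
# MTU 1500: 94.9% payload
# MTU 9000: 99.1% payload
```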
     
  7. lavi

    lavi Member

    Joined:
    Dec 20, 2002
    Messages:
    4,004
    Location:
    Brisbane
    I'm in :)

    44 servers before, 4 servers now; ESX 3.5, ProCurve switching, EMC SAN over iSCSI

    can't say much more
     
  8. FrankGrimes

    FrankGrimes Member

    Joined:
    Jun 27, 2001
    Messages:
    818
    Location:
    Sydney
    Cool idea.

    - Number of users: 130
    - Number of offices and location: 9 Across Australia and New Zealand
    - Number of physical servers before virtualisation: 10 (can't virtualise branch servers which sucks..)
    - Major applications/systems: Exchange 2003, SQL databases, SharePoint, Citrix, Domain Controllers, Trend Micro Suite, Applix TM1, System Center Config Mgr (soon) etc. ERP still runs on AIX though (Pronto).
    - Reason for virtualisation: Same as everyone else really.. DR, Less Physical servers, Easy to manage, redundancy, overall effort reduced, cost savings in the long run.
    - Outline of infrastructure: 2 x HP DL360R05 E5430 quad-core, 16GB RAM each + EMC AX4 iSCSI SAN with a mixture of SAS and SATA disks for different applications, ProCurve switching, VMware ESX 3.5 etc

    I'll get some pics at some stage as well..

    Also, something I'd be interested in is Backup Solution!

    We are just using Backup Exec, but going to a Commvault Demo today.
     
    Last edited: Apr 17, 2008
  9. Kodaz

    Kodaz Member

    Joined:
    Apr 16, 2004
    Messages:
    938
    Location:
    Brisbane 4124
    Number of users:
    ~170
    Number of offices and location:
    See attached visio.
    Number of physical servers before virtualisation:
    Only 4; consolidation wasn't the main reason for virtualising
    Major applications/systems:
    See attached visio.
    Reason for virtualisation:
    High availability, disaster recovery, quicker rollout of new services, expansion capabilities to suit ongoing acquisition (in 10 months we have doubled in size).
    Infrastructure:
    See attached visio.
    Future:
    Second data centre with a backup ESX host and VCC on their own iSCSI SAN.

    [Attached: Visio diagram of offices, applications and infrastructure]
     
  10. ACA:Sleeper

    ACA:Sleeper Member

    Joined:
    May 15, 2006
    Messages:
    424
    Location:
    Melbourne's SE Suburbs
    I've just come back to sysadmin after a few years suffering with application support, so I'm pretty rusty. In the past I've only used basic low-end server hardware and some Dell NAS hardware. These were pretty straightforward DCs and file servers.

    This project is a merge of several schools, stats are of all campuses combined;

    - Number of users: 2300+ users, 800+ odd desktops and 250+ laptops
    - Number of offices and location: 2 school campuses, soon to be joined by 10Gb fibre
    - Number of physical servers before virtualisation: 20+ at a rough guess. I have not even physically found them all yet.
    - Major applications/systems: DCs, massive amounts of data, SQL databases, internal and external web pages, several specialist database servers, Ghost, WSUS, AV server, ISA
    - Reason for virtualisation: server consolidation, ease of management, quicker response times for setting up test environments etc, high availability, cost savings, plus it all ties into our disaster recovery plan (What NIP007 said) plus ease of migration at servers EOL
    - Outline of infrastructure: Blank Canvas, help me out fellas.

    They have HP ProCurve switching in place, and use mostly HP server equipment, so at this stage HP is preferred as a solution. Looking at VMware ESX and a SAN setup, possibly replicated to a second set of the same hardware at the remote campus for failover; I'll have to investigate whether it could also load-balance at the SAN and VM level.

    I'm on a tight time restriction, and a fairly limited budget, so it needs to be right the first time. My 3rd day on the job they had me meeting with architects. :rolleyes:

    I'm interested in any advice you can offer. Even a starting point for my research.
     
    Last edited: Apr 17, 2008
  11. lavi

    lavi Member

    Joined:
    Dec 20, 2002
    Messages:
    4,004
    Location:
    Brisbane
    time restriction plus limited budget plus being new in the job = failure every time!

    explain to them that you need more time and go through the proper procedures; don't just yank things in and expect them to magically work :)
     
    Last edited: Apr 17, 2008
  12. PsyKo-Billy

    PsyKo-Billy Member

    Joined:
    Jan 6, 2002
    Messages:
    2,712
    Location:
    Townsville
    One topic I haven't seen much discussion on here is security specific to VMs. Is anyone doing anything specific? Run the latest version, keep it patched and hope for the best? :)
     
  13. VR4hore

    VR4hore Member

    Joined:
    Sep 8, 2001
    Messages:
    260
    Location:
    Brisbane
    Well, I guess I'd make group member #3...

    My personal VMware setup is a 2.4GHz Core 2 Quad with 8GB of RAM, running Server 2003 Enterprise x64 and VMware Server 1.0.5, hosting a Linux router / XP remote access host / primary DC / Exchange test box

    Professionally, I work for a <large storage vendor who happens to own vmware> in the Residency space, so I get out to a lot of customer sites and sit with them for a while to operate the SAN gear and associated products. I've dealt with some good sized vmware installs but my expertise (for now) is more in the hardware space.

    If you want to know anything more send me a message.
     
  14. ACA:Sleeper

    ACA:Sleeper Member

    Joined:
    May 15, 2006
    Messages:
    424
    Location:
    Melbourne's SE Suburbs
    It gets better... I was left with nothing documented; I didn't even have server passwords. :sick: Oh, and the previous guy was halfway through a server migration when he left......
     
    Last edited: Apr 17, 2008
  15. yanman

    yanman Member

    Joined:
    Jun 4, 2002
    Messages:
    6,587
    Location:
    Hobart
    Ok so who's started rolling out their VI 3.5's?? :p
     
  16. Doc-of-FC

    Doc-of-FC Member

    Joined:
    Aug 30, 2001
    Messages:
    3,296
    Location:
    Canberra
    Small patches are pushed via WSUS to all virtual servers on our Windows domain. Large patches are applied after a VM snapshot.

    Linux servers are OEL5; they are maintained to a secure level based on OEL advisories.


    We have different virtual network switches, based on VLAN assignment, that jack into our core and funnel off to related areas from there. They include, but are not limited to: the 10.2.1 network, 172.17.12 DMZ, 192.168.1 underlying iSCSI network, and 10.254.201 Cisco test network.
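    That kind of VLAN segregation boils down to a subnet lookup: a VM's address determines which virtual switch/port group it belongs on. A minimal sketch, where the /24 prefixes and port-group names are illustrative assumptions rather than anyone's actual config:

```python
# Map an address to the virtual-switch port group for its VLAN.
# Prefixes and names are hypothetical, loosely based on the networks above.
import ipaddress

PORT_GROUPS = {
    "internal":   ipaddress.ip_network("10.2.1.0/24"),
    "dmz":        ipaddress.ip_network("172.17.12.0/24"),
    "iscsi":      ipaddress.ip_network("192.168.1.0/24"),
    "cisco-test": ipaddress.ip_network("10.254.201.0/24"),
}

def port_group_for(ip):
    """Return the port-group name whose subnet contains `ip`."""
    addr = ipaddress.ip_address(ip)
    for name, net in PORT_GROUPS.items():
        if addr in net:
            return name
    raise ValueError(f"{ip} matches no known VLAN")

print(port_group_for("172.17.12.5"))  # → dmz
```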
     
  17. Tony

    Tony New Member

    Joined:
    Jun 26, 2001
    Messages:
    9,987
    Location:
    Sydney, NSW, Australia
    lol?

    you need to talk to tensop - he's the vic edu passwd cracka whitebox server overclocking virtualisation guru
     
  18. Kodaz

    Kodaz Member

    Joined:
    Apr 16, 2004
    Messages:
    938
    Location:
    Brisbane 4124
    I think they were meaning host level security rather than traditional security at the VM level.
     
  19. ACA:Sleeper

    ACA:Sleeper Member

    Joined:
    May 15, 2006
    Messages:
    424
    Location:
    Melbourne's SE Suburbs
    I got through most of the passwords etc, but I might wrangle his help for the VM

    PM me who tensop is.... (I knew I should have been more vague, hope the kiddies don't read this forum, but I want relevant solutions)
     
  20. hapkido

    hapkido Member

    Joined:
    Sep 20, 2003
    Messages:
    291
    Location:
    Brisbane
    Failover....

    Everything was kind of okay here until this was mentioned.
    So, keep 'time restriction and budget' away from 'failover' - they don't mix.

    If it's just semantics we're arguing, that's fine - but I recommend not using the term, as it implies/promises automatic failover without any human interaction, and/or continuous uptime, and/or NO data loss, even in the event that a physical host or a whole server room unexpectedly goes offline.

    VMware does NOT offer failover. SAN replication does NOT offer failover. But combine these with server/OS/application-level clustering plus lots of funding, and you are on your way.

    CDP (continuous data protection) and other 'point in time' data protection systems can be implemented to limit the potential data loss from sudden/catastrophic failures, often at a much lower price point.
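    The RPO arithmetic behind that point is simple: periodic snapshot/replication can lose everything written since the last cycle, while CDP-style journalling loses at most what's in flight. A minimal sketch, with made-up interval and lag figures:

```python
# Worst-case data loss (RPO) for two protection strategies.
# Intervals and journal lag are illustrative assumptions, not product specs.

def worst_case_loss_minutes(strategy, interval_min=None, journal_lag_sec=5):
    """Upper bound on data lost if the source dies just before the next cycle."""
    if strategy == "periodic":
        return interval_min            # everything since the last replication cycle
    if strategy == "cdp":
        return journal_lag_sec / 60    # only writes not yet journalled
    raise ValueError(strategy)

print(worst_case_loss_minutes("periodic", interval_min=60))  # hourly cycle → 60
print(worst_case_loss_minutes("cdp"))                        # seconds, not hours
```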



    Regards,
    Hapkido
     
