VMWare Enterprise DR options

Discussion in 'Business & Enterprise Computing' started by Rubberband, Sep 10, 2008.

  1. Rubberband

    Rubberband Member

    Joined:
    Jun 27, 2001
    Messages:
    6,750
    Location:
    Doreen, 3754
    Hi guys,

    I've got a Blade and CX300 SAN VMware setup and I'm looking to create a DR site less than a km away. I've got access to fibre between the buildings, or I can just use Ethernet. I'm also running CommVault for backups (in case there's a decent replication option there).

    I've spoken to our vendor and they've proposed a compatible CX4 array to replicate the data over FC, to which I'll have 3 ESX server boxes connected. Unfortunately, the cost of the EMC software, CX4 and disks etc blows my budget by an extra 20%.

    I've got a $75k budget: $25k for the servers (they need 4 FC and 4 NIC) and I'm estimating another $10k for services, connectivity and other costs, leaving me $40k for storage.

    The data I have to replicate totals 3TB (VMFS and an RDM LUN) but the total storage on the CX300 comes to 6TB.
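     To put rough numbers on the inter-building link, here is a back-of-envelope sketch. The 3 TB figure comes from the post above; the link speeds (1 GbE vs 4 Gb/s FC) and the 70% usable-throughput factor are assumptions for illustration, not figures from the thread:

```python
# Rough initial-sync time for the 3 TB replication set over the
# inter-building link. Link speeds and the 70% efficiency factor
# are assumptions, not figures from the thread.
def sync_hours(data_tb, link_gbps, efficiency=0.7):
    """Hours to copy data_tb terabytes over a link_gbps link."""
    bits = data_tb * 1e12 * 8              # decimal TB -> bits
    usable_bps = link_gbps * 1e9 * efficiency
    return bits / usable_bps / 3600

print(f"3 TB over 1 GbE : {sync_hours(3, 1):.1f} h")   # ~9.5 h
print(f"3 TB over 4G FC : {sync_hours(3, 4):.1f} h")   # ~2.4 h
```

     Either link comfortably handles the initial seed overnight; what matters after that is the daily change rate, which the thread doesn't specify.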

    Being noobly in the storage side of things, I was wondering what solutions you guys had?

    Cheers.
     
  2. bloodbob

    bloodbob Member

    Joined:
    Feb 12, 2003
    Messages:
    757
    Do you need replication for DR? You should be able to recover off backups. If you don't have backups, what is your solution for the "a SAN goes nuts, corrupts the data and replicates the corruption" scenario?
     
  3. OP
    OP
    Rubberband

    Rubberband Member

    Joined:
    Jun 27, 2001
    Messages:
    6,750
    Location:
    Doreen, 3754
    Replication offers a much faster recovery time than restoring from disk/tape. Also, my VMFS isn't backed up by CommVault, so replication removes a layer of complexity.
     
  4. bloodbob

    bloodbob Member

    Joined:
    Feb 12, 2003
    Messages:
    757
    Yes it does; however, if that is a requirement of Business Continuity you will have a business case to get that extra 20% :) Particularly if it is a publicly listed company and major shareholders find out.
     
  5. OP
    OP
    Rubberband

    Rubberband Member

    Joined:
    Jun 27, 2001
    Messages:
    6,750
    Location:
    Doreen, 3754
    Sadly I'm in education and it's not going to work like that :)

    I'm also thinking that there must be another cost effective solution. Block replication across compatible SAN technologies is the most efficient but I'm sure there must be something else that sits a little lower in cost/efficiency.
     
    Last edited: Sep 10, 2008
  6. Simwah

    Simwah Member

    Joined:
    Aug 6, 2005
    Messages:
    1,998
    Location:
    Brisbane
  7. OP
    OP
    Rubberband

    Rubberband Member

    Joined:
    Jun 27, 2001
    Messages:
    6,750
    Location:
    Doreen, 3754
  8. Iceman

    Iceman Member

    Joined:
    Jun 27, 2001
    Messages:
    6,647
    Location:
    Brisbane (nth), Australia
    Unfortunately DR is one of those things where SAN vendors wet themselves with happiness over the money you're about to pay them.

    You need to clearly define exactly what DR capability you want. Technically your cheapest option is going to be to send incremental tapes offsite each day.

    Is this going to be a hot DR site, i.e. should people be able to walk right in and use up-to-the-second data instantly?

    Will ALL the people be doing that? I.e. can you buy slower heads / slower disks to replicate to?

    Or can you simply buy a smaller, slower array and back up (in real time) only the critical data?

    What critical data an educational site is producing in real time, I don't know :)
     
  9. bloodbob

    bloodbob Member

    Joined:
    Feb 12, 2003
    Messages:
    757
    Also you might want to ask elvis if it's possible to meet your requirements with homebrew Linux/BSD SANs for $40k.
     
  10. OP
    OP
    Rubberband

    Rubberband Member

    Joined:
    Jun 27, 2001
    Messages:
    6,750
    Location:
    Doreen, 3754
    I'll clarify as I probably provided too much info in an effort to reduce questions :)

    The expensive part is the SAN replication due to having to find compatible SAN storage for EMC software. I've got $75k outlined as a budget but it looks like I'll have to do some major juggling to get the best solution in place (1 to 1 replication).

    What I'm interested to hear is what others would suggest or use in a similar situation.

    Ideally it would be hot, but I'm expecting a warm to slightly toasty DR, where I power on the ESX servers and attach the VMs, unless vReplicator or SRM offer a reasonably priced solution.

    I'm thinking, if my budget gets trimmed, that I might just get some off-site storage and clone my VMs to it (or use vRanger), do a raw copy of my data via CommVault, and offer a slower RTO.
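     A minimal sketch of that fallback idea: mirror a directory of VM files to a DR path, copying only files whose size or mtime changed. The paths and function name are hypothetical; a real setup would run this on a schedule and quiesce or snapshot the VMs first (e.g. via vRanger or CommVault as discussed), since copying a live VMDK gives a crash-consistent image at best:

```python
# Hedged sketch of a "clone VMs to off-site storage" fallback.
# mirror() walks a source tree and copies any file that is new or
# changed (by size/mtime) to the DR destination. Consistency of the
# VM files themselves is NOT handled here - snapshot first.
import os
import shutil

def mirror(src, dst):
    copied = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target = os.path.join(dst, rel)
        os.makedirs(target, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target, name)
            st = os.stat(s)
            if (not os.path.exists(d)
                    or os.stat(d).st_size != st.st_size
                    or os.stat(d).st_mtime < st.st_mtime):
                shutil.copy2(s, d)   # copy2 keeps mtime for next run
                copied.append(name)
    return copied
```

     In practice rsync over the inter-building fibre does the same job with delta transfers; the point is that a slower-RTO copy needs no array-level licensing at all.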

    This Linux concept is great, but the issue is that manageability has to be straightforward for all I.T. staff. Considering I'm the only server guy here, I'd get considerable resistance to deploying an 'unknown' OS as they'd be dependent on support. I'll put it down as plan B :)
     
  11. Iceman

    Iceman Member

    Joined:
    Jun 27, 2001
    Messages:
    6,647
    Location:
    Brisbane (nth), Australia
    By the way, hot DR site usually = exact replication of your core servers, including licensing. I haven't seen too many things that say "it's cool to run a second copy of this for DR for free" and only a couple that say "primary license is x, secondary dr license that can be used only in event of failure is 10% of x".
     
  12. OP
    OP
    Rubberband

    Rubberband Member

    Joined:
    Jun 27, 2001
    Messages:
    6,750
    Location:
    Doreen, 3754
    That's covered...site licenses at Uni's ftw :)
     
  13. Iceman

    Iceman Member

    Joined:
    Jun 27, 2001
    Messages:
    6,647
    Location:
    Brisbane (nth), Australia
    VMware have site licences? God, I can't imagine how much that costs.

    I think you're leaning more toward redundancy here than DR as almost any serious disaster (earthquake, fire, flood, power loss, godzilla) is probably going to affect both buildings within 500m.

    Perhaps buying lower power hardware and clustering only the critical servers across would be more your thing?
     
  14. yanman

    yanman Member

    Joined:
    Jun 4, 2002
    Messages:
    6,587
    Location:
    Hobart
    Your setup is remarkably similar to ours, unfortunately not with the same budget constraints.

    We're currently replicating data from CX400 to CX500 with MirrorView/S over our own fibre assets (two fabrics, 4Gbps FC connection per fabric). Current servers are still on ESX 2.5.4 and data is backed up by CommVault to an LTO4 tape library (including VMFS) using a dedicated blade as a media-agent for Commvault.

    As of this quarter we're actually commissioning the replacement to the above in which we're moving to 2 x CX3-40's and MirrorView/A and implementing VMware Site Recovery Manager (SRM), ESX 3.5, VC 2.5 but maintaining the way we backup for the moment.

    This isn't cheap, especially with the MirrorView licenses. I'm curious why they quoted you a CX4 though; that's their latest model. What about a CX3-10 or CX3-20?

    I reckon you'll struggle to find another product that replicates data directly from the SAN other than EMC's own. However, what about building a server as a media agent and pushing the data to a cheaper solution at the DR site, i.e. something with direct-attached storage?

    Why the need for 4 FC connections per server? :p
     
    Last edited: Sep 10, 2008
  15. bloodbob

    bloodbob Member

    Joined:
    Feb 12, 2003
    Messages:
    757
    Two per switch across two switches would be my guess. It might be overkill; one to each is probably sufficient. But I'm just guessing.

    Maybe you can work something out to decrease the upfront cost but increase the maintenance costs.
     
    Last edited: Sep 10, 2008
  16. kempo

    kempo Member

    Joined:
    Mar 7, 2006
    Messages:
    7
    Location:
    The Land of Chocolate
    Have a chat with a rep from NetApp.
     
  17. Iceman

    Iceman Member

    Joined:
    Jun 27, 2001
    Messages:
    6,647
    Location:
    Brisbane (nth), Australia
    That's kind of like telling the guy looking to chip his WRX to talk to a Porsche dealer :p
     
  18. yanman

    yanman Member

    Joined:
    Jun 4, 2002
    Messages:
    6,587
    Location:
    Hobart
    emc wrx to netapp porsche? :lol::lol: hilarious
     
  19. lavi

    lavi Member

    Joined:
    Dec 20, 2002
    Messages:
    4,004
    Location:
    Brisbane
    i have a ferrari eva, very fast
     
  20. lavi

    lavi Member

    Joined:
    Dec 20, 2002
    Messages:
    4,004
    Location:
    Brisbane
    You haven't specified why you want DR. To do it properly, 1 km away is kinda stupid as most likely you will be on the same power grid, internet backbone/exchange, etc.

    Factor in the price of getting your own subnet and running BGP as well.

    Decide if you truly DO need a SAN in the DR site and find out how long your infrastructure can run on commodity hardware first. For example, I use FreeNAS in a DR site and replicate to it; FreeNAS with iSCSI runs just like your CX, faster or slower depending on how you set up your drives etc.

    Maybe as a DR SAN look at the SATA version of the CX; I think it's around $12k for 5TB raw.
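     A quick sanity check on that option: the 5 TB raw figure is from the post above, but the RAID layout (RAID-5, one parity disk per five-disk group) is an assumption for illustration:

```python
# Does a 5 TB raw SATA tray cover the 3 TB replication set after
# RAID overhead? The RAID-5 five-disk-group layout is an assumption.
def usable_tb(raw_tb, disks=5, parity_per_group=1):
    return raw_tb * (disks - parity_per_group) / disks

print(usable_tb(5))   # 4.0 TB usable - enough headroom for 3 TB
```

     Even with parity overhead the cheap tray covers the replication set, though not the full 6 TB on the CX300.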

    But not knowing your needs it's hard to give you an answer, lol. A 1 km DR site is pointless in my book unless you've got a StarGate between the sites :)
     
