
Linux based Shared storage?

Discussion in 'Business & Enterprise Computing' started by QuakeDude, Apr 24, 2012.

  1. QuakeDude

    QuakeDude ooooh weeee ooooh

    Joined:
    Aug 4, 2004
    Messages:
    8,711
    Location:
    Melbourne
    Hey Guys,

    Got an interesting problem here - we've been asked to provide a storage solution for a new application being rolled out. The requirements are as follows:

    1) Linux-based (Red Hat Enterprise Linux)
    2) provide a way to effectively share the same hard disk between three servers
    3) VMware backend

    The purpose of this is to provide maximum performance and availability for the three application servers. Network mapping may end up being too slow a solution, which is why they've requested a way to do this by sharing the same "drive" between the three servers.

    We're not Red Hat experts, but from what we can tell, Red Hat has a cluster suite which includes the various services and a cluster-aware filesystem. Is this the best way to do this?
     
  2. fR33z3

    fR33z3 Member

    Joined:
    Jul 16, 2001
    Messages:
    2,164
    Location:
    Perth
    So, does NFS constitute "network mapping"? No? Problem solved!

    Do you have shared storage already (ie a SAN)?

    If so, is RDM an option (ie present the LUN to the guests directly, NOT the hypervisor)? If so, then pick your favourite linux clustered file system and away you go.
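
    Roughly, the RDM + clustered FS route looks like this (the device ID, datastore paths and names below are made up for illustration, not from anyone's actual setup):

        # On the ESX host: create a physical-compatibility RDM pointer to the LUN
        vmkfstools -z /vmfs/devices/disks/naa.600xxxx /vmfs/volumes/datastore1/shared/rdm_shared.vmdk
        # then attach that same rdm_shared.vmdk to all three guests

        # Inside the guests, if you go with GFS2 (RHEL's cluster-aware filesystem):
        # -t is <clustername>:<fsname>, -j is one journal per node (3 nodes here)
        mkfs.gfs2 -p lock_dlm -t mycluster:appdata -j 3 /dev/sdb
        mount -t gfs2 /dev/sdb /mnt/appdata   # on every node, with cman running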

    If your storage is effectively local to ESX and you don't want to create a storage appliance, then maybe you can apply this trick: http://professionalvmware.com/2010/04/shared-vmdks-on-vsphere-esx-and-esxi/ This is a bit of a hack though, and if it's a business-critical app, you probably want the software vendor to provide a design guide. eg, I don't think Microsoft officially supports this method for cluster services, even though people have gotten it to work.
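
    From memory, the trick in that link boils down to a couple of VMX entries like the following (controller/slot and paths are examples; check the article and VMware's docs for the exact flags):

        scsi1:0.present = "TRUE"
        scsi1:0.fileName = "/vmfs/volumes/datastore1/shared/shared.vmdk"
        scsi1:0.sharing = "multi-writer"
        # the shared disk generally needs to be eagerzeroedthick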
     
  3. pineappl2

    pineappl2 Member

    Joined:
    Mar 23, 2011
    Messages:
    34
    A clustered filesystem will be slower than a single NFS server. The cluster has to sync writes across all nodes, which is a lot more I/O. Clusters are optimal on physical boxes, not ESX, unless you lock/guarantee CPU/disk resources.

    I would suggest an NFS server guest and let ESX handle the high availability. Since it's ESX, you can just spin up a guest and see what performance is like.
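
    The NFS guest itself is only a few lines to set up (hostname, paths and subnet below are just examples):

        # /etc/exports on the NFS server guest
        /export/appdata  10.0.0.0/24(rw,sync,no_root_squash)

        # apply and start (RHEL 6 style)
        exportfs -ra && service nfs start

        # on each of the three app servers
        mount -t nfs nfsguest:/export/appdata /mnt/appdata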

    When you say share a disk, do all servers have to see the data at the same time? ie, the same app on all 3?
     
  4. QuakeDude (OP)

    QuakeDude ooooh weeee ooooh

    Joined:
    Aug 4, 2004
    Messages:
    8,711
    Location:
    Melbourne
    Interesting info guys, will go through this with the team when I'm back in tomorrow.

    Yeah, we've been told by the software vendor that all three servers have to see the "same" disk from a redundancy point of view. They've been pushing the concept of "Shared SAN storage" for a while now, without being able to actually explain how we are supposed to achieve this technically.

    In short - they're useless :lol:
     
  5. elvis

    elvis OCAU's most famous and arrogant know-it-all

    Joined:
    Jun 27, 2001
    Messages:
    46,810
    Location:
    Brisbane
    Red Hat recently bought Gluster, and now has this offering as a commercial product:

    http://www.redhat.com/products/storage/storage-software/

    Gluster really requires a minimum of 2-4 storage nodes to be effective. It can export volumes via a native Gluster client, or as NFS or CIFS mounts.
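
    To give a feel for it, a basic replicated volume is just (hostnames and brick paths are placeholders):

        # on one storage node, after peering with the other
        gluster peer probe node2
        gluster volume create appdata replica 2 node1:/bricks/appdata node2:/bricks/appdata
        gluster volume start appdata

        # on a client, using the native Gluster client
        mount -t glusterfs node1:/appdata /mnt/appdata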

    If you've only got a single storage node, I'd go with just a standard RHEL box exporting everything via NFS, due to the VMware requirement.

    If you were using KVM or Xen, I'd suggest RHEL+iSCSI as another option.
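
    On RHEL 6 that means scsi-target-utils (tgtd); a minimal export looks something like this (IQN, backing device and subnet are placeholders):

        # /etc/tgt/targets.conf
        <target iqn.2012-04.com.example:storage.lun1>
            backing-store /dev/vg_storage/lv_lun1
            initiator-address 10.0.0.0/24
        </target>

        # then: service tgtd start && tgt-admin --update ALL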
     
  6. passivekid

    passivekid Member

    Joined:
    Sep 10, 2003
    Messages:
    332
    Location:
    Perth, WA
    When exporting iSCSI, do I create a separate iSCSI target for each VM?
     
  7. elvis

    elvis OCAU's most famous and arrogant know-it-all

    Joined:
    Jun 27, 2001
    Messages:
    46,810
    Location:
    Brisbane
    I prefer that for better I/O performance when using KVM and Xen, but there are pros and cons to it (like everything).

    I'm told recent ESX builds have sorted some of their iSCSI problems out, but I also hear that NFS is still faster, which makes me question that.
     
  8. Iceman

    Iceman Member

    Joined:
    Jun 27, 2001
    Messages:
    6,647
    Location:
    Brisbane (nth), Australia
    Push back and ask them for specifics as to what their software is most often implemented on.
     
  9. username_taken

    username_taken Member

    Joined:
    Oct 19, 2004
    Messages:
    1,352
    Location:
    Austin, TX
    Don't forget that if you're going to run VMware Enterprise edition on shared storage, you'll get redundancy / high availability without needing to do anything at the application level. Worst case, even if you lose the physical server it's running on, it'll just restart on another server and you'll only lose about 5 minutes of availability and break any existing sessions.
     
