
Nutanix

Discussion in 'Business & Enterprise Computing' started by GooSE, Oct 4, 2013.

  1. Iceman

    Iceman Member

    Joined:
    Jun 27, 2001
    Messages:
    6,647
    Location:
    Brisbane (nth), Australia
    And VMware datastores on FC LUNs are what?
     
  2. millennia

    millennia Member

    Joined:
    Dec 10, 2013
    Messages:
    19
    Nutanix can consume iSCSI storage. In fact, it's a useful case for not "ripping and replacing" when customers have already invested heavily in SANs: Nutanix can deliver its own workload while also consuming iSCSI-connected storage.

    A good case would be a large SQL Server where the VM runs on Nutanix, with swap and tempdb located there, while the data disks stay attached to their original SAN-based locations through the MS iSCSI client running in the VM.
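
    For illustration, here's a rough sketch of what that in-guest attach can look like, driving the in-box Windows iSCSI initiator (iscsicli.exe) from Python. The portal address and IQN are made-up examples, not values from any real setup:

    Code:
    import subprocess

    # Hypothetical values - substitute your SAN's portal IP and target IQN.
    SAN_PORTAL = "10.0.0.50"
    TARGET_IQN = "iqn.1992-08.com.example:sqldata"

    def run(args):
        # Run a command, echo its output, and stop if it fails.
        out = subprocess.run(args, capture_output=True, text=True, check=True)
        print(out.stdout)

    # Register the SAN's discovery portal with the Microsoft iSCSI initiator.
    run(["iscsicli", "QAddTargetPortal", SAN_PORTAL])

    # Log in to the target; the LUN then shows up as a local disk inside the
    # guest, which you can online/format and point the SQL data files at.
    run(["iscsicli", "QLoginTarget", TARGET_IQN])

    # Confirm the session by listing discovered targets.
    run(["iscsicli", "ListTargets"])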

    This setup can allow you to place data that needs real grunt on all-flash SAN storage for just specific VMs in the cluster, while Nutanix picks up all the other workloads, which its 35,000 8K IOPS per node can easily handle.

    Let's also not forget that, unlike VSAN and Maxta, Nutanix is hypervisor agnostic, so you can run VMware, Hyper-V, or KVM workloads on the same Nutanix cluster to meet many different requirements.
     
  3. Jase

    Jase Member

    Joined:
    Jun 28, 2001
    Messages:
    196
    Location:
    Sydney 2081
    Are you for real?? omg wtf lol!
     
  4. ebar

    ebar Member

    Joined:
    Oct 5, 2013
    Messages:
    15
    How many ESX hosts attach to 1 LUN?

    LOL OMG ROTFL etc etc.

    Cheers, thanks for that info, I wasn't aware.

    iSCSI runs over Ethernet. In my experience, customers who are heavily invested in SAN tend to be reluctant to invest in 10G Ethernet, which is where you need to go to compete with 4Gb/8Gb FC. It's a big step to move from SAN to IP storage. I reckon it's worth it, but that's a simplistic statement about a complex issue.
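
    To put rough numbers on that (these are just the standard line encodings, nothing measured here):

    Code:
    # Usable payload bandwidth per link, before protocol overhead.
    links = {
        "4Gb FC": (4.25, 8 / 10),     # 4.25 Gbaud, 8b/10b encoding
        "8Gb FC": (8.5, 8 / 10),      # 8.5 Gbaud, 8b/10b encoding
        "10GbE":  (10.3125, 64 / 66), # 10.3125 Gbaud, 64b/66b encoding
    }

    for name, (gbaud, efficiency) in links.items():
        mb_per_s = gbaud * efficiency * 1000 / 8
        print(f"{name}: ~{mb_per_s:.0f} MB/s")

    # ~425, ~850 and ~1250 MB/s respectively: a single 10GbE link has more
    # raw headroom than 8Gb FC, though iSCSI/TCP overhead claws some back.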

    I'm glad to hear it supports block storage over Ethernet. I'm just waiting for the IPO, to be honest :)
     
    Last edited by a moderator: Dec 15, 2013
  5. JoshOdgers

    JoshOdgers New Member

    Joined:
    Oct 18, 2013
    Messages:
    7
    Disclaimer: Nutanix Employee

    (and sorry for slow reply)

    Regarding matching traditional storage systems for redundancy, I would suggest Nutanix can provide more redundancy than a traditional storage system.

    For example, the vast majority of traditional storage solutions have two controllers; Nutanix has one controller per node, and in the event of a failure the cluster repairs itself. It does this using all controllers and drives in the cluster, so it is substantially faster than a traditional storage solution doing a RAID rebuild, which uses only a limited number of drives (and controllers). A RAID rebuild's long duration also increases the chance of subsequent failures, which may lead to downtime and/or data loss.
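
    A toy model of the difference (all figures below are illustrative assumptions, not Nutanix specs):

    Code:
    # Re-protecting 4 TB after a failure: RAID funnels everything into one
    # replacement disk; a shared-nothing cluster rebuilds across all the
    # surviving nodes in parallel.
    FAILED_DATA_TB = 4.0     # assumed data to re-protect
    DISK_WRITE_MBS = 150     # assumed single-spindle write rate
    NODE_REBUILD_MBS = 400   # assumed per-node rebuild throughput

    def hours(terabytes, mb_per_s):
        return terabytes * 1024 * 1024 / mb_per_s / 3600

    print(f"RAID hot-spare rebuild: ~{hours(FAILED_DATA_TB, DISK_WRITE_MBS):.1f} h")

    for nodes in (4, 8, 16, 32):
        t = hours(FAILED_DATA_TB, NODE_REBUILD_MBS * (nodes - 1))
        print(f"{nodes}-node cluster: ~{t:.2f} h")

    # The rebuild gets faster as the cluster grows, because every extra node
    # adds rebuild throughput instead of adding load on a single hot spare.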

    I wrote an article describing scale-out shared-nothing resiliency here, if you're interested.

    http://www.joshodgers.com/2013/10/26/scale-out-shared-nothing-architecture-resiliency-by-nutanix/

    Regarding performance, there are several advantages with scale-out shared nothing. You can create silos of performance (and avoid things like noisy-neighbour problems) without creating silos of capacity and management, so it's like the best of both worlds.

    Another advantage Nutanix provides is that write I/O is distributed across all controllers in the cluster, which ensures high performance as well as resiliency. You are not limited to the performance or capacity of just two controllers for a VM (or object), which is a constraint of most storage solutions on the market, new and existing.

    Performance can also be scaled easily, in small or large increments, without swapping controllers or having to replace existing equipment.

    The link below shows an example of Nutanix scaling beyond 1 million IOPS with linear performance.

    http://www.joshodgers.com/2013/10/24/scaling-to-1-million-iops-and-beyond-linearly/

    In summary, Nutanix can provide more resiliency and equivalent or higher performance than traditional storage systems, with many additional benefits. Scale-out shared-nothing architecture will, in my opinion, be a significant part of the storage market for virtual environments in future.

    Hope that makes sense.

    Also, I noticed some discussion about storage protocol support, so I thought I would clarify:

    Nutanix supports NFS, iSCSI and SMB 3.0.
     
    Last edited by a moderator: Dec 15, 2013
  6. diomedes

    diomedes Member

    Joined:
    Oct 20, 2003
    Messages:
    14
    Finally, a thread that gets me out of the shadows: I've just opted to replace a chunk of my infrastructure and deploy VDI using Nutanix.

    The service levels provided by Nutanix have been second to none, one of the best experiences I've had in 18 years.

    I will report back when I've completed the pilot and got it under load. I did do a POC with Nutanix and was quite happy with that result.
     
  7. DavidRa

    DavidRa Member

    Joined:
    Jun 8, 2002
    Messages:
    3,069
    Location:
    NSW Central Coast
    Just catching up after a holiday - guess what - the answer is NOT one, as you appear to think. In fact, just like a Hyper-V cluster (at least nowadays; I'm excluding Hyper-V in 2008 non-R2), multiple hosts connect simultaneously to single LUNs, be they iSCSI or FC. Different hosts will have files open on the same LUN. At the same time.

    We'll be interested to hear your outcomes, diomedes :)
     
  8. ebar

    ebar Member

    Joined:
    Oct 5, 2013
    Messages:
    15
    When a LUN is presented to a host it has NOT got a filesystem on it, so the host puts a filesystem on it, using that 1 inode.
    Next what you do on top of the filesystem is up to the host.

    But I bet you won't get it. I recommend SAN for Dummies.
     
  9. diomedes

    diomedes Member

    Joined:
    Oct 20, 2003
    Messages:
    14
    So, a bit of an update: I've just about completed our VMware View rollout using Nutanix kit, and I have to say I am beyond pleased. So much so that I've accelerated plans to retire additional racks I was going to keep in production for another 24 months.

    My Nutanix cluster is made up of 2x NX-7110 nodes with NVIDIA GRID cards and 1x NX-3060 compute node, with a 10GbE Arista switch connecting them. I will be adding additional storage nodes later this year to totally replace my existing SAN infrastructure.

    I've been using Sun Ray thin clients and Terminal Services for the best part of 8 years for my task users, that is, email, Office and ERP. We also have engineering teams that up until now had to have fat clients for their CAD/CAM application, due to the app not playing well with others and needing a dedicated GPU.

    When it came time to refresh the fat clients and a large part of the server infrastructure, I knew I wanted to converge the lot if it was possible, and after some hunting I found Nutanix.

    In terms of sizing, we have been pretty good to the users, with the CAD guys getting 2x vCPUs, 4GB of memory and 512MB of GPU... we run our GRID cards in vSGA mode.

    I would strongly suggest you not attempt to run CAD without a GRID card; believe me, it makes a monumental difference. The POC kit I got didn't have them and CAD wasn't viable on it. I took a leap of faith that the GRID cards would fix that, and fix it they well and truly did. The Nutanix NX-7110s also have plenty of expansion for additional cards, so they scale nicely.

    At present I have 118 VMs; 14 are servers, including a heavy Enterprise SQL box, and the rest are linked-clone VDI images. I am running 5 different pools, but the bulk of the users are split between the engineering Win 8.1 image and the task users' Win 2008 R2 image.

    Originally I wasn't going to use VDI for the task users, just move the terminal server to vSphere. But with View 5.3 adding support for 2008 R2 server as a desktop, and its ability to easily theme it to Win 7, it was an easy choice to make; I already had all the TS CALs anyway, so I could avoid additional VDA licences from MS.

    Attached is a snapshot of my Nutanix dashboard during a recompose of some of the pools.

    [image: Nutanix dashboard during a pool recompose]

    And the vSphere overview:

    [image: vSphere overview]
     
  10. DavidRa

    DavidRa Member

    Joined:
    Jun 8, 2002
    Messages:
    3,069
    Location:
    NSW Central Coast
    In this case, you were and are demonstrably wrong. Remember, I'm talking about presenting the LUN to the cluster, not to a single server. Also, inodes have absolutely nothing to do with it (it's not a UNIX filesystem) - but I don't see how a single cluster node has anything to do with the statement.

    A FC or iSCSI LUN for Hyper-V in a Windows cluster, operating normally, is online and available for I/O on multiple physical hosts at the same time. Go look up CSV - Cluster Shared Volumes - it's on TechNet. The physical disk object appears at the same path on all cluster nodes when configured for CSV (C:\ClusterStorage\Volume01 is the default for the first CSV). It is formatted with NTFS before being brought online. (Also, I am specifically excluding Redirected Access mode from this discussion.)

    One node is the controlling node, the others are not. Create C:\ClusterStorage\LUN\File.VHDX on node1, and node2 sees that file. Live. Or vice versa. Close it on node1, add it to a VM on node2 (as a VM disk), and node2 will read from and write to the VHD/VHDX over the storage fabric. The controlling node updates and manages file metadata (renames, directories) and brokers file opens and closes.
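
    If you want to see it for yourself, here's a quick sketch (assumes a 2012-era failover cluster with the FailoverClusters PowerShell module; run it on any node):

    Code:
    import os
    import subprocess

    # Ask the cluster which shared volumes exist, their state and owner node.
    subprocess.run([
        "powershell", "-NoProfile", "-Command",
        "Get-ClusterSharedVolume | Format-Table Name,State,OwnerNode -AutoSize",
    ], check=True)

    # The CSV namespace is identical on every node, owner or not.
    print(os.listdir(r"C:\ClusterStorage"))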

    I'm no vSphere expert, hell I can barely spell it, but I'm pretty sure the same applies for vSphere. Oh, and KVM and Xen, too (though with all three it's more common to use NFS, I believe).

    So let's revisit your statements:

    On one level, yes, a single block is part of a single file, accessed by a single VM (if I mention Scale Out File Servers, your head will explode - go read about it on TechNet. Learn something.). But the LUN is available to, online on, accessed by, read from and written to simultaneously by multiple hosts in the cluster.

    In a single Windows cluster, up to 64 hosts attach to the same LUN.

    Info here: http://technet.microsoft.com/es-es/library/cc732181(v=ws.10).aspx (not that you'll read it).

    For the Nutanix guys - does the Nutanix kit do aggregation of the storage and provide Scale Out File Servers, in the Hyper-V / SMB3 implementations? Or is it a different approach (and the storage appliance is still Linux)?
     
  11. Glide

    Glide Member

    Joined:
    Aug 22, 2002
    Messages:
    1,151
    Location:
    Was: Sydney Now: USA
  12. millennia

    millennia Member

    Joined:
    Dec 10, 2013
    Messages:
    19
    Hyper-V on Nutanix

    Hi, Hyper-V (as in Windows Server 2012 R2) has been supported since January, and the current 3.5.4 release is recommended as it includes performance tuning for Hyper-V. You need to image the Nutanix nodes with the Hyper-V hypervisor and install the Nutanix CVM, but this is all done with a Nutanix-supplied tool called Foundation.

    Typically, using the Foundation tool, you can go from a bare, uninstalled Nutanix block (4 nodes) to a working cluster with the hypervisor of your choice configured and accessible in about 40 minutes to an hour.
     
  13. Glide

    Glide Member

    Joined:
    Aug 22, 2002
    Messages:
    1,151
    Location:
    Was: Sydney Now: USA
    Ok awesome. Thanks for that info.

    And Nutanix should respond to VSS snapshot requests from the MS provider and snapshot the SMB3 share?

    Please assume at this point that I know next to nothing about Nutanix :) In order to perform SMB3 snapshots, according to Microsoft the file server must be running Windows 2012+. On that box (the file server that provides the SMB share), you have to install the "File Share Shadow Copy Agent".

    So I'm trying to figure out how this all fits into the Nutanix paradigm.
     
    Last edited: Jun 10, 2014
  14. millennia

    millennia Member

    Joined:
    Dec 10, 2013
    Messages:
    19
    Nutanix and Hyper-V

    I couldn't tell you about VSS integration when snapshotting the SMB share. In VMware the VM disks are snapshotted in a crash-consistent way.

    You don't run a user SMB share natively on Nutanix; it is a platform to run Hyper-V VHD disk files, and those VHD files live on Nutanix storage presented to the Hyper-V hypervisor, with SMB as the access method (as NFS is for VMware).

    I would expect you to have a Windows VM running on that platform as a fileserver. You can then take VSS consistent snapshots of that fileserver.

    Hope this is what you mean.
     
  15. GiantGuineaPig

    GiantGuineaPig Member

    Joined:
    Oct 23, 2006
    Messages:
    4,027
    Location:
    Adelaide
    Interesting to see the discussion on Nutanix - they wined and dined me while I was in the US last month. Their solution seems pretty good; just being able to buy a single chassis with redundancy fully built into it is nice.
     
  16. Iceman

    Iceman Member

    Joined:
    Jun 27, 2001
    Messages:
    6,647
    Location:
    Brisbane (nth), Australia
    I think the redundancy model needs some work.

    http://nutanix.blogspot.com.au/2014/05/40-features-fault-domain-awareness.html

    Also, last I checked there was no way to soft-fail a node (for patching ESX, for example). So the instant you take it down, it starts rebuilding. Then you bring it back, it rebuilds the other way, then you wait, then you take the next one down... etc.

    Not a biggie if you only have a block or two...
     
  17. GiantGuineaPig

    GiantGuineaPig Member

    Joined:
    Oct 23, 2006
    Messages:
    4,027
    Location:
    Adelaide
    Hmm, what's wrong with the redundancy model? That's going over my head a bit. If there's no soft way to fail a node, that's rather average, but I wouldn't really think Nutanix makes sense once you go past two blocks. 2-4 nodes in 1 block would be best, I'd think, due to the cost savings you'd get?
     
  18. millennia

    millennia Member

    Joined:
    Dec 10, 2013
    Messages:
    19
    Nutanix Redundancy

    OK, I don't understand why you point to an enhancement document and say the model needs work. In versions before 4.0 you can have a node fail, and once the system has recovered the data, so it knows it has two secure copies of your data again, you could then have a second node failure without data loss, even if the first node was still offline.

    With NOS 4.0 this has been extended to block level. What this means is that Nutanix is block-aware in this mode, and the second copy of your VM's data blocks is therefore always placed on a node in a different block. So if something catastrophic happens to a whole block, no data is lost. This is not the same as the cluster surviving 2 or 4 simultaneous failures, though; it's about losing a block to something silly like both power supplies being on the same rack PDU and that PDU tripping, for example.

    Soft fail is not a requirement, and you misstate what happens when a node is rebooted (say, for an ESXi update). What in fact happens is that traffic is redirected to another node, which continues to serve the VM's data from its secondary copy; if the node returns within 5 minutes, everything returns to normal.

    After 5 minutes the process of failing the node out begins, but again, if the node returns at any time before the ejection process has completed (i.e. before secure secondary copies of the data are assured), all returns to normal; there is no "copy back" process, as that would be ludicrous.

    What happens in this case is that a Curator process runs in the background and cleans up any over-redundant copies of the data blocks, so there are just 2 copies of each data block in the cluster (unless you have a replication factor of 3, in which case 3 copies are kept, allowing for 2 simultaneous node/block failures).

    What you need to understand is that the blocks of data are distributed around the cluster; you don't have all the blocks from 1 VM going to 1 specific failover node. So the amount of data you have to recover from a failed node is not the entire amount of data for the VMs running on it, and because of the distributed nature of the cluster ALL the remaining nodes participate in the rebuild. As you add nodes, recovery from a failure gets quicker and quicker, not slower as it would with RAID.
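
    To make the placement rule concrete, here's a toy sketch (illustrative only, nothing like the real Curator code): for each extent, pick RF nodes such that no two replicas share a block.

    Code:
    import random

    # Hypothetical layout: two 4-node blocks.
    NODES = [(f"node-{b}{n}", f"block-{b}") for b in "AB" for n in range(1, 5)]

    def place_extent(rf=2):
        # Choose rf nodes so that no two replicas land in the same block;
        # losing one whole block then never loses both copies.
        candidates = NODES[:]
        random.shuffle(candidates)
        replicas, used_blocks = [], set()
        for node, block in candidates:
            if block not in used_blocks:
                replicas.append(node)
                used_blocks.add(block)
            if len(replicas) == rf:
                return replicas
        raise RuntimeError("not enough blocks for the requested RF")

    for extent in range(4):
        print(f"extent {extent}: replicas on {place_extent()}")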
     
  19. millennia

    millennia Member

    Joined:
    Dec 10, 2013
    Messages:
    19
    Nutanix only good for 8 nodes???

    The largest Nutanix cluster is 2000 nodes.

    Think no more needs to be said there... :leet:
     
  20. GiantGuineaPig

    GiantGuineaPig Member

    Joined:
    Oct 23, 2006
    Messages:
    4,027
    Location:
    Adelaide
    If I had 2000 nodes running, I'd be able to afford a dedicated storage guy who would probably both prefer and have the ability to better manage dedicated SANs with a bit more flexibility.

    My understanding on cost was that Nutanix is cheaper because of the savings from not needing a SAN.

    Talking generally of course, there's never a 'one ring to rule them all' type rule.

    Edit: If you're a Nutanix employee you should say so, since you've signed up and only spoken in this thread :)
     
