Ok, some great questions... and sorry this post is long. The key is that Nutanix is a software company - the intelligence is entirely in software (inside a VM running on each and every node, which also makes us hypervisor agnostic). Fundamentally, Nutanix believes:

1. The convergence of the compute and storage tiers makes sense (especially for performance and simplicity of administration - no more 'islands' of storage and blades etc). Sure, the Nutanix software in the 'controller VM' uses some of the compute resource, but at least now that same resource can be shared between infrastructure and workload VMs. It's the same reason "compute" virtualisation became popular (using excess CPU capacity to share amongst apps/servers at the same time)... now you can do the same for the storage tier.

2. A highly distributed, shared-nothing software model makes sense (ie. each 'node' is independent of the others, but they all work together).

3. Commodity hardware wins, Ethernet wins. Let the software 'work around' failures if/when they occur - who cares about the hardware in that sense. A Google data centre doesn't care about node failures and nor should yours.

4. A pure scale-out architecture makes sense - start small and grow only when you need to.

Fundamentally, since our founders came from Google, they've taken this Google-like scale-out model and brought it to enterprise virtualisation... with no need for a "traditional" or, dare I say it, "legacy" SAN.

If you have the time, watch this (16 mins): http://www.youtube.com/watch?v=XG81gi4pTI4
If you have less time, watch this (5 mins): http://youtu.be/nSqwAxhFpA8
If you are really buggered for time, watch this (110 seconds!): http://youtu.be/FYF234Bx3Pw

Come on - you can spare at least 110 seconds, right?! It could change your life...
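If it helps, the scale-out idea in point 4 can be sketched in a few lines of Python. This is purely a toy illustration - the class names and TB figures are mine, not anything from Nutanix:

```python
# Toy sketch of "pure scale-out": every node you add brings its own
# software storage controller, so controller horsepower grows in step
# with capacity. Illustrative only - not Nutanix code.

class Node:
    def __init__(self, name, raw_tb):
        self.name = name
        self.raw_tb = raw_tb
        # one controller VM runs on every node
        self.controller = f"controller-vm-{name}"

class Cluster:
    def __init__(self):
        self.nodes = []

    def add_node(self, node):
        """Grow one node at a time - no forklift upgrade."""
        self.nodes.append(node)

    def controllers(self):
        return [n.controller for n in self.nodes]

    def raw_tb(self):
        return sum(n.raw_tb for n in self.nodes)

cluster = Cluster()
for i in range(1, 4):                 # start small (3-node minimum)
    cluster.add_node(Node(f"node{i}", 5))

print(len(cluster.controllers()))     # 3 controllers for 3 nodes
print(cluster.raw_tb())               # 15 TB raw across the cluster
```

Add a 4th node and you get a 4th controller for free - that is the whole point versus a fixed pair of SAN heads.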
If you prefer to read, here is a blog post from our CEO: http://www.nutanix.com/blog/2013/06/01/software-defined-storage-our-take/ In that post he also explains, for those wondering, why we bundle hardware with our smart software.

Here's a pic of what makes up a Nutanix "2U block" of 4 "nodes". Note that there is no 'back plane' here - each node communicates with the other nodes (no matter how many) via Ethernet, normally via top-of-rack switches. Yes, you need 3 Nutanix nodes as a minimum config in the 2U chassis. Once you start with 3, you can add extra nodes one at a time, forever. Remember, each time you add a node you are essentially getting another storage controller (which is in software!). Have 50 nodes? You now have 50 controllers... all working together and making your life easy. Go and have a beer instead of worrying about LUNs, volumes, growth, outages, performance drops etc.

This concept of software controllers also unlocks the power of "data locality" - essentially the data associated with your VM now 'follows' the VM around the cluster if the VM moves, because the controller VMs are aware of both the compute and storage tiers... This is a critical point. Josh does a good job of explaining it here: http://www.joshodgers.com/2013/09/19/data-locality-why-is-important-for-vsphere-drs-clusters/ This 'data locality' also ensures performance at scale remains as good as it was on day 1. See some slides here: http://imgur.com/a/pPheb

In terms of usable space, I mentioned it on page 1 of this thread, but essentially it's about half the formatted capacity of the disks available to the entire cluster (because there is a 2nd copy of all data in the cluster - so if you write 1MB of data, a 1MB copy is stored elsewhere in the cluster for a total of 2MB 'used'), minus some for our software of course.
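For the back-of-the-envelope types, that usable-space rule of thumb is easy to put in a few lines. The per-node software overhead figure below is a placeholder I made up for illustration - size properly with your Nutanix SE:

```python
# Rough usable capacity with two copies of all data (so 1MB written
# = 2MB 'used', as above). The 0.5 TB/node software overhead is an
# illustrative guess, not an official Nutanix number.

def usable_tb(raw_tb_per_node, nodes, copies=2, overhead_tb_per_node=0.5):
    """Formatted capacity, minus a software overhead allowance,
    divided by the number of data copies kept in the cluster."""
    raw = raw_tb_per_node * nodes
    return (raw - overhead_tb_per_node * nodes) / copies

# e.g. 4 nodes with 5 TB of formatted disk each
print(usable_tb(5, 4))  # 9.0 TB usable out of 20 TB raw
```

As you can see, you land at a bit under half the formatted capacity, which matches the rule of thumb above.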
Want more technical detail? Check Steve Poitras' page here: http://stevenpoitras.com/the-nutanix-bible/

I just did a deployment today where the customer did most of the work, and it took about 45 minutes (from him never having seen it before). There are videos online where people do it in under 15 mins. At the end there were 4 ESXi hosts, each 'seeing' an NFS datastore ready for VMs - all in a 2U appliance. Happy days. You could have a complete virtualisation project ordered and deployed within a few weeks (and most of that time is the block clearing customs!).

The great thing about being 'software-defined' is that improvements in features, or optimisation of existing features, are delivered by software upgrades (with no cluster downtime too)... ie. in some upgrades we have delivered increased IOPS and reduced latency as the engineering gurus improve the code... all on the existing Nutanix hardware nodes. Cool.

Nutanix will be at vForum in Sydney later this month - please drop by if you want to see a Nutanix block in person and pick up a "No SAN" beer coaster. Shiny. Again, some great questions here. Good stuff.

Forgot to add:

Backups: Just use whatever you use now for VMs generally (assuming it is an IP-based solution). Veeam is a good example, but there are others.

RAID? No sir. The controller VM simply formats the disks. The controller VMs are in charge of replicating user data and re-replicating it after a failure (formatting the replacement disk when it goes in, for example) - so the process is quick. Re-replication of data happens automatically, so N+1 can in fact be restored automatically if you have enough nodes. Again, all this is based on how the big web boys like Google / Facebook etc do it.
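To make the "no RAID, just re-replicate" idea concrete, here is a toy sketch of the concept: keep two copies of each piece of data on different nodes, and when a node dies, copy anything that lost a replica onto a survivor. The placement logic here is deliberately dumb round-robin of my own invention - the real system is far smarter (see the Nutanix Bible link above):

```python
# Toy model of two-copy placement and automatic re-replication after
# a node failure. Conceptual only - not the actual Nutanix algorithm.

def place_two_copies(extent_id, nodes):
    """Put the two copies of an extent on two different nodes
    (simple round-robin for illustration)."""
    primary = nodes[extent_id % len(nodes)]
    secondary = nodes[(extent_id + 1) % len(nodes)]
    return {primary, secondary}

def re_replicate(placements, failed, nodes):
    """After a failure, copy any extent that lost a replica onto a
    surviving node, so every extent is back to two copies."""
    survivors = [n for n in nodes if n != failed]
    healed = {}
    for extent, replicas in placements.items():
        live = replicas - {failed}
        while len(live) < 2:
            live.add(next(n for n in survivors if n not in live))
        healed[extent] = live
    return healed

nodes = ["node1", "node2", "node3", "node4"]
placements = {e: place_two_copies(e, nodes) for e in range(8)}

healed = re_replicate(placements, "node2", nodes)
# every extent is back to 2 copies, none of them on the dead node
print(all(len(r) == 2 and "node2" not in r for r in healed.values()))
```

Because every surviving node can take part in the rebuild, healing is spread across the cluster rather than hammering one hot-spare disk, which is why it is quick - and why full resiliency comes back automatically if you have enough nodes.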