Hi John, thanks for jumping in and saying hi! I'll chat to my CEO about it and let you know. Happy to put something on gluster.org if the boss is cool with it. Well, I really appreciate it. Thanks so much.

Excellent news. I'll build this into the backup cluster tomorrow, do some testing, and migrate it to production based on the results. That'll be a huge boost for the Mac users.

I'm not sure. Gluster does allow a node to run multiple bricks, but I don't know whether it can import a brick that was previously owned by another node.

I don't really see the problem. You could definitely scrub the old brick, assign it to a new node, and initiate a self-heal. You'd be running on one copy of the data until the heal completes, but you could do all of that hot, on a live system, and the redundancy is still there. Also, Gluster is more of a scale-out NAS concept, whereas you're talking about LUNs from central SAN-style block-level storage being assigned to nodes. I'm not sure I'd back Gluster with SAN storage, to be honest; that seems kind of pointless (although technically doable).

Data is not striped on my setup. You can do that, but I don't. Files are written whole, in a distributed manner around the cluster. If I'm connected to Node 0 via NFS and request a file, I get it directly if it lives on Node 0; if it's on Node 6, it travels to Node 0 via GlusterFS and then to me via NFS. By comparison, if I connect via FUSE+GlusterFS, I get the file directly from the node it lives on, since I have connections to all nodes and the ability to query the DHT directly.

10GbE Ethernet over fibre, with SFP+ transceivers: http://en.wikipedia.org/wiki/Small_form-factor_pluggable_transceiver#SFP.2B We had a single box in the whole place that was CX-4, but thankfully it's been decommissioned. We had more problems with random packet loss on that one system than anything I've ever worked on. It was terrible. The replacement box is now fibre 10GbE SFP+ like everything else.
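The whole-file distribution and the NFS-vs-FUSE hop difference can be sketched with a toy hash-placement model. To be clear, this is not Gluster's actual elastic hashing algorithm; the node list, hash choice, and function names below are illustrative assumptions only:

```python
import hashlib

# Hypothetical node list; a real Gluster volume maps hash ranges to bricks.
NODES = ["node0", "node1", "node2", "node3", "node4", "node5", "node6"]

def locate(filename: str) -> str:
    """Toy DHT lookup: hash the file name to pick the single node that
    holds the whole file (no striping in this model)."""
    digest = hashlib.md5(filename.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

def fetch_via_nfs(entry_node: str, filename: str) -> list[str]:
    """An NFS client talks only to the node it mounted; a file living
    elsewhere takes an extra internal GlusterFS hop first."""
    owner = locate(filename)
    if owner == entry_node:
        return [owner]              # served directly by the mount node
    return [owner, entry_node]      # inter-node hop, then out via NFS

def fetch_via_fuse(filename: str) -> list[str]:
    """A FUSE+GlusterFS client queries the DHT itself and connects to
    the owning node directly, so it is always a single hop."""
    return [locate(filename)]
```

A file that happens to hash to Node 6 costs two hops when fetched over an NFS mount on Node 0, but only one hop via the native client, which is exactly the asymmetry described above.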