Overclockers Australia Forums

Old 14th December 2016, 3:23 PM   #31
looktall
Working Class Hero
 
Join Date: Sep 2001
Location: brabham.wa.au
Posts: 22,920

Quote:
Originally Posted by NSanity View Post
well, now. technically the data probably isn't lost.
they know exactly where it is, they just can't access any of it.

Old 14th December 2016, 3:26 PM   #32
NSanity
Member
Join Date: Mar 2002
Location: Canberra
Posts: 15,855

Quote:
Originally Posted by looktall View Post
well, now. technically the data probably isn't lost.
they know exactly where it is, they just can't access any of it.
I don't think they brought another SAN (or pair) in to restore to, with a plan to data-recover and then merge afterwards.

That PB is gone.

Old 14th December 2016, 3:28 PM   #33
elvis
Old school old fool
Join Date: Jun 2001
Location: Brisbane
Posts: 28,897

Quote:
Originally Posted by looktall View Post
they know exactly where it is, they just can't access any of it.
"chmod 777" am i rite?
__________________
Play old games with me!

Old 14th December 2016, 3:29 PM   #34
looktall
Working Class Hero
Join Date: Sep 2001
Location: brabham.wa.au
Posts: 22,920

Quote:
Originally Posted by elvis View Post
"chmod 777" am i rite?
isn't that how everyone does it?

Old 14th December 2016, 3:30 PM   #35
NSanity
Member
Join Date: Mar 2002
Location: Canberra
Posts: 15,855

Quote:
Originally Posted by looktall View Post
isn't that how everyone does it?
WEBSITE WORKS AGAIN!

Old 14th December 2016, 3:30 PM   #36
PabloEscobar
Member
Join Date: Jan 2008
Posts: 9,257

Quote:
Originally Posted by elvis View Post
"chmod 777" am i rite?
It was already 777'd, and someone opened an E-mail from "Ostraya Post" to see where their package was, and why it hadn't been delivered.

And now, someone in Russia has the keys to their encrypted data.

Old 14th December 2016, 3:32 PM   #37
NSanity
Member
Join Date: Mar 2002
Location: Canberra
Posts: 15,855

Quote:
Originally Posted by PabloEscobar View Post
It was already 777'd, and someone opened an E-mail from "Ostraya Post" to see where their package was, and why it hadn't been delivered.

And now, someone in Russia has the keys to their encrypted data.
omfg. If it was ransomware, I'll lose my shit.

That would just be too damned hilarious.

Pity that snapshots would simply let you roll it back... unless they'd changed enough data to expire the existing snapshots...
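
For what it's worth, on a ZFS-backed box that rollback is close to a one-liner. A rough sketch only, with a hypothetical pool/dataset and snapshot name, and assuming a pre-infection snapshot still exists:

Code:
# List the snapshots available for the dataset (names here are hypothetical)
zfs list -t snapshot -r tank/vmstore

# Roll back to the last known-good snapshot.
# -r also destroys any newer snapshots (i.e. the encrypted state), so it's destructive.
zfs rollback -r tank/vmstore@autosnap-2016-12-13-0300

The catch is exactly the one above: if the retention schedule has already expired everything from before the encryption, there's nothing left to roll back to.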

Old 14th December 2016, 3:43 PM   #38
link1896 Thread Starter
Member
Join Date: Jul 2005
Location: Melbourne
Posts: 354

Quote:
Originally Posted by looktall View Post
isn't that how everyone does it?
gui only I fear. console is where the devil hides



Old 14th December 2016, 8:24 PM   #39
Daemon
Member
Join Date: Jun 2001
Location: qld.au
Posts: 4,755

Anyone rolling out a SAN these days for large data sets is a chump. It's the same "nobody ever got fired for buying Cisco" category of thinking.

The rest of the world has moved on. There's a reason the Google / AWS world doesn't see these issues: they can't easily happen. SANs traditionally have 1-2 controllers per shelf, and despite all the marketing hype about distributed workloads, most don't distribute data integrity checking. If a controller goes rogue with corrupt data, none of the other controllers can verify it.

Well designed, distributed block storage takes care of this. Ceph (as one example) has multiple monitor daemons, and the more you have, the greater the fault tolerance / data security (a minimum of 3 is needed to form a quorum). These systems also typically run n+2 replication or erasure coding for block storage, which gives you the ability to recover if one or more nodes goes rogue.

That said, because governments and big enterprise love new tech, I expect they'll keep rolling SANs out for the next 10+ years.
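
To make the replication / erasure coding point concrete, here's a minimal sketch using the stock Ceph CLI. The pool names, placement-group counts and k/m values are purely illustrative, not recommendations:

Code:
# 3-way replicated pool: three copies of every object,
# writes still accepted while at least two copies are available
ceph osd pool create rbd-replicated 128 128 replicated
ceph osd pool set rbd-replicated size 3
ceph osd pool set rbd-replicated min_size 2

# Erasure-coded pool: each object split into 4 data + 2 coding chunks,
# so the pool survives losing any two of them
ceph osd erasure-code-profile set ec-4-2 k=4 m=2
ceph osd pool create archive-ec 128 128 erasure ec-4-2

Scrubbing is what actually catches a corrupt or rogue copy; the replication or erasure coding is what lets it be repaired from the remaining healthy chunks.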
__________________
Fixing the internet... one cloud at a time.

Old 14th December 2016, 10:44 PM   #40
wintermute000
Member
Join Date: Jan 2011
Posts: 749

How do you see distributed IP storage (Ceph/Gluster etc.) vs hyperconverged? Or is it all the same, just a question of whether you run it on dedicated metal vs sharing it with compute metal?

Is there ANY use case for a traditional SAN still? (Specific dedicated high performance? Mid-market, i.e. where rolling Ceph is too complex for a small IT team with relatively modest requirements?)

Quote:
Originally Posted by PabloEscobar View Post
Public Sector Mindset - As long as this is someone elses fault. It's all good.
Vendors know this, and accept the blame, because accepting the blame is why they bake in massive margins to their Public Sector quotes.
Yep. Have seen the inside of the beast up close and personal, and the incompetence/ass-covering/sheer wastage/complete amateur hour is mind-boggling.

Last edited by wintermute000; 14th December 2016 at 10:50 PM.

Old 15th December 2016, 9:09 AM   #41
Daemon
Member
Join Date: Jun 2001
Location: qld.au
Posts: 4,755

Quote:
Originally Posted by wintermute000 View Post
how do you see distributed IP storage Ceph/Gluster etc. vs hyperconverged, or is it all the same, just whether you do it on dedicated metal vs sharing with compute metal?
All the same, except hyperconverged is a more efficient use of space

Quote:
Originally Posted by wintermute000 View Post
Is there ANY use case for a traditional SAN still? (specific dedicated high performance? mid-market i.e. rolling Ceph too complex for a small IT team with relatively modest requirements?)
Even complexity isn't an excuse. VMware do hyperconverged systems now, so you have simple (albeit expensive) point-and-click options. If you need performance, you go Nutanix, so there's no excuse there either. There are plenty of other vendors for all of the above too; you'll find traditional SAN vendors scrambling for answers in this area very soon.
__________________
Fixing the internet... one cloud at a time.

Old 15th December 2016, 11:53 AM   #42
NSanity
Member
Join Date: Mar 2002
Location: Canberra
Posts: 15,855

Quote:
Originally Posted by Daemon View Post
All the same, except hyperconverged is a more efficient use of space


Even complexity isn't an excuse. VMWare do hyperconverged systems now, so you have simple (albeit expensive) point and click systems. If you need performance, you go Nutanix so there's no reason there either. There's plenty of other vendors for all of the above too, you'll find traditional SAN vendors scrambling for answers in this area very soon.
Agreed. This is where the market is going. Hyperconverged is the most efficient use of resources, and Microsoft have proved it's no slouch either: https://blogs.technet.microsoft.com/...spaces-direct/

I believe VSAN isn't quite as competitive as what MS is offering, and Nutanix doesn't do RDMA yet, so MS probably has the lead there too.

Old 15th December 2016, 2:33 PM   #43
scrantic
Member
Join Date: Apr 2002
Location: Melbourne
Posts: 1,629

Quote:
Originally Posted by NSanity View Post
Agreed. This is where the market is going. Hyperconverged is the most efficient use of resources - and Microsoft have proved its no slouch either. - https://blogs.technet.microsoft.com/...spaces-direct/

I believe that VSAN isn't quite as competitive as what MS is offering - Nutanix doesn't do RDMA yet - so MS probably has the lead there too.
Curious to know, are you using Storage Spaces Direct in production yet? It's something that interests me given we're going through a hardware refresh.
__________________
System| Intel Core i7-860 | Gigabyte GA-P55A-UD3P |
| Intel 530 180GB | 8GB Corsair DDR3 1333 |
| MSI GTX275 896MB| Antec P183 | Antec 750W PSU |
Storage Synology DS1511+ 4 x Hitachi 3TB Deskstar 5K3000

Old 15th December 2016, 4:44 PM   #44
GreenBeret
Member
Join Date: Dec 2001
Location: Melbourne
Posts: 19,377

Quote:
Originally Posted by wintermute000 View Post
how do you see distributed IP storage Ceph/Gluster etc. vs hyperconverged, or is it all the same, just whether you do it on dedicated metal vs sharing with compute metal?
Don't do hyperconverged unless you really hate yourself. When shit gets fucked up, which layer do you think caused the problem? It's hard enough troubleshooting a storage or compute cluster on its own.

The load on Ceph OSD processes is always going to be high, and that's gonna take away from your VMs running on the same host. Unless your VMs don't do much (in our major zones, they run at ~90% utilisation per core), compute performance will suffer.

Then you have the problem of not being able to scale compute and storage separately. Your compute-to-storage ratio won't necessarily stay the same in the future.
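
If you do end up co-locating OSDs with guests, one mitigation is to cap the OSD daemons at the OS level so they can't starve the VMs. A rough sketch using systemd resource controls; the unit name follows the standard ceph-osd@<id> pattern and the limits are illustrative only:

Code:
# See how full / how loaded each OSD currently is
ceph osd df tree

# Cap CPU and RAM for one OSD daemon (repeat per OSD id; values are examples, not recommendations)
systemctl set-property ceph-osd@0.service CPUQuota=100% MemoryLimit=4G

# Confirm the drop-in took effect
systemctl show ceph-osd@0.service | grep -E 'CPUQuota|MemoryLimit'

It doesn't fix the scaling-ratio problem, though; that one is inherent to hyperconverged.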
__________________
"If your family was captured and you were told you needed to put 100lb onto your max squat within two months or your family would be executed, would you squat once per week? Something tells me that you'd start squatting every day." - BrozScience

Old 15th December 2016, 6:57 PM   #45
NSanity
Member
Join Date: Mar 2002
Location: Canberra
Posts: 15,855

Quote:
Originally Posted by scrantic View Post
Curious to know you using storage spaces direct in production yet? Something that interests me given we're going through a hardware refresh.
We have some stuff up on 2016 S2D: ~40 VMs, a mix of Exchange, SQL, AD, RDS etc.

The box is basically our old ZFS box, and we get *loads* better performance out of it.

SMB3 works. SMB Direct works. S2D works.