Discussion in 'Business & Enterprise Computing' started by link1896, Dec 14, 2016.
I was painting you a picture. Use your imagination to interchange asteroid with whatever other DR scenario you can think up. All your examples can be protected against without tape, or at worst are no more susceptible than tape.
So use MD5 enterprise edition* or whatever technology floats your boat. It's just data in two locations, it's not hard to verify.
*not a real thing.
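Verifying two copies really is as simple as the post says. A minimal sketch (paths are hypothetical; SHA-256 is used here instead of MD5 since MD5 is collision-prone, but either proves the copies match byte-for-byte):

```python
import hashlib
from pathlib import Path

def digest(path: Path, algo: str = "sha256", chunk: int = 1 << 20) -> str:
    """Stream a file through a hash so multi-GB files don't blow out memory."""
    h = hashlib.new(algo)
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def copies_match(primary: Path, replica: Path) -> bool:
    """True when the local copy and the off-site copy are identical."""
    return digest(primary) == digest(replica)
```

In practice you'd run this (or your backup tool's built-in verify) on a schedule against the remote copy, not just once at write time.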
Data sovereignty means preventing your data from going outside of your own country. Plenty of cloud providers around that can satisfy that requirement.
10PB would hardly raise an eyebrow if you said that to any serious cloud provider. Again, on the DR side plenty of options to satisfy the situation. If you need minimal downtime then you're already running hot/hot or at least hot/warm georedundant sites. Much, much faster to get up and running again than restoring from tape.
Got crappy internets? If your business is like almost every business out there, the majority of your data hasn't been accessed in ages (apart from by your archaic tape backup routines). So the amount you need to restore to get your business running again is much more manageable. Plug in some tiering software and you're up and running almost instantly, as the data you pull back from the cloud is driven by usage, not by the sequence it was written to tape.
Internets still too slow or lost them completely? Use something like Amazon's snowball service to move large volumes of data around quickly and offline.
How do you back up to the cloud? Because if you replicate rather than archive, any production issue you encounter will be replicated too, making your data useless... help ATO.
What are your RTO and RPO constraints on these PBs of data that people are talking about?
Modern DR usually looks like this:
1. Data replication to a DR facility.
2. Data replication to offline storage, usually at the DR site but sometimes a third site.
3. Long-term archive storage, usually fed from the offline copy.
Becomes expensive when you want immediate recovery of archival data beyond the last 24 hrs.
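The three tiers above boil down to a recovery-source decision driven by data age. A toy sketch (the 24-hour boundary is from the post; the one-hour replica window is an assumption):

```python
def recovery_source(age_hours: float) -> str:
    """Pick which DR tier serves a restore, given how old the data is.

    Assumed windows: hot replicas cover very recent data, offline copies
    cover roughly the last day, and anything older comes from long-term
    archive, which is the slow and expensive path the post warns about.
    """
    if age_hours <= 1:       # replica at the DR facility
        return "replica"
    if age_hours <= 24:      # offline copy at the DR (or third) site
        return "offline"
    return "archive"         # archival tier: immediate recovery gets pricey
```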
Personally I have had failures with differential backup systems and lost data, so I prefer old-fashioned full backups: they have 50 years of proven reliability. Most horror recovery stories involve differential or partial backup systems and poor deduplication algorithms, which matters when you have collected PBs of data, because that collected data is usually your company's IP.
Finally, with big data, people have now come to realise that there is no such thing as too much data collection!
Work at a large backup software vendor; can confirm a big shift to cloud storage as a tape replacement over the last few years. Many customers are storing multiple PB of data.
Glacier is interesting, but you have to be crystal clear on the use case. This is absolutely cold storage that you never want to bring back, and if you do, not in a hurry unless you have deep pockets. Plenty of folks have been burnt by that offering, but if it fits the right profile the economics work well.
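The "never bring it back" economics can be made concrete with a back-of-envelope calc. Both per-GB rates below are placeholder assumptions, not quoted AWS pricing; the point is only that retrieval cost scales with what you pull back, so the numbers work when you rarely retrieve:

```python
def cold_storage_tradeoff(tb_stored: float, tb_retrieved: float,
                          store_per_gb_month: float = 0.004,
                          retrieve_per_gb: float = 0.01) -> dict:
    """Rough monthly cost split for a Glacier-style cold tier.

    Default rates are illustrative assumptions. Storage cost is flat
    and cheap; retrieval cost is zero until you actually restore.
    """
    gb = 1000.0  # TB -> GB, decimal units
    return {
        "storage": tb_stored * gb * store_per_gb_month,
        "retrieval": tb_retrieved * gb * retrieve_per_gb,
    }
```

At these assumed rates, storing 100 TB costs a few hundred dollars a month, but restoring all of it in one go costs more than two months of storage on its own.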
Ahahha, yes, I will ship 125 Snowballs to my office (note I said >10PB, not 10PB). And that's just the current, active dataset.
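The 125 figure falls out of simple arithmetic, assuming roughly 80 TB usable per Snowball (actual capacity varies by device generation):

```python
import math

def snowballs_needed(dataset_pb: float, usable_tb_per_device: float = 80.0) -> int:
    """Devices required for a bulk offline transfer of a dataset.

    80 TB usable per device is an assumption; Snowball capacities
    differ by generation, and '>10PB' only makes the count worse.
    """
    dataset_tb = dataset_pb * 1000.0
    return math.ceil(dataset_tb / usable_tb_per_device)
```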
In the event of a DR event, no one else would ever need 'Snowballs' either, so everyone will be waiting JUST FOR ME!!
Do you work in the real world?
here is a site:
Explain to me how Cloud backup will help them during a natural disaster.
And malicious hackers.
Fundamentally it comes down to the cost of non-compliance.
Quite frankly I don't think anyone qualified to speak about it could stand up in court and say "we did everything we could to ensure that the data was retrievable, your honour" if you are leveraging public or even secure cloud (or these fancy newcomer SMR drive arrays).
At least not when, as we keep saying, tape has a proven reliability and recoverability track record spanning half a century, significantly longer than almost any of the cloud guys have existed.
If the business is prepared to just wind it all up and say "peace out" - then ultimately who cares. The risks were presented and the business took that direction.
I work with some industries where people who might not have worked for a company for 50 years could be called up to prove that they did X or Y, and if it's really bad, they could go to jail as a result. Are you really going to trust these people's future and livelihood to some cloud vendor?
Peering, i.e. from your DC kit you can connect to every major ISP / CDN / cloud environment for peanuts a month.
~10 years working for big fed gov, you're lucky any of my posts aren't simply keywords and aspirations
But yes, 3 orgs I've worked with ranging from 1PB to 4PB are zero tape.
Again, 90% of all the failures from the floods were in-house DCs which had no redundancy and no planning. The big shift happened because big corps had no redundancy within their own systems.
AWS is for chumps who don't know how to Google. There are dozens (if not hundreds) of vendors who are cheaper than AWS, government-certified and have been around far longer than AWS. But having seen the actual kit behind Glacier, it's disk... looots of disks.
In the case of the BNE floods, they literally forklifted their existing kit into a DC which had one of those magical features such as working electrons. This feature became quite topical when many orgs realised their switchboards were in the basement.....
If it's not dead, it's on life support and people are standing around hoping it'll be ok. It may not die 100%, but in terms of death the doctors are busy gesturing at their throat
Async, not sync. SAN sync replication should stay local and is never a DR policy; DR needs some form of segregation.
Good to see I'm not the only one seeing the shift. The force is real.
I find it funny when people distrust the "cloud" yet somehow trust their office. Have a look at the security policy of most cloud providers: it absolutely destroys most SMBs/SMEs. Factor in the security and multiple layers of entry to a DC and, from a security perspective, good cloud providers absolutely decimate the security posture of a typical SME.
I can, do and will. Our systems store a minimum of three copies of every block on three different systems in real time, all automagically. With our current system, we'd need to lose ~10 servers or 50 drives (at once) before we'd need to look at reverting to backups. If we used erasure coding we could probably save a bit more space again. Our most basic storage option, designated for backups only, is at minimum double parity, and even that is being replaced with further redundancy. Drives are cheap.
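The space saving from erasure coding mentioned above is easy to quantify. A sketch comparing the described 3-way replication with a hypothetical 10+4 erasure-coding layout (the 10+4 parameters are an assumption, not the poster's actual scheme):

```python
def raw_tb_needed(logical_tb: float, scheme: str) -> float:
    """Raw capacity required to protect a logical dataset.

    '3x' is straight three-copy replication as the post describes.
    '10+4' is an assumed erasure-coding layout (10 data + 4 parity
    shards): it survives any 4 simultaneous shard losses, yet needs
    only 1.4x raw capacity instead of 3x.
    """
    if scheme == "3x":
        return logical_tb * 3.0
    if scheme == "10+4":
        return logical_tb * (10 + 4) / 10
    raise ValueError(f"unknown scheme: {scheme}")
```

For 100 TB of logical data that's 300 TB raw versus 140 TB raw, which is where the "save a bit more space" comes from; the trade-off is rebuild cost and CPU on reads.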
We complete risk docs for state and federal departments all the time, as well as a few very large companies. It's never been an issue.
Basically with in-house IT you are trying to replicate a datacentre in your office. It's just dumb, since there are plenty of datacentres built by people much smarter than you. It's just arrogance to pretend to know more than them.
We've got thousands of clients (ranging from single site with a couple of users to multi-site with thousands of users), many still on in-house setups (O365 excluded; planned, deployed and in use for years now)... Bandwidth is expensive as fuck, and even dropping four figures on a net connection per month is barely going to get you 20 Mbit upload.
We're paying about $2.5k per month for a 6 Mbps fibre link from Telstra at the site I work at.
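At those link speeds the offline-transfer argument makes itself. A back-of-envelope calc (the 0.8 efficiency factor is an assumed protocol/contention haircut):

```python
def upload_days(data_tb: float, link_mbit: float, efficiency: float = 0.8) -> float:
    """Days to push a dataset over a WAN link at a given speed.

    efficiency models protocol overhead and contention; 0.8 is an
    assumption, real links often do worse.
    """
    bits = data_tb * 1e12 * 8           # TB -> bits, decimal units
    seconds = bits / (link_mbit * 1e6 * efficiency)
    return seconds / 86400
```

Even a single TB takes the better part of a week on a 20 Mbit uplink, and a 6 Mbps link is three times worse again; seeding PBs over links like these simply isn't viable without trucks.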
So step up and get an AWS Snowmobile. Hell, get a convoy of them. Really, unless your business is also a successful public cloud provider, your definition of scale is completely different to theirs.
At the risk of repeating myself, when a natural disaster takes out the site, their data and systems can still be accessed from anywhere on the globe with an internet connection.
Store your data in a system with the appropriate safeguards like two factor authentication for privileged functions and it gets much harder for hackers to compromise. Make it meet compliance standards like SEC 17a-4 or similar and then not even the sys admins can tamper with the data.
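One way to get that "not even the sys admins" property on AWS is S3 Object Lock in compliance mode, where retention cannot be shortened even by the root account. A hedged sketch that only builds the request parameters (bucket, key and the 7-year retention are placeholder assumptions; the dict matches what boto3's `put_object` accepts, no API call is made here):

```python
from datetime import datetime, timedelta, timezone

def worm_put_request(bucket: str, key: str, retain_years: int = 7) -> dict:
    """Parameters for an S3 put with Object Lock in COMPLIANCE mode.

    In compliance mode the retention date cannot be reduced by anyone,
    which is the tamper-proofing regimes like SEC 17a-4 are after.
    All names here are illustrative; pass the dict to boto3's
    s3.put_object(**params) along with a Body.
    """
    retain_until = datetime.now(timezone.utc) + timedelta(days=365 * retain_years)
    return {
        "Bucket": bucket,
        "Key": key,
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate": retain_until,
    }
```

The bucket itself must be created with Object Lock enabled for these parameters to take effect.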
You can reduce labour if you go cloud, plus capital savings, plus utility savings. The internet will always be cheaper overall.
Right up until the first time your business' internet connection gets cut by Bob the Builder digging before dialling, and your entire business grinds to a halt because everything is now just in "the cloud", i.e. someone else's data centre.
Yup, true. Datacentres are all well and good, but we still have a tape unit and a few servers in the building. Handy for when the main Syd-Can link goes down and the secondary link starts running like a turd, exactly like what happened last month, and it lasted over a week.
as long as you are thinking about all these factors you are doing better than 80% of the dodgy local IT people around the country.
HPES do try, sometimes I wonder if the problems they have are caused by absorbing EDS.
It wasn't that long ago that the entire Northern Territory was cut off from communications with the rest of Australia either...
No no no no. You can reduce time spent on infrastructure management, but 90% of the time you won't save money. If you move to the cloud expecting to save money, you'll be in for a rude shock.
This is a fallacy too. It's only applicable if you're thin-client based or using some form of remote app, and even then there's more chance of internal failure than of remote issues. If you have local infrastructure which is "critical" then the same risks apply to "cloud"-based systems.
We're 95% "cloud" based for our company, and thanks to the NBN (more failures on fibre than the DSL and midband ethernet) we've reverted to 4GX or similar 7 times this year. Zero productivity loss, especially with a router which can auto cutover. Bigger companies could easily afford redundant internet access via two different means. Meanwhile, we haven't lost connectivity to our DC for more than an hour in well over 6 years.
If the trucks can swim it will be helpful. If they can't... well its going to take a while to ship them over the ocean.
You repeat yourself because you don't read. When a natural disaster takes out their site, computers don't matter. For all the other scenarios, where the site still functions, what do they do? Try looking at the link and realising that they aren't hosting pictures of cats for people to look at. They are internally servicing people. When the links don't work, they still need to work. How will you fix that for them? (I used them because it seemed like a really simple example... obviously not simple enough.)
Having it offline would be a little smarter, wouldn't it? But now you are referencing SEC standards, so I think the discussion is done... you're just doing the Google on 'computer stuff' and posting links.