Discussion in 'Business & Enterprise Computing' started by link1896, Dec 14, 2016.
Yes. And have your weekly backups take a fortnight.
If you do not verify your backups by doing full recovery testing on them, then you have not backed up, regardless of media used! THIS IS NOT A NEW CONCEPT, IT HAS BEEN STANDARD PRACTICE FOR THE PAST 50 YEARS.
For the life of me I cannot understand how people have so much trouble with backups. Disaster recovery is IT 101 and should be the first and highest priority for any organization. Anyone who is responsible for IT should know that their recovery points work and are not compromised; otherwise they are shit and should find a new vocation.
Sure, spin up some VMs in said cloud and connect to the data to confirm all is OK. Beats playing with tapes. As an added bonus this doubles as the DR plan that ensures systems & data are still available for that time when an asteroid takes out your primary site and there are no more tape drives or servers around to recover the data to. Sure, your throughput may be down and latency up depending on your network connectivity, but it's better than nothing while you get your house back in order.
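If you wanted to automate that smoke test, a rough sketch in Python/boto3 might look like the below - the AMI ID, region and instance type are placeholders, not anything specific to this thread. Boot a VM from the restored image, wait for it to come up, then go poke the data:

# Hypothetical DR smoke test: boot an instance from a restored image,
# wait until it's running, then connect and verify the data on it.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")  # assumed region

resp = ec2.run_instances(
    ImageId="ami-xxxxxxxx",   # placeholder: your restored AMI
    InstanceType="t2.large",  # placeholder sizing
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]

ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
print(f"DR test instance {instance_id} is up - now verify the data on it.")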
Or have your backup software do some MD5 checksums to verify data integrity.
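Presumably something along these lines - a minimal sketch of post-backup checksum verification (paths and layout are made up for illustration; swap "md5" for "sha256" if tampering rather than bit rot is the concern):

# Hash each source file and its backed-up copy, flag any mismatch.
# Paths are hypothetical; md5 here only detects accidental corruption.
import hashlib
from pathlib import Path

def file_digest(path, algo="md5", chunk=1024 * 1024):
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

src, dst = Path("/data"), Path("/backup/data")
for f in src.rglob("*"):
    if f.is_file():
        copy = dst / f.relative_to(src)
        if not copy.is_file() or file_digest(f) != file_digest(copy):
            print(f"MISMATCH or missing: {copy}")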
Weekly backups lol. You're doing it wrong.
Asteroids are possible, but what's far more likely are:
* Geopolitical blockades
* Industrial espionage
* Mass exploitation of a 0-day, potentially corrupting everything
* Destruction/offlining of two data centres by separate events (Christchurch's backup emergency DC was in Brisbane, which flooded)
If you don't have some offline tape backups nearby, you're just as fucked.
Ps: MD5? For enterprise?! Really?!?!
Data sovereignty laws prevent any Australian citizen related data being 'on the cloud'. So the only cloud stuff the ATO is thinking about is for the home page - generic information, none of the tax data or processing. Which, in the scheme of things, is a tiny portion of the data made unavailable by the SAN outage.
Shouldn't their SANs have some snapshotting ability?
So if it goes rogue at some point, just revert to yesterday's snapshot and all good - no need to have a week-long outage.
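For illustration, the revert being described is trivial on anything with real snapshots. Here's a sketch using ZFS via Python, purely as a stand-in for whatever the SAN vendor's own CLI/API would be (the dataset name is made up):

# Daily snapshot + rollback sketch. 'tank/data' is a hypothetical ZFS
# dataset; enterprise SANs expose the same idea through their own tools.
import datetime
import subprocess

def take_snapshot(dataset):
    snap = f"{dataset}@daily-{datetime.date.today():%Y%m%d}"
    subprocess.run(["zfs", "snapshot", snap], check=True)
    return snap

def rollback(snap):
    # -r discards any snapshots newer than the one being reverted to
    subprocess.run(["zfs", "rollback", "-r", snap], check=True)

snap = take_snapshot("tank/data")
# ...storage goes rogue overnight...
rollback(snap)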
Doesn't help if they're writing corrupt data - for who knows how long.
Backups are ALWAYS required. For when everything turns to shit (like it has here) you need something safe to go back to, even if it's old/slow to do so.
SANs, etc. just add redundancy - and they did a good job of making the corrupt data redundant too. Redundancy is not a backup.
AWS is certified and is being used by the Australian Government.
Only for limited uses (hosting websites); again, nothing national security or privacy related will be on there.
There are also 'clouds' run by the likes of Fujitsu and others that have government certifications, but only for limited use.
Anything requiring any security is still hosted in-house.
Whole bunch of unclassified shit there.
AKA fuck all.
As I said, nothing that isn't public already.
Thinking ATO could just 'flick the switch' on their in-progress cloud migration would've got the home page running (maybe). That's it.
mmm... merely reinforcing your statement
Yeah, don't worry about testing them. Put complete faith in someone else.
Please let me know how you would like me to back up my >10PB tape library to the 'cloud'.
How would you like me to restore it when I need it... you know, like fast... after an outage... or when I have no internet links... but my business still needs to fucking operate? Like a natural disaster (hellloooo Queensland floods).
Tape is not dead, and your comments just reinforce that people who say it is don't know what they are talking about.
Sorry, neither of you is fully correct. Unclassified DLM allows for... DLMs. Which include FOUO and Sensitive:*
Which includes Sensitive:Personal, e.g. PII - thus personal data is allowed on AWS et al.
The independent review doesn't bode well for HPES
I've worked with some very big companies who back up to the cloud and have data in the PB range... it's easy when your primary data is already in the cloud / in a DC. You can get a 10G peering link for $500 a month; high bandwidth isn't an issue these days.
Interesting that you raise the QLD floods as a downside; it was a massive boon for IaaS and DC providers. The DC we work with ran 24/7 for many days simply to cope with the amount of kit coming in. With the number of buildings without power (and so many who had substations in basements), the move to a DC or cloud provider who had real redundancy and proper planning was massive.
Old school IT and big enterprises who change at a rate which makes government look efficient will always be on tape. There is a big shift to disk - unless you run HPE storage, that is.
$500/month for 10G in Australia? $500 barely buys 10G hardware...
Maybe a cross-connect in a DC.
You have weasel words there (I spend too long reading legal documents...). They have PB of data and back up to the cloud... but do they back up all their PBs of data to the cloud? Do none of those companies use tape? If they still have tape...
You are right, link capacity may not be an issue - until you need a large restore (a week+ to transfer your data over a 10Gb link isn't a good recovery time).
When you need to work, and you need your data, it's not helpful having it in a DC you can't connect to - hence the floods or earthquake example.
Yep, a 10Gb/s connect is cheap on AWS, but that doesn't include transit (outbound)... AWS is like 5c per GB, so $50K just to transfer the data back, once. Azure is significantly more expensive.
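Quick back-of-envelope on both points, taking a round 1PB and the numbers quoted above (5c/GB egress, a fully saturated 10Gb/s link - both assumptions):

# Restore maths for 1PB: 1e6 GB over 10Gb/s, plus egress at $0.05/GB.
data_gb = 1_000_000                  # 1PB expressed in GB
seconds = data_gb * 8 / 10           # GB -> gigabits, divided by 10Gb/s
print(f"Transfer time: {seconds / 86400:.1f} days")  # ~9.3 days
print(f"Egress bill:   ${data_gb * 0.05:,.0f}")      # $50,000

So even a dedicated, saturated 10Gb/s pipe puts a full 1PB restore well past a week, before you've even paid the egress bill.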
Wouldn't it be funny if AWS Glacier was backed by tape.. or optical and not disk........
What kit do I wheel in to the DC? I don't own any backup kit anymore... because it's in the cloud. I can't really pop down to MSY and buy a few thousand disks and fling them on the ute. And even after I configure it all... I can't connect to it from the primary sites...
Never debated that there's a shift - there certainly is. I debated that tape is dead. It's dying, but far, far from dead - and 'cloud' is not a drop-in replacement.