Discussion in 'Business & Enterprise Computing' started by elvis, Jul 1, 2008.
That's a huge assumption....that they have in fact learned anything.
they learnt that it's the nerds' fault, goddamn nerds, what do we pay them for?
Well look, if the nerds are sitting around saying shit like "not my problem, HR problem", and then not engaging with, talking to or advising HR, then it is in part the nerds' fault.
These hard demarcation lines people choose for themselves don't help. Companies work together to achieve things, and silly tribes inside them don't solve cross-department problems.
Two things stagger me. Firstly, that a company of that size and value, which uses cloud services integration as a big selling point of its hardware, can be so internally haphazard that an attack can decimate them like this.
Secondly, that their share price has held up so well in the face of gross incompetence.
All they do is complain that the Pentium 2 domain webserver and TP-Link router are not good enough. Have you heard the budget they want just to get IT up to half-arsing things?
Doesn't surprise me. Some of the charges we have argued over in the past:
- Services we never agreed to (turned out they were signed up in-store by an employee's wife, with no intention of signing them up for the business; the only business detail given was her husband's mobile number, and he's just an employee with no rights on the account). Telstra's answer was that it shouldn't have been able to happen, but that they have "sufficient" protection in place to prevent it happening... despite it happening.
- Services that were cancelled yet kept billing (that one took 1.5 years to resolve).
- Services they hadn't yet delivered but had started billing for (plus a current argument over a service that's still yet to be delivered, which we've been billed on for around a year).
- Even a service we're still being billed for in a Telstra DC they demolished at the start of the year.
Completely understand (and largely agree with) your skepticism, but there is a somewhat remote chance that they do have adequate offline/read-only backups etc., and that the delays in recovery are more about making sure they find and close every hole that let the attackers in before they flip them the bird. It might also be a case of ensuring they can recover all their data before making a big deal of it, since an attacker who knows you've exhausted your other options is more likely to ask for more.
Looking at the details around the ransomware used in their case, it appears to be a far more sophisticated attack, not your usual scan-for-open-RDP-and-pwn job.
Sometimes, I quietly say a little prayer before bed that when I wake up you'll be my company's new CEO and everything is going to be OK from now on.
Garmin are still down.
I'd hazard a guess that the one Garmin IT neck-beard who was in charge of backups was also tasked with sales approx 3 years ago, and didn't tell the desktop support apprentice who replaced him about some important servers that he was the proud new owner of.
Post your links if you've got them.
Reddit says they've lost online backups but from the outage time you would have guessed that anyway.
Getting Karen to put her password into a link opened from Invoice.pdf isn't "far more sophisticated".
It is, however, "far more successful".
Well, that would have been step one. Or a USB stick in the parking lot labelled "awesome horse pr0n".
That gets you in. Then you spend a while poking around to see what's what and trying commands, and maybe an ATM one state over goes berserk and spits out a heap of cash...
Traversal between fitness, marine and aviation divisions? WTF? *facepalm*
Can't wait for the PIR for this one
root cause: all our systems rely on this one WinXP SP1 box running an Access database that no one understands any more.
According to this thread:
Additional info here: WastedLocker crypto by the Russians.
Seems they forgot to do backups again. It's pretty simple stuff.
It will be interesting to know how it got in, if they ever release that. What I would like to know is who thought it was a good idea to have web-based services on the same directly accessible network as the office stuff. Basic network design should have separated the networks, with access restricted to jump hosts or super locked-down VPN connections that are not always on.
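To spell out the jump host bit, since it comes up a lot: internal boxes only accept admin connections from one hardened host, and everything else (office LAN included) gets dropped. A rough Python/paramiko sketch of the hop, with every hostname, IP and username made up for the example:

```python
# Sketch only: hop through a jump host to reach an internal server.
# Hostnames, IPs and usernames below are invented; assumes key-based
# auth via your ssh-agent and that paramiko is installed.
import paramiko

# 1. Connect to the hardened jump host (the only box admins can reach).
jump = paramiko.SSHClient()
jump.load_system_host_keys()
jump.connect("jump.corp.example", username="ops")

# 2. Open a tunnel from the jump host to the internal server.
channel = jump.get_transport().open_channel(
    "direct-tcpip", ("10.20.0.15", 22), ("127.0.0.1", 0)
)

# 3. SSH to the internal server *through* that tunnel.
inner = paramiko.SSHClient()
inner.load_system_host_keys()
inner.connect("10.20.0.15", username="ops", sock=channel)

stdin, stdout, stderr = inner.exec_command("hostname")
print(stdout.read().decode())
```

The point being that the internal server's firewall only ever needs to allow SSH from the jump box, not from the office network, and certainly not from the internet.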
You know what's rad? ZFS read-only snapshots. Not even root can encrypt them.
You know what's not rad? Businesses that don't use read-only snapshots.
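For anyone wondering what that looks like in practice, here's a rough sketch (made-up dataset name, assumes the zfs CLI tools are on the box): the snapshot itself is immutable, and the hold stops it being destroyed out from under you.

```python
#!/usr/bin/env python3
# Sketch only: cut a ZFS snapshot and pin it with a hold.
# "tank/data" is a made-up dataset name; assumes the zfs CLI is installed.
import subprocess
from datetime import datetime, timezone

DATASET = "tank/data"  # hypothetical dataset

def snapshot_and_hold(dataset: str) -> str:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    snap = f"{dataset}@backup-{stamp}"
    # Snapshots are read-only by design: nothing on the host can modify them.
    subprocess.run(["zfs", "snapshot", snap], check=True)
    # A hold blocks 'zfs destroy' on the snapshot until the hold is released,
    # which also covers the "ransomware deletes your backups" scenario.
    subprocess.run(["zfs", "hold", "keep", snap], check=True)
    return snap

if __name__ == "__main__":
    print("created and held:", snapshot_and_hold(DATASET))
```

zfs send the snapshots to a separate box as well and a trashed production pool stops being a company-ending event.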
I'd guess someone has pulled the plug on the whole thing while they roll incident response.
This type of response just makes me sad about the state of IT. This is a company with the resources and size that should have had its customer-facing infrastructure in an auto-scaling environment, with no direct access to customer data from any office. If it did get hit, you blow the whole lot away, restore the DB and let the auto-scaling servers reinstate from image; worst case you lose a couple of minutes of data and have a downtime of only an hour or so.
If they had done it right in the first place they would just be restoring back-office stuff, and customers would only have had a short period of reduced end-user support while they rebuilt the network from backups and switched over to their planned off-site DR location. Instead we see people doing panicked hard shutdowns of servers and data centers.
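No idea what Garmin's stack actually looks like, so purely as an illustration of the pattern (AWS via boto3, with placeholder image and subnet IDs): the front end launches from a known-good image and the scaling group replaces anything you terminate, so "recovering" the web tier is just cycling instances and restoring the database behind it.

```python
# Sketch only: stateless front-ends that rebuild themselves from an image.
# AWS/boto3 is just an example stack; the AMI, template and subnet IDs are
# placeholders, not anything Garmin-specific.
import boto3

ec2 = boto3.client("ec2")
asg = boto3.client("autoscaling")

# Launch template pinned to a known-good, patched image.
ec2.create_launch_template(
    LaunchTemplateName="web-golden",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",  # placeholder golden image
        "InstanceType": "t3.medium",
    },
)

# The auto scaling group keeps N copies of that image running; terminate a
# compromised instance and it gets replaced from the image automatically.
asg.create_auto_scaling_group(
    AutoScalingGroupName="web-frontend",
    LaunchTemplate={"LaunchTemplateName": "web-golden", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=4,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",
)
```

Customer data lives only in the database tier, which is the one thing you actually have to restore.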
you know what's rad? Using rad in 2020.
"But we've always done it this way".
"It's been a slow transition".
"We couldn't change all systems because of legacy LOB applications and business requirements".
"It's a HR problem not a tech problem".
"We followed best practice and engaged with magic quadrant vendors".
Blah blah blah... I'm sure all the excuses were there.
(Given the last-century business models and excuses on display, why not use last-century language to match?)
Someone at Garmin needs to read some Wikipedia.
Edit: yes, yes, I know offsite != offline, but who in the hell backs up to the same site as production? I mean... apart from Garmin?