Discussion in 'Business & Enterprise Computing' started by elvis, Jul 1, 2008.
I guess your job isn't just designing office networks.
That's actually a great response that helps me see where you're coming from, and I'm going to let that line of thought ferment for a while to see if it keeps making sense.
I haven't heard of any real-world exploitation of any speculative execution vulnerability. Not to say it hasn't happened, but it's damn hard to pull these things off.
Secondly, the only scenario in which these attacks make any sense (to me) is sneaking information across a sandbox boundary on a shared host. If you own all the servers and all the systems set up on them, then speculative execution isn't an issue for you. If there is malicious code running on your stuff, speculative execution isn't what it's going to use.
If you are a cloud service provider allowing 3rd-party users to run their own code on your servers, that is when you are vulnerable. I don't need my home lab robbed of its CPU performance to fix vulnerabilities that don't apply to it. Big finance probably doesn't need its CPU performance taken away either, if it runs its own dedicated server infrastructure, or, where it's 3rd-party hosted, is allocated dedicated machines.
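To make the "sneak information across a sandbox boundary" point concrete, here's a toy illustration of the *class* of leak these attacks belong to (a timing side channel), not a real speculative-execution PoC. The function name and the step-counting trick are mine for illustration; real attacks observe wall-clock or cache timing rather than an explicit counter.

```python
# Toy timing side channel: a byte-by-byte compare with early exit.
# We count comparison steps instead of measuring wall-clock time so
# the "leak" is deterministic and easy to see.

def check_secret(guess: str, secret: str = "hunter2"):
    """Return (matched, steps). 'steps' stands in for elapsed time."""
    steps = 0
    for g, s in zip(guess, secret):
        steps += 1
        if g != s:
            return False, steps  # early exit leaks how far we got
    return guess == secret, steps

# An attacker who can observe 'steps' (in reality: timing) learns how
# many leading characters matched, and recovers the secret one char
# at a time -- without ever crossing the sandbox boundary "legally".
_, a = check_secret("xxxxxxx")   # wrong first char
_, b = check_secret("huxxxxx")   # first two chars correct
```

Same shape of problem: the hardware (or here, the code path) does different amounts of work depending on secret data, and that difference is observable from the other side of the boundary.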
OK, so is your business not applying these patches in order to keep performance up?
I know zero businesses who haven't put these patches in. Regardless of "speculation", the patches come down the pipeline supplied by our vendors, and we install them.
I repeat what I've said a few times now - the technical masturbation of this is where most "nerd discussions" end up on the topic. And that's fine, this is a nerd forum, and we're all nerds. But the fact remains that for the current model of an average computer and/or compute node - a general purpose CPU that runs 100% of things, from OS to userspace and back - our constant drive for performance has required ever-increasing complexity, which has obviously brought security risks (every growth in complexity brings a drop in security; that's a fact of life even outside of computers, and has been a discussion point for philosophers since the dawn of written history).
And again, none of that is incompatible with the idea that we move the general purpose "traffic cop" tasks an OS needs to do somewhere else, and leave the bitchinfast things the userspace applications need to do locked up in a cage where all it can do is destroy itself, and not take out the whole OS and everything else running on it with it.
There are far more examples than "cloud" where the userspace applications of more than one person run concurrently.
You don't have to. You could have a smaller CPU doing security-critical management tasks, and a bigger CPU doing the grunty stuff - driving your storage, games, whatever.
This architecture isn't unique, new or mind blowing. We've seen this across dozens of different types of devices old and new (hell, my "console baby" PlayStation does this - though more for power reasons than security). Just not in the modern home PC yet.
I see the point to having separate processors for user land and kernel/security land. I think that there's enough demarcation to make it quite possible and fairly transparent to most people.
To me it makes sense. The Intel Management Engine / AMD ST is pretty much a small CPU which does nefarious things for the NSA or whatevs. It's allegedly not accessible by the OS, but the idea of having a separate processor for security or privileged tasks has been implemented to at least some level for a decade in consumer equipment.
It's not perfect, and it never will be, but having that dedicated processor for secure or kernel tasks means you can have a sloppy, fast and loose processor for your games ... umm sorry, your spreadsheets, which can have all the flaws in the world, but that'll stop a lot of attacks.
We kind of already have this - nobody plays games without a GPU in 2020, so it's not like you're running 100% of game code on the one and only processor in your computer.
The argument a page back was "but RDMA!". Let us not forget Nvidia acquired Mellanox this year. I foresee integrated GPU+RDMA devices in our future (and if not all in one package, certainly better integration between the two, given Nvidia's massive push into HPC), and something that dramatically reduces the need for the CPU that does more secure ring0/ring-1 things to need to do all the user-space things at the same time.
Whilst I agree in theory, who is willing to pay the performance penalty for the latency of the security chip scanning the OS/CPU chip, given they are likely to reside on different physical dies? Not to mention the extra $$$ for this, and then the programming time. Shit, half of the programs out there can still barely multithread properly, 20+ years after this became a pretty bog standard thing. Who is gonna make programmers split their programs up in a fast manner?
And I agree with what you posted in theory, but here we are in 2020 with 20+ core gaming boxes and crazy complex GPUs (and exactly the same shit on the supercomputers I babysit, which is really fucking weird when you think about it), when I remember a time when people distinctly said multi-core was a waste of time and all we needed were faster CPUs.
Something something Henry Ford something faster horse something.
To be fair, we mostly have 20-core gaming boxes because CPU manufacturers sucked at making CPUs go faster, so they added more cores instead. Quite a lot of software has lagged well behind the hardware it has available to it. How often do you stare at one core maxed out while the other 7+ sit there doing nothing, or watch a video render away maxing out all your cores, but not utilising the GPU at all?
Hard to say someone "sucks" at something because they can't defy the laws of physics. But moving on...
My point being that there was great angst at the time that the "difficulty of programming for multi core and specialist GPUs was all too hard", and then we got over it.
So when I say "hey, let's stop using powerful-but-insecure CPUs for the main OS, split that workload out to simple-but-secure CPUs instead, and deal with clever software to get the userspace stuff that needs grunt onto the powerful-but-insecure CPU" and the response is "yeah but it's too hard/expensive"...
...my response is "we've been here before, and we did it, and we got over it".
Now, as you rightly pointed out, we got here not because "it was a better way of doing it", but because we had no choice. Will the ever increasing complexity of x86 and the burden of constant hardware flaws push us towards a similar path? I don't fucking know. That's one of those "how much bullshit will the world tolerate before change is forced" questions, to which the answer is usually far more bullshit than I expect (because I already reached my limit years ago).
One thing I do know is that no architecture lasts forever, and x86 as the thing that drives your OS (whether or not it lives on to do other things) will die one day. Whether that upper bound is performance or security, I dunno. But in the meantime, I know I get paid a lot to put software patches on machines with hardware flaws, so the whole comical farce is funding my early retirement anyway.
You OK dude?
I've probably just stepped into the middle of an argument that had more feeling to it than I realised, but my point wasn't really to disagree with anything said. Rather, to throw in my 2c on how the situation we're in right now - including the ongoing security patching funding your retirement - came largely from the failure of, mostly, Intel to reach the lofty 10GHz goal of the NetBurst architecture. It ultimately led to our current squillion-core CPUs, and gave us the joy that is speculative execution, all in the pursuit of performance that couldn't be attained from single-core clock speed increases.
The side-channel mitigations really only have a big penalty for heavily serialisable tasks anyway, so you can either elect to turn them off or pass the workload to an ASIC designed for that task. Plus the 'lost' performance was only ever on the table to begin with because Intel were slap-dash with security.
In a newsletter email today. New analogy
I'm going to need you to shoehorn that into a car analogy before I'll get it.
Yeah and like all PaaS there is no "SLA reporting mechanism that is accurate" ™ (pending) power BAK .
FIXED -old man memory lapse
i like pizza?
Bouncing back a couple of days: you can now (soon) buy a RISC-V based SBC that looks useful enough to be a PC replacement - tad pricey though.
also please show a microservices version
Apologies mistaken identity by old man memory lapse.
NAT slipstreaming interesting...
NAT Slipstreaming exploits the user's browser in conjunction with the Application Level Gateway (ALG) connection tracking mechanism built into NATs, routers, and firewalls by chaining internal IP extraction via timing attack or WebRTC, automated remote MTU and IP fragmentation discovery, TCP packet size massaging, TURN authentication misuse, precise packet boundary control, and protocol confusion through browser abuse. As it's the NAT or firewall that opens the destination port, this bypasses any browser-based port restrictions.
This attack takes advantage of arbitrary control of the data portion of some TCP and UDP packets without including HTTP or other headers; the attack performs this new packet injection technique across all major modern (and older) browsers, and is a modernized version to my original NAT Pinning technique from 2010 (presented at DEFCON 18 + Black Hat 2010). Additionally, new techniques for local IP address discovery are included.
This attack requires the NAT/firewall to support ALG (Application Level Gateways), which are mandatory for protocols that can use multiple ports (control channel + data channel) such as SIP and H323 (VoIP protocols), FTP, IRC DCC, etc.