Consolidated Business & Enterprise Computing Rant Thread

Discussion in 'Business & Enterprise Computing' started by elvis, Jul 1, 2008.

  1. Luke212

    Luke212 Member

    Joined:
    Feb 26, 2003
    Messages:
    9,981
    Location:
    Sydney
    I guess your job isn't just designing office networks.
     
  2. theSeekerr

    theSeekerr Member

    Joined:
    Jan 19, 2010
    Messages:
    3,532
    Location:
    Broadview SA
    That's actually a great response that helps me see where you're coming from, and I'm going to let that line of thought ferment for a while to see if it keeps making sense.
     
    millsy and Rass like this.
  3. dakiller

    dakiller (Oscillating & Impeding)

    Joined:
    Jun 27, 2001
    Messages:
    8,220
    Location:
    3844
    I haven't heard of any real-world exploitation of a speculative execution vulnerability. Not to say it hasn't happened, but it's damn hard to pull these things off.

    Secondly, the only scenario in which these attacks make any sense (to me) is sneaking information across a sandbox boundary on a shared host. If you own all the servers and all the systems set up on them, then speculative execution isn't an issue for you. And if there is malicious code already running on your stuff, speculative execution isn't what it's going to use.

    If you are a cloud service provider allowing third-party users to run their own code on your servers, that is when you are vulnerable. I don't need my home lab robbed of its CPU performance to fix vulnerabilities that don't apply to it. Big finance probably doesn't need its CPU performance taken away either, provided they run their own dedicated server infrastructure, or, if it is third-party hosted, that they are allocated dedicated machines.
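
    For anyone wanting to check where a given box actually stands, the Linux kernel reports its own view through sysfs. A rough Python sketch, assuming a reasonably recent Linux kernel:

    Code:
    # List which speculative-execution issues the running kernel thinks this
    # CPU has, and what mitigation (if any) is active. Reads the standard
    # sysfs interface; no extra packages needed.
    from pathlib import Path

    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vuln_dir.iterdir()):
        # Each file (meltdown, spectre_v2, ...) contains "Not affected",
        # "Vulnerable", or "Mitigation: ...".
        print(f"{entry.name:25} {entry.read_text().strip()}")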
     
  4. OP
    elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    42,806
    Location:
    Brisbane
    OK, so is your business not applying these patches in order to keep performance up?

    I know zero businesses who haven't put these patches in. Regardless of "speculation", the patches come down the pipeline supplied by our vendors, and we install them.

    I repeat what I've said a few times now - the technical masturbation of this is where most "nerd discussions" end up on the topic. And that's fine, this is a nerd forum, and we're all nerds. But the fact remains that for the current model of an average computer and/or compute node - a general-purpose CPU that runs 100% of things, from OS to userspace and back - our constant drive for performance has required ever more complexity, and that complexity has obviously brought security risks with it (growth in complexity brings a drop in security; that's a fact of life even outside of computers, and something philosophers have chewed on since the dawn of written history).

    And again, none of that is incompatible with the idea that we move the general-purpose "traffic cop" tasks an OS needs to do somewhere else, and leave the bitchin'-fast things the userspace applications need to do locked up in a cage where they can only destroy themselves, not take out the whole OS and everything else running alongside them.

    There are far more scenarios than "cloud" where userspace applications from more than one person run concurrently.

    You don't have to. You could have a smaller CPU doing security-critical management tasks, and a bigger CPU doing the grunty stuff - driving your storage, games, whatever.

    This architecture isn't unique, new or mind-blowing. We've seen this across dozens of different types of devices, old and new (hell, my "console baby" PlayStation does this - more for power reasons than security, admittedly). Just not in the modern home PC yet.
     
    Last edited: Oct 30, 2020
  5. Rass

    Rass Member

    Joined:
    Jun 27, 2001
    Messages:
    3,089
    Location:
    Brizbekistan
    I see the point of having separate processors for userland and kernel/security land. I think there's enough demarcation to make it quite possible, and fairly transparent to most people.

    To me it makes sense. The Intel Management Engine / AMD Secure Technology is pretty much a small CPU which does nefarious things for the NSA or whatevs. It's allegedly not accessible by the OS, but the idea of having a separate processor for security or privileged tasks has been implemented to at least some level in consumer equipment for a decade.

    It's not perfect, and it never will be, but having that dedicated processor for secure or kernel tasks means you can have a sloppy, fast and loose processor for your games ... umm sorry, your spreadsheets, which can have all the flaws in the world, but that'll stop a lot of attacks.
     
  6. OP
    elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    42,806
    Location:
    Brisbane
    We kind of already have this - nobody plays games without a GPU in 2020, so it's not like you're running 100% of game code on the one and only processor in your computer.

    The argument a page back was "but RDMA!". Let us not forget Nvidia acquired Mellanox this year. I foresee integrated GPU+RDMA devices in our future (and if not all in one package, certainly better integration between the two, given Nvidia's massive push into HPC) - something that dramatically reduces the need for the CPU doing the more secure ring 0/ring -1 things to also do all the userspace things at the same time.
     
    Last edited: Oct 30, 2020
    Rass likes this.
  7. bcann

    bcann Member

    Joined:
    Feb 26, 2006
    Messages:
    5,997
    Location:
    NSW
    Whilst I agree in theory, who is willing to pay the performance penalty for the latency of a security chip scanning the OS/CPU chip, given they are likely to have to reside on different physical dies - not to mention the extra $$$ for this, and then the programming time? Shit, half of the programs out there can still barely multithread properly, 20+ years after this became a pretty bog-standard thing. Who is gonna make programmers split their programs up like that any time soon?
     
  8. OP
    elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    42,806
    Location:
    Brisbane
    And I agree with what you posted in theory, but here we are in 2020 with 20+ core gaming boxes and crazy complex GPUs (and exactly the same shit on the supercomputers I babysit, which is really fucking weird when you think about it), when I remember a time when people distinctly said multi-core was a waste of time and all we needed were faster CPUs.

    Something something Henry Ford something faster horse something.
     
  9. EvilGenius

    EvilGenius Member

    Joined:
    Apr 26, 2005
    Messages:
    10,659
    Location:
    elsewhere
    To be fair, we mostly have 20-core gaming boxes because CPU manufacturers sucked at making CPUs go faster, so they added more cores instead. Quite a lot of software has lagged well behind the hardware available to it. How often do you stare at one core maxed out while the other 7+ sit there doing nothing, or watch a video render away maxing out all your cores but not utilising the GPU at all?
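
    For what it's worth, a quick way to watch exactly that is a little per-core monitor. A rough Python sketch, assuming the third-party psutil package is installed (pip install psutil):

    Code:
    # Print per-core CPU load a few times and count how many cores are pegged,
    # to spot the "one core maxed, the rest idle" pattern.
    import psutil  # third-party: pip install psutil

    for _ in range(5):
        per_core = psutil.cpu_percent(interval=1, percpu=True)  # % load per core over 1 second
        pegged = sum(1 for load in per_core if load > 90)
        print(" ".join(f"{load:5.1f}" for load in per_core), f"| cores over 90%: {pegged}")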
     
  10. OP
    elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    42,806
    Location:
    Brisbane
    Hard to say someone "sucks" at something because they can't defy the laws of physics. But moving on...

    My point being that there was great angst at the time that programming for multi-core and specialist GPUs was "all too hard", and then we got over it.

    So when I say "hey, let's stop using powerful-but-insecure CPUs for the main OS, split that workload out to simple-but-secure CPUs instead, and deal with clever software to get the userspace stuff that needs grunt onto the powerful-but-insecure CPU" and the response is "yeah but it's too hard/expensive"...

    *BREATHE*

    ...my response is "we've been here before, and we did it, and we got over it".

    Now, as you rightly pointed out, we got here not because "it was a better way of doing it", but because we had no choice. Will the ever-increasing complexity of x86 and the burden of constant hardware flaws push us down a similar path? I don't fucking know. That's one of those "how much bullshit will the world tolerate before change is forced" questions, to which the answer is usually far more bullshit than I expect (because I already reached my limit years ago).

    One thing I do know is that no architecture lasts forever, and x86 as the thing that drives your OS (whether or not it lives on to do other things) will die one day. Whether that upper bound is performance or security, I dunno. But in the meantime, I know I get paid a lot to put software patches on machines with hardware flaws, so the whole comical farce is funding my early retirement anyway.
     
  11. EvilGenius

    EvilGenius Member

    Joined:
    Apr 26, 2005
    Messages:
    10,659
    Location:
    elsewhere
    You OK dude? :)

    I've probably just stepped into the middle of an argument that had more feeling to it than I realised, but my point wasn't really to disagree with anything said. Rather, to throw in my 2c on how the situation we're in right now, including the ongoing security patching funding your retirement, came largely from the failure of (mostly) Intel to reach the lofty 10GHz goal of the NetBurst architecture. It ultimately led to our current squillion-core CPUs, and gave us the joy that is speculative execution, all in the pursuit of performance that couldn't be attained from single-core clock speed increases.
     
  12. chip

    chip Member

    Joined:
    Dec 24, 2001
    Messages:
    3,906
    Location:
    Pooraka Maccas drivethrough
    The side-channel mitigations really only carry a big penalty for heavily serialisable tasks anyway, so you can either elect to turn them off or hand the workload to an ASIC designed for that task. Plus the 'lost' performance was only ever on the table to begin with because Intel were slapdash with security.
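
    Most of the mitigation cost lands on kernel entry/exit and indirect branches (KPTI, retpoline and friends), so a quick way to gauge it on a given box is to time a syscall-heavy loop, then boot with mitigations=off on the kernel command line and run it again. A rough sketch, assuming Linux and Python 3; the numbers only mean anything relative to each other:

    Code:
    # Count how many cheap read() syscalls per second this kernel configuration
    # manages. Run with mitigations on, then with mitigations=off, and compare.
    import os
    import time

    def syscalls_per_second(duration=2.0):
        fd = os.open("/dev/zero", os.O_RDONLY)
        count = 0
        end = time.perf_counter() + duration
        while time.perf_counter() < end:
            os.pread(fd, 1, 0)  # one cheap read() syscall per iteration
            count += 1
        os.close(fd)
        return count / duration

    print(f"~{syscalls_per_second():,.0f} syscalls per second on this kernel config")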
     
  13. CptVipeR

    CptVipeR Member

    Joined:
    Jun 28, 2001
    Messages:
    832
    Location:
    Hobart
    In a newsletter email today. New analogy :)

    [attached image: upload_2020-11-2_10-37-25.png]
     
    3Toed and 2SHY like this.
  14. chip

    chip Member

    Joined:
    Dec 24, 2001
    Messages:
    3,906
    Location:
    Pooraka Maccas drivethrough
    I'm going to need you to shoehorn that into a car analogy before I'll get it.
     
    fredhoon and EvilGenius like this.
  15. GumbyNoTalent

    GumbyNoTalent Member

    Joined:
    Jan 8, 2003
    Messages:
    9,277
    Location:
    Briz Vegas
    Yeah and like all PaaS there is no "SLA reporting mechanism that is accurate" ™ (pending) power BAK . ;)

    FIXED -old man memory lapse
     
    Last edited: Nov 2, 2020
  16. power

    power Member

    Joined:
    Apr 20, 2002
    Messages:
    64,928
    Location:
    brisbane
    I like pizza?
     
  17. cvidler

    cvidler Member

    Joined:
    Jun 29, 2001
    Messages:
    14,184
    Location:
    Canberra
  18. Luke212

    Luke212 Member

    Joined:
    Feb 26, 2003
    Messages:
    9,981
    Location:
    Sydney
    Also, please show a microservices version.
     
  19. GumbyNoTalent

    GumbyNoTalent Member

    Joined:
    Jan 8, 2003
    Messages:
    9,277
    Location:
    Briz Vegas
    Apologies mistaken identity by old man memory lapse.
     
  20. GumbyNoTalent

    GumbyNoTalent Member

    Joined:
    Jan 8, 2003
    Messages:
    9,277
    Location:
    Briz Vegas
    NAT Slipstreaming - interesting...
    https://github.com/samyk/slipstream
    NAT Slipstreaming exploits the user's browser in conjunction with the Application Level Gateway (ALG) connection tracking mechanism built into NATs, routers, and firewalls by chaining internal IP extraction via timing attack or WebRTC, automated remote MTU and IP fragmentation discovery, TCP packet size massaging, TURN authentication misuse, precise packet boundary control, and protocol confusion through browser abuse. As it's the NAT or firewall that opens the destination port, this bypasses any browser-based port restrictions.


    This attack takes advantage of arbitrary control of the data portion of some TCP and UDP packets without including HTTP or other headers; the attack performs this new packet injection technique across all major modern (and older) browsers, and is a modernized version of my original NAT Pinning technique from 2010 (presented at DEFCON 18 + Black Hat 2010). Additionally, new techniques for local IP address discovery are included.


    This attack requires the NAT/firewall to support ALG (Application Level Gateways), which are mandatory for protocols that can use multiple ports (control channel + data channel) such as SIP and H323 (VoIP protocols), FTP, IRC DCC, etc.
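
    The local-address-discovery piece is the easiest part to picture. The attack does it from inside the browser via WebRTC or timing tricks; the classic out-of-browser equivalent is a UDP socket trick - a minimal Python sketch for illustration, not taken from that repo:

    Code:
    # Classic "what's my internal IP" trick: connect() on a UDP socket sends
    # no packets, but makes the kernel pick the outbound interface, whose
    # address we can then read back.
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("192.0.2.1", 53))  # TEST-NET-1 address, never actually contacted
        print(s.getsockname()[0])     # e.g. 192.168.1.23
    finally:
        s.close()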
     
    Last edited: Nov 3, 2020
