Consolidated Business & Enterprise Computing Rant Thread

Discussion in 'Business & Enterprise Computing' started by elvis, Jul 1, 2008.

  1. wintermute000

    wintermute000 Member

    Joined:
    Jan 23, 2011
    Messages:
    1,931
    The problem with small setups is, ironically, that because they're small, no thought or attention is given to them, specialists aren't employed full time, etc., so by the time it catches fire nobody has a clue what it is, let alone how the maintenance is done or where the backups are.
    IaaS doesn't solve this; it's no different to people hosing their web hosting / DNS because their web 'developer' told them to hand over admin, lol.
    For those setups, SaaS all the way - and even if you need your own VM, it may be cheaper to go with a 'VPS' or a bare-bones provider like DigitalOcean.

    re: OpenStack, you're the second person I've met who says it's not that bad (in fact the first handles his own stack by himself, but he's a gun and he's also the solo master of his small domain - he did it more for kicks, and could have easily lived on RHEV or VMware forever, lol). Maybe someone who has contacts with the unis could shed some light, as they're pretty big users I hear. Don't get me wrong, it's not a 'bad' product - it just requires a lot of watering and feeding by specialists. It's not an off-the-shelf product, and it's overkill for most 'normal' enterprises, although I dunno what happens if you stick purely with a packaged solution like Mirantis etc. Then again it's not my domain, so maybe I'm talking out of my arse, but it's what I've been told by more than a few people, and it matches my own observations whenever I've had a stab at tinkering with it.
     
  2. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,640
    Location:
    Canberra
    What the actual fuck? You want to back the horse that:

    1. Despite being first to market with HCI (really - shut up, Daemon), has a smaller install base than VSAN (which has only really been usable for the last 2 versions - and is dear as poison for existing VMware customers, requiring both significant licensing costs and a hardware refresh)
    2. Lost 1/3rd of its share value last quarter (and has never turned a profit)
    3. Muddles its feature set between planned features, beta features and reality
    4. Has sales people that deliberately under-spec solutions when going head to head, and lie to customers that it will meet their requirements (hint: it won't), all with the knowledge that the customer will need to buy more nodes (or significantly increase the caching tiers) before the end of implementation to meet performance

    Nutanix had a place - but the other two hypervisor players in the market now do it all natively, better. Either from a throughput perspective (Hyper-V S2D) or from both a throughput and management perspective (VSAN).

    Nutanix is basically a stroke of a pen away from being deleted from the market. All VMware has to do is say "VSAN is now free with Enterprise Plus" or similar. And it's gone. Overnight. VMware hurt themselves by effectively charging Enterprise Plus again, per CPU, for VSAN.

    The only thing Nutanix is good at in 2019 is their pre-sales process. What people want is a Nutanix-like hardware experience, with a vCenter front-end. To compete on price, they have to exclude vSphere licensing entirely.

    So those guys have multiple problems.

    1. They budget/execute IT capital spend on absolute necessity - typically every 5 years, sometimes even 7.
    2. They don't actually know what they want.
    3. They struggle, but are familiar enough with $lobshitware that changing is basically suicide.
    4. Any ongoing expenses are literally Satan incarnate.

    The cloud fucks out because of all four of these reasons. The cloud is *never* cheaper. The interface is either radically different or doesn't have feature parity - and they aren't prepared to learn/change.

    All of my successes in shifting medium sized business (under 100 seats, $10m revenue) involved *significantly* increased spend, with a mid-term view on ROI. Ultimately these decisions will enable them to grow and get larger, scaling linearly from now on. But all of that is a bet - that the rest of their business will perform and keep up with it.

    The hybrid model is more and more becoming the solution for < 100 seats/$10m - because the cloud *doesn't* manage itself. Both Azure and AWS (As well as O365) actually require someone *more* skilled than the average MS Small business server operator, not less. And god help the average "computer guy"/msp - because they are all fucking terrible at SBS, and forcing O365/Azure on them when;

    a) they think it's putting them out of a job (rather than the reality, which is that it's going to increase their service revenue)
    b) they really shouldn't be in IT anyway and learning anything new might as well be rocket surgery

    will almost always end in tears.

    Software/Application is everything. Work out what you're going to use, then make a decision on platform/hosting based on that. Cloud, hybrid, on-premises - it's all fucking irrelevant and still bullshit infrastructure talk which isn't business-goal orientated. Talk about the application that increases efficiency and capability - then work out what you need to do to make that sing, whilst making sure it can talk to other stuff in the future.
     
    Last edited: Apr 14, 2019
    2SHY and Perko like this.
  3. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,640
    Location:
    Canberra
    Meanwhile, anecdotes of OpenStack being terrible, rooted in EDU deployments, are hilarious. Of course they are; EDU somehow challenges the public sector for the worst possible talent in infrastructure design and implementation (or even in the requirements gathering to get a vendor to do it).

    OpenStack is as good/bad as you make it. Do stupid shit, get bad results - I am Jack's complete lack of surprise.
     
    Daemon likes this.
  4. OP
    OP
    elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    37,016
    Location:
    Brisbane
    And security. I *still* work with people who think "chmod -R 777" is an industry standard fix for permissions issues. I was literally screaming at a new "expert" employee we hired just last week who I was assured was a top ranking dude from a major studio prior to landing on our doorstep, and so far he's done nothing but open us wide up to serious risk.
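    Since "chmod -R 777" keeps coming up: a minimal sketch of the group-based alternative (the path is hypothetical; in real life you'd also chgrp the tree to the team's group first):

    ```shell
    # Grant the owning user and group read/write instead of opening
    # the whole tree to the world with 777.
    mkdir -p /tmp/demo/projects
    chmod -R u=rwX,g=rwX,o= /tmp/demo/projects  # capital X: execute bit on directories only
    stat -c '%a' /tmp/demo/projects             # → 770
    ```

    New files can be made to inherit the directory's group by setting the setgid bit on directories (chmod g+s), which avoids the permission drift that makes people reach for 777 in the first place.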

    Our business leadership keep asking about the cloud, and I keep telling them I need more than 3 people in the whole org to give a shit about the security process we paid a lot of money to be measured against if they want to pursue that path.
     
  5. Daemon

    Daemon Member

    Joined:
    Jun 27, 2001
    Messages:
    5,451
    Location:
    qld.au
    Horse shit. Rackspace _is_ OpenStack... guess where it started. They're still running it on hundreds of thousands of servers.

    If you think HPE or Cisco are the pinnacle of progression or of the ability to monetise products... you'd hate to look too closely at how good or successful either really is.

    Most don't comprehend OpenStack... and this is why it's never taken off. It's not a product, but a platform. It's akin to saying you're going to deploy "IBM", but to one person that'll mean something completely different than to someone else. You need about 6 core services to deliver a working OpenStack system. You can then vary some of these, from one server up to geographically diverse clusters of thousands.

    I've deployed OpenStack 3 times, all via orchestration and automation.

    It's all patched... at least all known variants, anyway.
    Agree. They had crazy pricing, but Nutanix came to the market (not quite first... :p) with the best-marketed HCI system, which used SSD caching. However, they forgot to innovate again and became stagnant and expensive... two things which aren't great for longevity.
    Cloud is never cheaper if you deploy like-for-like and don't cost in risk or scale. Push pets to the cloud and you just have someone else housing your pets. PSA: don't put pets in the cloud.

    I dealt with a customer recently whose "IT guy" took 6 weeks to migrate 5 mailboxes to O365 (already with a different provider, ~15GB of email)... and fucked it up. Some of these people are IT guys because they're the most competent at switching a computer on, but they don't comprehend it at all.

    Yup. I'm amazed EDU could even spell OpenStack, let alone figure out they need servers to run it on.

    Meanwhile, the latest release in March included 150 different organisations and over 1,400 individual contributors. OpenStack isn't dead, it's just not for simpleton IT people.
     
    Last edited: Apr 14, 2019
  6. theSeekerr

    theSeekerr Member

    Joined:
    Jan 19, 2010
    Messages:
    2,910
    Location:
    Prospect SA
    Exactly. We've worked with customers to spec up IaaS solutions for our software - invariably the running costs for the first year alone are higher than the total cost of licensing and hardware for on-premises (over the usual 5-7 year period this stuff gets purchased for at this scale).

    Such are the joys of being a $LOBSHITWARE vendor. The architecture is such that you don't get to start treating any server like cattle until you hit >100 concurrent users, and those people have the IT resources to make these decisions on their own. It's the <20 user space that causes headaches - they want to buy a SaaS solution and they want it to cost less than $500 per head per year, when realistically the hardware alone (before we get into SPLA licensing and whatnot) is around $1k per month.
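    The mismatch is easy to put numbers on - a back-of-envelope sketch using the round figures above (all hypothetical):

    ```shell
    # <20 users paying at most $500/head/year vs ~$1k/month for hardware alone.
    users=20
    saas_budget=$(( users * 500 ))   # the most the customer will pay per year
    hardware=$(( 12 * 1000 ))        # yearly hardware cost, before SPLA licensing
    echo "budget=$saas_budget hardware=$hardware"   # → budget=10000 hardware=12000
    ```

    Even at the top of that customer segment, the budget doesn't cover the hardware, let alone licensing or labour.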
     
    NSanity likes this.
  7. Urbansprawl

    Urbansprawl Member

    Joined:
    May 5, 2003
    Messages:
    542
    I've worked with/for two large telcos/vendors who tried to build large-scale OpenStack IaaS offerings. Agree that the technology was not the challenge (although some components like Neutron were not suited to a multi-tenant environment). The challenges were more organisational and cultural:
    • Not enough developers. These orgs think they are doing well by having 30 or 100 developers in-house on a project. AWS etc. have thousands, and some of the best development practices in the world. OpenStack does not replace the need for those people. Even worse, you need these people in-house and not outsourced, which is hostile to the current telco/vendor mindset of shifting everything offshore.
    • No culture of rapid release. AWS and Azure release cool stuff every week, these guys still believe in once a quarter or twice a year.
    • No culture of rapid service change. You pointed this one out. Most of the big orgs who tried Openstack don't have rigorous change management or architectural vision which enables them to even handle the amount of testing and patching you need to keep up with Openstack releases. Again, all the organisational knowledge of how to do large scale development has been outsourced.
    • Wrong service management culture. One org wouldn't do a forced reboot of a hypervisor running customer workloads, even during the contracted change window, because they thought this was bad customer service. The platform is now hopelessly out of date. Meanwhile, AWS sends you a notification that your server is getting turned off at this time and date, no permission needed. This is how you keep your platform up to date, and force customers not to treat your cloud like a big VMware farm.
    Even the companies running OpenStack without that cultural burden just couldn't get enough scale to keep up with the big guys.

    The successes I've seen for corporate OpenStack were in development environments, where smart people could run the platform and didn't need to worry about the broader environment.
     
  8. itsmydamnation

    itsmydamnation Member

    Joined:
    Apr 30, 2003
    Messages:
    10,369
    Location:
    Canberra
    No it's not, and you're not understanding the problem...
     
  9. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,640
    Location:
    Canberra
    I mean, he does. And, well, you're both right.

    Spec-ex (speculative execution) is a good 4-7 years away from being fixed.
     
  10. itsmydamnation

    itsmydamnation Member

    Joined:
    Apr 30, 2003
    Messages:
    10,369
    Location:
    Canberra
    No, you can't "fix" it; unless you can reset the entire state of the machine after a failed prediction, it will be open to exploit...

    Have you even looked at some of the programming advice since "side channel" attacks hit the wild? You can no longer rely on flow control to secure data (if, else, while etc.); you can only use arithmetic.

    so
    int checked_get(size_t len, size_t index, int *array) {
        if (index < len) return array[index];
        return 0; /* out of range: return a dummy value instead of falling off the end (UB) */
    }
    is out and now tests like

    int checked_get(size_t len, size_t index, int *array) {
        /* build a bitmask; works when len is a power of two */
        size_t mask = len - 1;
        if (index < len) return array[index & mask];
        /* this way, if the branch is mispredicted, the speculative load still
           reads a (wrong) value from inside the array you could access anyway -
           no protection boundary is violated */
        return 0;
    }

    But that's just with the attacks we know about now; the CPU security model is 100% broken at this point. We just don't care, because we have no other choice.
     
    GumbyNoTalent likes this.
  11. 2SHY

    2SHY Member

    Joined:
    Aug 10, 2010
    Messages:
    7,566
    Location:
    Sydney NSW Australia
     
    Unframed and Daemon like this.
  12. Daemon

    Daemon Member

    Joined:
    Jun 27, 2001
    Messages:
    5,451
    Location:
    qld.au
    Again, all known exploits have mitigations implemented. If you believe they don't, please point me to a PoC which runs on AWS or any other cloud provider.

    Spectre, Meltdown and Foreshadow have all had both microcode and kernel level patches to mitigate. The downside is the performance hit, not the fact that an active exploit still exists.

    Yes, we know the underlying way Intel prioritised performance over security is biting them, and no doubt there will be one more exploit with a fancy name/logo again this year. However, there's currently no known way for a "neighbour" on shared infra to access your data.
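    For anyone who wants to check their own hosts: Linux kernels since 4.15 expose the mitigation status under sysfs (output varies with CPU, microcode and kernel version - a quick sketch, not a guarantee of safety):

    ```shell
    # Prints one line per known vulnerability, e.g.
    # ".../spectre_v2:Mitigation: Retpolines, ..." or "...:Not affected".
    grep . /sys/devices/system/cpu/vulnerabilities/*
    ```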
     
  13. Luke212

    Luke212 Member

    Joined:
    Feb 26, 2003
    Messages:
    9,635
    Location:
    Sydney
    i thought they found new exploits, eg L1TF. they are speculative though - as in, no known official exploits.

    but i can see how they could be used on standard software stacks. making sense of the L1 cache can't be easy, but some smart people out there probably will be able to do it.

    anyway, for ultimate protection do we just go back to non-shared CPUs for now?
     
    Last edited: Apr 14, 2019
  14. PabloEscobar

    PabloEscobar Member

    Joined:
    Jan 28, 2008
    Messages:
    13,380
    If like-for-like isn't cheaper, then the ROI on re-architecting entire systems to make it cheaper needs to be reasonably quick for it to even get a look-in.

    I was gonna make a Mitch01 reference, but you said 6 weeks, not 6 years.

    The problems that have existed ever since resource sharing was a thing? What's new here? If you 'need' to be immune to current and future Spectre-type bugs, dedicated hosts are still a thing - https://aws.amazon.com/ec2/dedicated-hosts/
     
  15. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,640
    Location:
    Canberra
    Savage.
     
    miicah likes this.
  16. BAK

    BAK Member

    Joined:
    Jan 7, 2005
    Messages:
    1,017
    Location:
    MornPen, VIC
    While we're talking industry standard fixes, don't forget "setenforce 0"...
     
    2SHY and NSanity like this.
  17. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,640
    Location:
    Canberra
    Add to group "domain admins"
     
    2SHY and olie like this.
  18. cvidler

    cvidler Member

    Joined:
    Jun 29, 2001
    Messages:
    12,194
    Location:
    Canberra
    what? you had selinux on to begin with?
     
    BAK and NSanity like this.
  19. EvilGenius

    EvilGenius Member

    Joined:
    Apr 26, 2005
    Messages:
    10,296
    Location:
    Rocky
    Patches to mitigate, yes, but research released in February found the current mitigations to be inadequate, stating that "all processors that perform speculative execution will always remain susceptible to various side-channel attacks, despite mitigations that may be discovered in future."

    https://www.extremetech.com/computi...permanently-haunted-by-spectre-security-flaws

    https://hardware.slashdot.org/story...ftware-alone-cant-mitigate-spectre-chip-flaws

    https://arxiv.org/abs/1902.05178
     
    GumbyNoTalent likes this.
  20. GumbyNoTalent

    GumbyNoTalent Member

    Joined:
    Jan 8, 2003
    Messages:
    7,528
    Location:
    Briz Vegas
    I wonder about the likelihood of exploits working on short-lived execution systems like AWS Lambda, where the entire environment exists briefly and is then "destroyed" - could an attack be launched quickly enough to exploit the host? My thinking the whole time since this hit the wild has been that short-lived execution systems would be the most robust against such attacks.
     
