Ok who broke rsync at the ATO?

Discussion in 'Business & Enterprise Computing' started by link1896, Dec 14, 2016.

  1. Daemon

    Daemon Member

    Joined:
    Jun 27, 2001
    Messages:
    5,469
    Location:
    qld.au
    Rubbish :) Sun was far, far more than just Solaris / SPARC. At best, all Oracle has done is keep the company's head above water.

    Sun had very impressive hardware (x86 included), MySQL and of course Fishworks, and it developed things such as DTrace and ZFS. Both are still incredibly impressive to this day. ZFS has been virtually stagnant since the sale, which shows you what Oracle did with it.

    Post-acquisition, Oracle couldn't keep Jonathan Schwartz around for long, nor Bryan Cantrill and many of Sun's other top engineers. They essentially gutted the place and tried to make a quick return rather than continuing with Sun's long-term vision. Any engineer who knew what they were doing jumped ship as soon as possible.

    A shame they sold out; I have no doubt they'd be a top-5 software / hardware vendor today with some decent leadership.
     
  2. IACSecurity

    IACSecurity Member

    Joined:
    Jul 11, 2008
    Messages:
    756
    Location:
    ork.sg
    I love the "tape is dead" comments; I've heard them for 10 years if not more. Though I expect in another 10 years they'll be right...

    What do you consider 'big'? Big AU, or Big Global? Or Internet giants like Google, Facebook, Amazon?
     
  3. JumpingJack

    JumpingJack Member

    Joined:
    Jun 16, 2002
    Messages:
    274
    Someone is blaming someone; I can't mention whom.

    I think it's the company in the video, or the fact the ATO uses old servers because HP lost the contract.
    Hmm, my understanding is that HPE lost the contract, or at least the desktop / onsite infrastructure part of it. They were kicked out:

    http://www.computerworld.com.au/art...tin_wins_60m_ato_end-user_computing_contract/

    The onsite support was then subcontracted to another company, CSG, which is now owned by NEC Australia.

    NEC Australia partners with HPE, so HP might still be entrusted to look after services behind the scenes,
    and NEC IT Services Australia even took home HP's Software Cloud Partner of the Year 2015 gong.

    Let's be honest here:

    You show up to work as CIO and see this shit go down. You'd escape out the window.

    If something is borked on your watch or gets hacked, just leave. You've lost my confidence and time.
    Scrub that employer off your resume and don't associate with them.

    Moral of the post: don't trust large companies that subcontract the work out to someone else in a clusterfuck too complete to explain,
    THEN wonder why you lost 14PB of data.
     
    Last edited: Dec 18, 2016
  4. Daemon

    Daemon Member

    Joined:
    Jun 27, 2001
    Messages:
    5,469
    Location:
    qld.au
    I'm seeing the change in the 5M+ companies already.

    Tape will always exist in the same way that Cisco will always exist. Many companies will always play it safe and stick to "traditional" ways, despite the fact the world is changing around them.

    We're seeing anyone with 20TB+ flicking from D2D2T (disk-to-disk-to-tape) to D2D2C / D2D2D (disk-to-disk-to-cloud / to-disk) for backups. Running D2D for backups has been prevalent for a while, and tape saw a small resurgence because of it. Tape was "dead", then resurrected for a while (simply due to cost), but with SMR drives now presenting a viable option that's much faster and cheaper, many are moving on from tape.
     
  5. GreenBeret

    GreenBeret Member

    Joined:
    Dec 31, 2001
    Messages:
    19,370
    Location:
    Melbourne
    Eh? There are plenty of OpenStack enterprise workloads these days. It's huge in telecoms in China and Europe. We even have HPC workloads on OpenStack / Ceph. In one of our zones, powered by Ceph, the average core utilisation has been in the high 80s to low 90s (percent) this year. Most so-called enterprise VM workloads are in the teens.

    We have 4 different clustered storage techs here, managed by a small team and Ceph is the easiest to deal with. Even when disks or nodes fall over, nothing bad happens save for some performance degradation for end users. We have a couple of PBs on Ceph now, and will probably triple that next year.
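
    For a sense of how simple day-to-day Ceph poking can be, here's a minimal capacity check. It's a sketch assuming the python-rados bindings with a readable /etc/ceph/ceph.conf and client keyring; nothing here is specific to our setup.

        # Report overall Ceph cluster usage via the librados bindings.
        import rados

        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
        cluster.connect()
        try:
            # get_cluster_stats() returns kb, kb_used, kb_avail, num_objects
            stats = cluster.get_cluster_stats()
            used_pct = 100.0 * stats['kb_used'] / stats['kb']
            print('cluster usage: %.1f%% (%d objects)'
                  % (used_pct, stats['num_objects']))
        finally:
            cluster.shutdown()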
     
  6. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    39,681
    Location:
    Brisbane
    Yeah, look, the lines are being blurred more and more these days. I once considered "enterprise" loads to be higher IO for longer periods (think large database queries), compared to web servers, which did a lot less of that. But these days it's all flipped around, depending on the use case.

    Where I am now, we can't virtualise much of our main workload, because we're so CPU bound. Auxiliary services are 100% virtualised, but anything that's our main number crunching stays on bare metal.
     
  7. Daemon

    Daemon Member

    Joined:
    Jun 27, 2001
    Messages:
    5,469
    Location:
    qld.au
    From the "cloud" provider perspective, more IOPS and lower latency = better density = better profitability. Being able to sustain over 100,000 IOPS in a small cluster is quite easy to do with SSD acceleration / all-SSD arrays.

    You need to try containers :) Virtually nil overhead with all of the benefits of virtualisation. While Docker is the new kid on the block, OS level containerisation is over 15 years old now.
     
  8. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    39,681
    Location:
    Brisbane
    Nope, not for us. We've got well-established clustered bare-metal job control tools that do a bloody good job of dealing with resource management. And quite literally even 1% overhead is a loss to us when we're looking at months' worth of rendering per job, where a single node being down for even a few hours is a measurable loss.
     
  9. Myne_h

    Myne_h Member

    Joined:
    Feb 27, 2002
    Messages:
    9,941
    Did you and your people ever try out those Xeon Phis?
     
  10. Daemon

    Daemon Member

    Joined:
    Jun 27, 2001
    Messages:
    5,469
    Location:
    qld.au
    In terms of CPU overhead, there is none for containers. They're just cgroup-isolated processes, which means you get bare-metal performance. There is some network overhead (i.e. the same as a few extra iptables rules) and potential I/O loss (very situation dependent), but you're talking a few microseconds. If there are multiple instances of the same system, it's easy to do memory / I/O dedupe too.

    On the other hand you get dynamic resource allocation, backups, live migration and instant portability across different hardware for upgrades / DR. Just an option anyway :)
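
    To make the "just cgroup-isolated processes" point concrete, here's a rough sketch of the primitive Docker builds on: put a memory cap on a cgroup and drop a process into it. It assumes the cgroup v1 layout mounted at /sys/fs/cgroup, needs root, and the group / workload names are made up.

        # Sketch: a memory-capped cgroup (v1) with a process dropped into it.
        # This is the kernel primitive container runtimes build on.
        import os
        import subprocess

        CGROUP = '/sys/fs/cgroup/memory/demo'  # hypothetical group name
        os.makedirs(CGROUP, exist_ok=True)

        # Cap the group at 512 MB; enforcement is in-kernel, so there is
        # no per-instruction CPU overhead for the workload itself.
        with open(os.path.join(CGROUP, 'memory.limit_in_bytes'), 'w') as f:
            f.write(str(512 * 1024 * 1024))

        proc = subprocess.Popen(['./render_job'])  # hypothetical workload
        # Writing the PID into cgroup.procs moves the process (and any
        # children it forks) under the limit.
        with open(os.path.join(CGROUP, 'cgroup.procs'), 'w') as f:
            f.write(str(proc.pid))
        proc.wait()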
     
  11. elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    39,681
    Location:
    Brisbane
    All of our licensing models (both for the cluster manager and trending tools) are charged per host. One host rendering 4 separate processes only counts as a single license. Breaking hosts up with containers would end up blowing our licensing costs out. If it weren't for that, I would definitely try it, just for some memory management benefits (boxes OOMing is our number 1 production issue).
     
  12. Luke212

    Luke212 Member

    Joined:
    Feb 26, 2003
    Messages:
    9,751
    Location:
    Sydney
    ATO website is giving me errors lol

    Docker is a convenience tool for those who can't write their own Docker. There is no such thing as instant portability in HPC; memory and FPU architecture matter to your algorithms. If you use off-the-shelf HPC software, perhaps it's OK. They are getting better and better. But to be honest I don't trust them (yet).
     
    Last edited: Dec 21, 2016
  13. wintermute000

    wintermute000 Member

    Joined:
    Jan 23, 2011
    Messages:
    2,140
    Excellent trolling
     
  14. Daemon

    Daemon Member

    Joined:
    Jun 27, 2001
    Messages:
    5,469
    Location:
    qld.au
    I felt a nibble on the line, but then got distracted writing my own programming language. I'm fairly sure OOP is just for those who can't write their own language....
     
  15. Luke212

    Luke212 Member

    Joined:
    Feb 26, 2003
    Messages:
    9,751
    Location:
    Sydney
    You joined in 2011; you can't be old enough to know what is trolling and what is over your head.

    No, comfort is fine when performance doesn't matter.

    http://www.infoworld.com/article/30...celeration-possible-in-docker-containers.html

    You can see Docker only recently gained GPU support, so if you'd made the case any time in the last 3 years you would have got the same answer from me. But as I said in my last post, if you wait long enough, they will plug the holes. Nearly always, though, you can't wait 3 years to do what you need to do.

    I'm sure there are plenty of gotchas with Docker. If you control the whole pipeline yourself, there are no surprises.
     
    Last edited: Dec 21, 2016
  16. bcann

    bcann Member

    Joined:
    Feb 26, 2006
    Messages:
    5,802
    Location:
    NSW
    SMR for backups? For a long rotation or storage period (i.e. 7 years or more)? Personally I think you're crazy. Whilst this might work for SMB, there is no way it is:

    A) Human proof (humans just chuck crap into their bags)
    B) Storage proof for a prolonged time (i.e. years)
    C) Interface proof -- I imagine in 7 years' time we will be on, what, USB 5 or so, probably a different cable/interface.
    D) Reliable -- SMR has been around for what, 2 years? Nobody has any real-life expectancy data for that stuff over longer periods.

    Sorry, but tape still works for me for many reasons, especially over the prolonged retention periods we as a business are required to meet (min. 7 years).
     
  17. Daemon

    Daemon Member

    Joined:
    Jun 27, 2001
    Messages:
    5,469
    Location:
    qld.au
    I treat hard drives the same way as I treat servers: they're rotated. And by having the data in cold storage or semi-offline, you're able to validate it constantly. If you ever need to retrieve a tape that's 7 years old and you haven't validated the data, then it's still a gamble anyway.
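
    As a sketch of what "validate it constantly" can look like in practice: scrub each cold-storage drive against a checksum manifest written at backup time. The mount point and manifest format (hash, tab, relative path) are assumptions, not a product.

        # Scrub a cold-storage drive: re-hash every file and compare it to
        # a SHA-256 manifest recorded when the backup was written.
        import hashlib
        import os
        import sys

        MOUNT = '/mnt/coldstore'  # hypothetical mount point
        MANIFEST = os.path.join(MOUNT, 'MANIFEST.sha256')

        def sha256(path, bufsize=1 << 20):
            h = hashlib.sha256()
            with open(path, 'rb') as f:
                for chunk in iter(lambda: f.read(bufsize), b''):
                    h.update(chunk)
            return h.hexdigest()

        bad = 0
        with open(MANIFEST) as manifest:
            for line in manifest:
                want, rel = line.rstrip('\n').split('\t')
                if sha256(os.path.join(MOUNT, rel)) != want:
                    print('MISMATCH: %s' % rel, file=sys.stderr)
                    bad += 1
        sys.exit(1 if bad else 0)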
     
  18. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    17,784
    Location:
    Canberra
    I fall in the same vein as bcann WRT SMR.

    We know tape works. 40+ years on.

    We definitely can't say that about SMR drives. And tape shits on them from a performance point of view when you're talking block transfers anyway.
     
  19. Daemon

    Daemon Member

    Joined:
    Jun 27, 2001
    Messages:
    5,469
    Location:
    qld.au
    It's possibly a different mindset compared to tape, but I don't expect drives containing backups to be in use for more than 5 years. Drives are added to storage pools and rotated as they fail, become old or serve out their useful life. New ones come in, the system replicates the data, and the old ones are removed.

    I still go by the old "important data in 3 locations on 3 differing systems at any one time" rule anyway, so I'm certainly not advocating a single drive stored in a cupboard as a valid alternative to tape.
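
    Fittingly for this thread, the dumbest possible version of that rule is three rsync pushes and a loud failure if any copy doesn't land. The hosts and paths below are made up for illustration.

        # Push one snapshot to three independent targets; exit non-zero if
        # any copy fails so the monitoring system notices.
        import subprocess
        import sys

        SRC = '/srv/backups/2016-12-21/'  # hypothetical snapshot directory
        TARGETS = [
            'backup@nas.example.com:/tank/backups/',     # on-site ZFS box
            'backup@offsite.example.net:/vol/backups/',  # second site
            '/mnt/coldstore/backups/',                   # rotated SMR drive
        ]

        failed = False
        for dst in TARGETS:
            rc = subprocess.call(['rsync', '-a', '--delete', SRC, dst])
            if rc != 0:
                print('copy to %s failed (rc=%d)' % (dst, rc), file=sys.stderr)
                failed = True
        sys.exit(1 if failed else 0)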
     
  20. s.Neo

    s.Neo Member

    Joined:
    Oct 23, 2002
    Messages:
    398
    Location:
    Darwin, NT, Australia
    Tape is dead. Point your backup software at a public cloud provider and be done with it. Let the experts who deal with infrastructure at scales most of us have never worked at handle it. No more nail-biting moments wondering whether your tapes still work when you actually need them.
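
    For the simplest case, a sketch of shipping one archive to S3 with boto3; the bucket, key, archive path and GLACIER storage class are all assumptions for illustration, not a recommendation.

        # Upload one backup archive to S3, straight into a cold storage class.
        import boto3

        s3 = boto3.client('s3')
        s3.upload_file(
            '/srv/backups/2016-12-21.tar.gz',  # hypothetical archive
            'example-backup-bucket',           # hypothetical bucket
            'backups/2016-12-21.tar.gz',
            ExtraArgs={'StorageClass': 'GLACIER',
                       'ServerSideEncryption': 'AES256'},
        )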
     
