Consolidated Business & Enterprise Computing Rant Thread

Discussion in 'Business & Enterprise Computing' started by elvis, Jul 1, 2008.

  1. OP
    OP
    elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    43,082
    Location:
    Brisbane
    I get what he's saying. In this case though, a nice speedy interconnect between CPU, GPU and RAM (on each) isn't "specialised ASICs" or wasted effort. It'll be in supercomputers today, and video game consoles in 5-10 years.
     
  2. Luke212

    Luke212 Member

    Joined:
    Feb 26, 2003
    Messages:
    9,995
    Location:
    Sydney
    Needing fast interconnects means you're doing it wrong.

    It's like putting lipstick on a pig of an algorithm.
     
    Last edited: Nov 27, 2020
  3. wintermute000

    wintermute000 Member

    Joined:
    Jan 23, 2011
    Messages:
    2,504
    You're so stupid that it's actually genius
     
    Hive likes this.
  4. cvidler

    cvidler Member

    Joined:
    Jun 29, 2001
    Messages:
    14,388
    Location:
    Canberra
    No, stupid doesn't overflow back around to genius, there's no limit to stupid.
     
  5. PabloEscobar

    PabloEscobar Member

    Joined:
    Jan 28, 2008
    Messages:
    14,356
    Serious Question

    "Re-architect your software design so it's cloud native" is an acceptable answer for software in the current day, and quiet often the response given to 'legacy' stacks is "you're doing it wrong"

    Why is

    "Re-design your algorithm so it doesn't need fast interconnects" not acceptable to supercomputers?
     
  6. looktall

    looktall Working Class Doughnut

    Joined:
    Sep 17, 2001
    Messages:
    26,532
    What would be faster?
    A better-written algorithm on hardware with slower interconnects, or a poorly written algorithm on hardware with faster interconnects?
     
    Luke212 likes this.
  7. cvidler

    cvidler Member

    Joined:
    Jun 29, 2001
    Messages:
    14,388
    Location:
    Canberra

    Because if you're getting time on a super, you're paying for it (dearly) and you have a limited time window to do your work, so you want it done as efficiently as possible so it fits in your time frame for your expenditure. You've already optimised both the algorithm and the code before you even get to the super.

    If you can fit your code and dataset into the memory available on each compute node, great, you're laughing. But not every data set is going to fit in the handful of MB of cache on a CPU, nor the handful of GB a GPU node has, so interconnect speed is always important.

    The stuff CERN does at the LHC spits out about 1PB (yes, petabyte) per second of data; interconnect is essential for that - they don't want to lose any collisions.
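    Rough numbers on that last point. A minimal back-of-envelope sketch, where every figure (dataset size, per-node GPU memory, compute time, link speeds) is an illustrative assumption rather than any real machine's spec:

```python
# Back-of-envelope: when does a job become interconnect-bound?
# All numbers here are illustrative assumptions, not real hardware specs.

def transfer_time(bytes_moved, bandwidth_bytes_per_s):
    """Seconds to push a payload over a link of the given bandwidth."""
    return bytes_moved / bandwidth_bytes_per_s

working_set = 500e9   # 500 GB dataset (assumed), too big for one node
gpu_memory  = 40e9    # 40 GB of local GPU memory per node (assumed)
compute_s   = 10.0    # 10 s of pure number crunching per pass (assumed)

spill = working_set - gpu_memory  # data that keeps crossing the interconnect

for name, bw in [("PCIe-class 16 GB/s", 16e9), ("NVLink-class 300 GB/s", 300e9)]:
    t = transfer_time(spill, bw)
    verdict = "interconnect-bound" if t > compute_s else "compute-bound"
    print(f"{name}: {spill/1e9:.0f} GB in {t:.1f}s vs {compute_s:.0f}s compute -> {verdict}")
```

    Same code, same dataset: the slower link turns a compute problem into a data-shuffling problem.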
     
  8. chip

    chip Member

    Joined:
    Dec 24, 2001
    Messages:
    3,920
    Location:
    Pooraka Maccas drivethrough
    The L1 triggers discard a shitload of data at the sensor before writing it to anything - that petabyte of raw signal from a second in ATLAS ends up being about 100GB by the time it gets to the L2 software triggers.
     
    Last edited: Nov 27, 2020
  9. phreeky82

    phreeky82 Member

    Joined:
    Dec 10, 2002
    Messages:
    9,588
    Location:
    Qld
    +1. Even the "free" time a researcher gets on a nearby uni's supercomputer would be well optimised - they've waited their turn.

    But fit for purpose, ffs. Some applications prioritise flexible designs with frequent releases; others are long-term stable and performance-centric, where specialised ICs make sense. Same with code: scalable for cloud is great, except when you know it will never be needed, in which case that dev time is just down the drain. There have been significant advances at each end of the spectrum.

    Too many experts with narrow views IMO.
     
  10. NSanity

    NSanity Member

    Joined:
    Mar 11, 2002
    Messages:
    18,183
    Location:
    Canberra
    Ahh yes. The more skilled and experienced you get - the more you realise the answer to most problems is... "well it depends".
     
    andrewbt and caspian like this.
  11. caspian

    caspian Member

    Joined:
    Mar 11, 2002
    Messages:
    11,483
    Location:
    Melbourne
  12. 7nothing

    7nothing Member

    Joined:
    Feb 15, 2002
    Messages:
    1,541
    Location:
    Brisbane
    Nah. I asked my vendor if they had any thoughts on why their SQL Server native client app went from taking 2 seconds to respond to a click to 8 seconds when we tested moving SQL from on-prem to Azure over ExpressRoute and network latency went from <1ms to 15ms (quick maths on that below).

    Response:

    Well you made the decision to go to Azure. Talk to your provider about lowering the latency.

    Only acceptable answer: if you want to go cloud with my app, alter the laws of physics first.
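    Just doing the arithmetic on those numbers - a rough sketch, assuming the extra time is nothing but serial round trips:

```python
# Rough sketch: how chatty must the app be for a click to go from 2s to 8s?
# Assumes the added latency is paid once per serial round trip (an assumption,
# not something the vendor confirmed).

latency_before = 0.001          # <1 ms on-prem (approx)
latency_after  = 0.015          # 15 ms over ExpressRoute
extra_per_trip = latency_after - latency_before   # ~14 ms added per trip

extra_click_time = 8.0 - 2.0    # the 6 extra seconds per click

round_trips = extra_click_time / extra_per_trip
print(f"~{round_trips:.0f} serial round trips per click")   # roughly 430
```

    Several hundred round trips to handle one click is the real problem; the 15ms just made it visible.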
     
    wintermute000 likes this.
  13. phreeky82

    phreeky82 Member

    Joined:
    Dec 10, 2002
    Messages:
    9,588
    Location:
    Qld
    I don't understand how people can't get their head around software being designed with an environment in mind. There is nothing new about changes to database/software design often being required when you add some latency between the DB and app tiers, and that doesn't make the original software poor in its design (nor does it mean the vendor shouldn't be designing for it by now).

    You also need to take some personal responsibility for this. If you'd looked at some appropriate metrics, such as batch requests/second, they would often give an indication of how serial the queries are and what sort of performance impact you'd see from some added network latency.

    The vendor could redesign the SQL query side of things to do things in larger batches, perform more filtering on the SQL side, and/or also move to a 3-tiered app architecture (things which also bring with them other benefits).
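    A minimal sketch of the 'larger batches' point, with the network stood in for by a sleep. The table, queries and helper names are hypothetical; the real change would be on the vendor's SQL side:

```python
import time

LATENCY = 0.015  # assumed 15 ms round trip to the cloud-hosted SQL Server

def run_query(sql):
    """Stand-in for a real DB call: pay one round trip, return nothing useful."""
    time.sleep(LATENCY)
    return []

def load_orders_chatty(order_ids):
    # One query per row: pays the 15 ms tax once per id.
    return [run_query(f"SELECT * FROM orders WHERE id = {i}") for i in order_ids]

def load_orders_batched(order_ids):
    # One set-based query: pays the 15 ms tax once, filtering done server-side.
    ids = ",".join(str(i) for i in order_ids)
    return run_query(f"SELECT * FROM orders WHERE id IN ({ids})")

ids = list(range(400))
for fn in (load_orders_chatty, load_orders_batched):
    start = time.perf_counter()
    fn(ids)
    print(f"{fn.__name__}: {time.perf_counter() - start:.2f}s")
# chatty: ~6 s of pure latency; batched: one round trip plus the real query cost
```

    Same data, same latency; only the shape of the conversation with the database changes.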
     
  14. Luke212

    Luke212 Member

    Joined:
    Feb 26, 2003
    Messages:
    9,995
    Location:
    Sydney
    As a general piece of software engineering, one of the quickest ways to find shit programming is to apply artificial latency (crude sketch below).

    Anyway, this is my beef with machine learning. Just stupid brute-force crap, and they're desperate to keep it going, so Nvidia throws in fast interconnects... because $$$. It's not sustainable. There are better ways to do it. As usual I am way ahead of everyone else.
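    For what it's worth, one crude way to apply that artificial latency from inside the code - a sketch only, with made-up function names, not any particular tool (at the network layer, Linux's tc/netem does the same job):

```python
import functools
import time

def with_artificial_latency(delay_s=0.05):
    """Wrap an I/O-ish call with an artificial delay so chatty call paths stand out."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            time.sleep(delay_s)       # pretend the network just got 50 ms worse
            return fn(*args, **kwargs)
        return wrapper
    return decorator

# Hypothetical usage: wrap the data-access layer, then re-run a normal UI flow.
@with_artificial_latency(0.05)
def fetch_row(row_id):
    return {"id": row_id}             # stand-in for the real DB/service call

for i in range(100):
    fetch_row(i)                      # 100 calls x 50 ms = ~5 s of injected delay
```

    A screen that was 'fine' on the LAN but picks up an extra five seconds here is making a hundred serial calls - exactly the kind of programming the trick is meant to expose.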
     
  15. phreeky82

    phreeky82 Member

    Joined:
    Dec 10, 2002
    Messages:
    9,588
    Location:
    Qld
    Your generalisations are tiring, but good for a laugh so keep going.

    Money invested in optimisation that isn't necessary is a poor investment and an inefficiency in itself. Investing more time to do so could also mean missing a time-to-market milestone that is necessary. Maybe a programmer was lazy. Maybe they just had to rush it to hit a deadline, made some sales and are now profitable - you could call that shit; they probably see it as successful and a wise choice.

    Successful people are rarely perfectionists.
     
    Luke212 likes this.
  16. OP
    OP
    elvis

    elvis Old school old fool

    Joined:
    Jun 27, 2001
    Messages:
    43,082
    Location:
    Brisbane
    Let us know the name of your startup. Keen to invest in something so continuously ahead of the curve.
     
    3Toed likes this.
  17. GumbyNoTalent

    GumbyNoTalent Member

    Joined:
    Jan 8, 2003
    Messages:
    9,342
    Location:
    Briz Vegas
    Thanos is that name, and we aim to conquer the blood testing market.
     
  18. wintermute000

    wintermute000 Member

    Joined:
    Jan 23, 2011
    Messages:
    2,504
    I've had to explain bandwidth-delay product so many times to developers/sysadmins/pointy-heads over the years that I might just stab someone in the eye the next time someone moans that they're only getting 200Mb or whatever mounting SMB to their Azure Files share (the arithmetic is at the end of this post).

    IT WASN'T OUR IDEA, MAYBE YOU SHOULD HAVE ENGAGED BRAIN BEFORE DITCHING THE ON-PREM STORAGE USED BY I DUNNO YOUR GRAPHICS AND VIDEO GUYS DUH

    Same goes for people trying to SMB big files to an NZ office via VPN. "But it's a gigabit connection"... observe as the magic of multi-threading lets me fill up the pipe to 1Gb, no fault found.

    No, let's have a 'war room' to see if I can make physics or Microsoft behave the way you want. Reminds me of that special client that would try to get VMware, HP etc. to modify their contractual T&Cs (and no, not ASX200 or anywhere close... I would literally struggle not to facepalm whilst the account guys did a slightly better job of containing their incomprehension).
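    The bandwidth-delay product arithmetic, for the next pointy-head. RTT and window size below are illustrative assumptions:

```python
# Bandwidth-delay product: why one SMB/TCP stream stalls on a fat, long pipe.
link_bps = 1e9        # 1 gigabit per second
rtt_s    = 0.030      # 30 ms round trip to the far end (assumed)

bdp_bytes = link_bps / 8 * rtt_s
print(f"In-flight data needed to fill the pipe: {bdp_bytes/1024:.0f} KB")  # ~3662 KB

# With only 512 KB in flight, a single stream tops out well short of 1 Gb/s:
window_bytes = 512 * 1024
max_mbps = window_bytes * 8 / rtt_s / 1e6
print(f"One stream, 512 KB window: ~{max_mbps:.0f} Mb/s")  # ~140 Mb/s
```

    Which is why several parallel streams 'magically' fill the link and a single SMB copy doesn't.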
     
    Last edited: Nov 29, 2020
  19. Fred Nurk

    Fred Nurk Member

    Joined:
    Apr 5, 2002
    Messages:
    2,240
    Location:
    Cairns QLD
    Something like Over-Unity Solutions limited? Seems to fit with 'continuously ahead of the curve'...
     
  20. PabloEscobar

    PabloEscobar Member

    Joined:
    Jan 28, 2008
    Messages:
    14,356
    Because the Salesman that buys the CIO Hookers and Lunch doesn't mention it.

    Cloud Desktops are the solution to high latency between Application and Database - and your Cloud Guy will be happy to give you a quote.
     
