Looks like there's Epyc and Threadripper woes aplenty under Windows.

Discussion in 'Other Operating Systems' started by flu!d, Jan 5, 2019.

  1. flu!d

    flu!d KDE Neon 5.16

    Joined:
    Jun 27, 2001
    Messages:
    13,376
  2. General_Cartman

    General_Cartman Member

    Joined:
    Apr 28, 2005
    Messages:
    2,735
    Location:
    The Confedaracah!
    You could remove 'Epyc and Threadripper' from the thread title and it would still be accurate.
     
    ae00711 likes this.
  3. OP
    flu!d

    flu!d KDE Neon 5.16

    Joined:
    Jun 27, 2001
    Messages:
    13,376
    Haha!

    So true.
     
  4. ae00711

    ae00711 Member

    Joined:
    Apr 9, 2013
    Messages:
    1,366
    hahahahahhaha HAHAHAHAHAHAHAHHAHAHAHAHAHAHAA
     
  5. Quadbox

    Quadbox Member

    Joined:
    Jun 27, 2001
    Messages:
    6,218
    Location:
    Brisbane
    Y'know, as much as I'm not likely to ever own one, and the clusters at uni are all Intel, I'd be curious to run some of my MPI+OpenMP code on a node with a couple of Epycs and see what tweaks work best. Given that each socket exposes multiple NUMA zones (four per socket on first-gen Epyc), coding well for such a beast is a wee bit tricky, to say the least...

    On the dual-socket-per-node, 14-cores-per-socket cluster (all Xeons, obvs) I've used recently, it works out far and away the best to have a separate MPI instance per NUMA region and only OpenMP-thread across cores with uniform memory access (sketched below)...

    Where I'm going with this, I guess, is that I wonder how much of the reported "performance woes" just come down to people's code not knowing how to deal with multiple NUMA regions per socket.
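
    For concreteness, here's a minimal sketch of that layout, assuming Open MPI and GCC; the file name hybrid.c and the per-region core count of 14 (matching that Xeon cluster) are just illustrative:

        /* hybrid.c -- one MPI rank per NUMA region, OpenMP threads within it */
        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int provided, rank, nranks;

            /* FUNNELED is enough when only the master thread makes MPI calls */
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nranks);

            #pragma omp parallel
            {
                #pragma omp single  /* print once per rank, not per thread */
                printf("rank %d of %d running %d OpenMP threads\n",
                       rank, nranks, omp_get_num_threads());
            }

            MPI_Finalize();
            return 0;
        }

    Build with "mpicc -fopenmp hybrid.c -o hybrid". With Open MPI you can then pin one rank per NUMA node and keep its threads local with something along the lines of:

        OMP_NUM_THREADS=14 OMP_PLACES=cores \
        mpirun --map-by ppr:1:numa:pe=14 --bind-to core ./hybrid

    (ppr:1:numa requests one process per NUMA region and pe=14 gives each rank 14 cores; exact spellings vary between Open MPI versions, and Slurm's srun has its own --cpu-bind knobs for the same job.)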
     
