AMD Radeon RX5000 Series - First "Navi" Cards

Discussion in 'Video Cards & Monitors' started by the_antipop, May 27, 2019.

  1. Hotrod2go

    Hotrod2go Member

    Joined:
    Jun 17, 2018
    Messages:
    271
    Location:
    Tas
    Very true indeed. Despite the criticisms of HBM, it has benefited this consumer.
    Last 3 yrs of gaming for me have been very satisfying with my ageing R9 Nano: 175W TDP and compact to boot. It was ahead of its time imo.
    Not one for mega monitor gaming & multiple cards, I'm a 'humble' gamer & don't give a toss what anyone thinks of that. Besides my fav games top out at 60FPS anyway. :)
    Best value $ card I ever bought in some 20 yrs of fiddling with computers.
     
  2. RnR

    RnR Member

    Joined:
    Oct 9, 2002
    Messages:
    13,096
    Location:
    Brisbane
    You seem to be woefully uneducated in the timescales of modern-day games development.
     
  3. bart5986

    bart5986 Member

    Joined:
    Jan 31, 2006
    Messages:
    3,757
    Location:
    Brisbane
    DX12 was announced March 20, 2014, launched July 29, 2015.

    Which games fit into your category of being in multiyear development and being unable to have proper DX12 support because of that?

    The real answer is that it's a mess, and the developers, Nvidia and AMD are all responsible for the various issues that come with it.
     
  4. mAJORD

    mAJORD Member

    Joined:
    Jun 4, 2002
    Messages:
    9,797
    Location:
    Griffin , Brisbane
    What are you talking about? DX12 is an API. How does a GPU manufacturer 'switch to' an API?

    Both DX12 and HBM are industry wide technology progressions. Nvidia use HBM also because it has benefits in certain applications.


    [quote]AMD just don't seem to understand the industry at all[/quote]

    hmm.. yeah, sure they don't haha. You should send them your insights.


    AMD are right to say this is no longer GCN architecturally

    Rough calculations suggest that, if the performance numbers (+30% over Vega 56 @ ~1750MHz) are correct, raw per-shader, per-clock efficiency has indeed gone up at least 1.25x, and this really resets the bar for scaling again. You're looking at 90% of the performance of a 3840sp/60CU Radeon VII with 2560sp/40CUs @ the same clocks, and 75% of the board power.

    This is a far larger jump than any iteration of GCN.
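    The arithmetic above can be sanity-checked with a quick sketch. Note the clocks are assumptions from public spec sheets (Navi at the rumoured ~1750MHz, Vega 56's ~1471MHz reference boost), not figures from this thread:

    ```python
    # Per-shader, per-clock efficiency gain implied by the rumoured numbers.
    # Public reference specs: Vega 56 = 3584 SP, Radeon VII = 3840 SP / 60 CU,
    # Navi as discussed here = 2560 SP / 40 CU.

    # Comparison 1: 90% of Radeon VII performance at the same clocks,
    # so efficiency gain = perf ratio scaled by the shader-count ratio.
    eff_vs_vii = 0.90 * 3840 / 2560
    print(f"vs Radeon VII: {eff_vs_vii:.2f}x per-shader efficiency")  # 1.35x

    # Comparison 2: +30% over Vega 56, with Navi at ~1750 MHz
    # vs Vega 56's ~1471 MHz boost clock (assumed).
    eff_vs_v56 = 1.30 * (3584 * 1471) / (2560 * 1750)
    print(f"vs Vega 56:    {eff_vs_v56:.2f}x per-shader, per-clock efficiency")
    ```

    Both comparisons land at or above the 1.25x figure quoted.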
     
    Last edited: Jun 14, 2019
    SnooP-WiggleS likes this.
  5. Sipheren

    Sipheren Member

    Joined:
    Aug 14, 2002
    Messages:
    3,375
    Location:
    Gold Coast
    Yeah, I was skeptical when they first said it was a 'new' arch, as I was pretty sure it was still basically GCN, but it seems they have done some major work. RDNA2 will be very nice from what this is showing.

    With some funds from the CPU side they are finally able to update the aging GCN and produce something that might compete with the Maxwell based stuff.
     
  6. bart5986

    bart5986 Member

    Joined:
    Jan 31, 2006
    Messages:
    3,757
    Location:
    Brisbane
    By using GPU hardware and drivers that run DX12 well and without issues.
     
  7. chook

    chook Member

    Joined:
    Apr 9, 2002
    Messages:
    1,196
    Those pesky hardware manufacturers need to stop supporting API developments with their new products. It is going to ruin the industry if they don't.
     
  8. Sipheren

    Sipheren Member

    Joined:
    Aug 14, 2002
    Messages:
    3,375
    Location:
    Gold Coast
    I don't really want to get into this, but what you are saying makes no sense. DX12 and Mantle work well with GCN because they happen to be better at utilising its async abilities to keep all the CUs working. GCN wasn't built with DX12 or Mantle in mind.

    AMD went with HBM as it was better, and at the time the price wasn't too bad. By the time they were able to go into mass production (first with Vega), memory prices had gone insane and pushed the card costs too high; it's not like they could really have predicted that.

    It's like you guys think they shouldn't try to push new and interesting technologies and keep open standards. The reason most people don't like Nvidia is that they push closed systems and are happy to shit on their old hardware to hamper competition, so why would you want AMD to do the same types of things?
     
    SnooP-WiggleS likes this.
  9. flinchy

    flinchy Member

    Joined:
    Jan 18, 2007
    Messages:
    6,355
    Location:
    Northlakes
    they have competed at the top end in the last 15yrs though, multiple times.

    it didn't always work for them cos nvidia has mindshare, but they've definitely been the #1 pick multiple times.

    also, AMD isn't doing well in the midrange, the RX580 is cheaper and faster than the 1060, but the 1060 outsells it like 10:1 or some bullshit lol.

    memory prices are down but HBM is still expensive, and it always will be: the stacked dies and interposer make it inherently pricier than GDDR.
     
  10. bart5986

    bart5986 Member

    Joined:
    Jan 31, 2006
    Messages:
    3,757
    Location:
    Brisbane
    GCN was not developed by throwing a bunch of random ideas together, it was developed based on what AMD thought would perform well in the future, and they were right, but also way ahead of their time.

    Its also important to remember that AMD developed Mantle, which was taken and turned into Vulkan.

    I think the only thing AMD has done wrong is bet everything on something happening in the future, and it's been so much slower than they expected.

    This is why their OpenGL support is terrible and they won't fix it, because they are hoping the day will come that we won't rely on OpenGL. That day is likely in 2020 at the earliest.
     
  11. Sipheren

    Sipheren Member

    Joined:
    Aug 14, 2002
    Messages:
    3,375
    Location:
    Gold Coast
    My understanding was that GCN was developed primarily for the datacentre (which is why it works so well there, massive floating point performance) and, due to AMD's budget, it also had to be used for consumers. Mantle was developed to help developers get the most out of the arch, which they didn't seem able to do with DX11. MS also based DX12 on Mantle, but it does require far more effort from the devs to work well.

    Hopefully they can afford to build a better gaming arch that devs can work with, now that they have some R&D budget. They weren't wrong in going for the datacentre markets though; when you are small and broke you have to invest where the return is.
     
  12. Court Jester

    Court Jester Member

    Joined:
    Jun 30, 2001
    Messages:
    3,629
    Location:
    Gold Coast
    lol it worked so well it saw them relegated to an irrelevant position in the graphics card industry unable to compete with nvidia for many many many years now

    yes it worked REALLY well
    .....
    ....
    ...
    ..
    .
    FOR NVIDIA!!!!
     
    bart5986 likes this.
  13. Sipheren

    Sipheren Member

    Joined:
    Aug 14, 2002
    Messages:
    3,375
    Location:
    Gold Coast
    Another look at the arch. The price does seem to be a bit of a shame, but I guess they want to make as much as possible. Shame the efficiency hasn't really hit the mark.

     
  14. FIREWIRE1394

    FIREWIRE1394 Member

    Joined:
    Sep 20, 2012
    Messages:
    902
    I've decided I'm not buying any of these new cards (not even low end)
    They've forgotten where they came from! AMD is 50, ATI would have been 34, and the HD 5000 series is almost 10.

    IF they call the povo-spec Navi the RX 5450, and the single best video card they can make with Navi the 5970, I MAY change my mind. But unless that happens I'm going to quietly boo and hiss at them over the prices.
     
  15. Sipheren

    Sipheren Member

    Joined:
    Aug 14, 2002
    Messages:
    3,375
    Location:
    Gold Coast
    It is a bit unfair to blame AMD for the pricing; Nvidia are the ones that moved all the pricing upwards over the past few gens (due to no competition). AMD would have a hard time convincing investors not to charge the same price Nvidia does for the same performance, even if gamers won't like it.
     
  16. SnooP-WiggleS

    SnooP-WiggleS Member

    Joined:
    Sep 20, 2004
    Messages:
    3,013
    Location:
    Ballarat, Victoria
    I'll happily blame AMD for their pricing. They could sell it $100 cheaper, still make a good profit per card, and win marketshare and mindshare from the gamers Navi is designed for. But like Turing, they've got old cards to sell out of first.

    True, power efficiency isn't where it needs to be ultimately, but it's a good start. I suspect they've done an RX 580/RX 590 and bumped the clocks ~10% past the power-efficient sweet spot (i.e. drop clocks 10% and it uses ~30% less power kind of deal).

    I think AdoredTV's take is about as negative as you'll get from the tech press (similar to his 'classic Vega not good enough' video). As mAJORD pointed out already, the performance per CU is a big jump, which is a good sign for scalability. Die size is good for the performance again, and bodes well for a big Navi 20: it could easily double the CU count (with slightly lower clocks, further design tweaks and 7nm maturity to keep power at bay) without being a monster 2080 Ti style die. Moore's Law Is Dead has a more positive take on the Navi architecture release, with lots of speculation on what we'll get with Navi 20.
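    The clocks-vs-power trade-off mentioned above follows from the usual rule of thumb that dynamic power scales with frequency times voltage squared, and voltage tracks frequency near the top of the curve. A sketch of that rule of thumb, not AMD's actual V/f curve:

    ```python
    # Rough dynamic-power model: P ~ f * V^2, and with V scaling roughly
    # linearly with f near the top of the voltage/frequency curve, P ~ f^3.
    def relative_power(clock_scale: float) -> float:
        """Approximate relative power at clock_scale x the reference clock."""
        return clock_scale ** 3

    saving = 1 - relative_power(0.90)   # drop clocks by 10%
    print(f"~{saving:.0%} less power")  # ~27%, close to the '30% less' quoted
    ```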

     
    RnR, nope and Sipheren like this.
  17. maxrig

    maxrig Member

    Joined:
    Jul 28, 2011
    Messages:
    618
    Fark me, I just got conned again watching AdoredTV and his stooges' BS.
     
    Last edited: Jun 15, 2019
  18. itsmydamnation

    itsmydamnation Member

    Joined:
    Apr 30, 2003
    Messages:
    10,376
    Location:
    Canberra
    you really are a total moron......

    GCN pretty much stayed the same from 2012, to 2019

    NV went Kepler, Maxwell, Pascal, Volta, Turing in the same period, each having changes to the SM structures.

    In that time AMD gutted its R&D to stay alive (https://ycharts.com/companies/AMD/r_and_d_expense) and prioritized Zen development.
    Despite that, GCN cards were still able to compete across the board. GCN was so far ahead in terms of capabilities (Tahiti being Tiled Resources tier 3, etc.) that they have still been able to produce products that compete feature-wise without having to actually develop any new features. There is a reason Stadia is using GCN based cards, for example: NV still can't actually do async compute, while GCN cards do fine-grained allocation of resources (SR-IOV), i.e. run multiple users from one card and give each a guaranteed performance level.

    So from that perspective GCN has done really well; if AMD had had a uarch like Kepler at the time of Tahiti's release, they would be in a far worse position right now.

    But it's ok CJ: while your beloved Intel can't do anything but version 83 of Haswell on the same 14nm process, AMD is going to have plenty of money to resume GPU development :).
     
    Zenskas, RnR and Sipheren like this.
  19. nope

    nope Member

    Joined:
    May 8, 2012
    Messages:
    2,261
    Location:
    nsw meth coast
    I blame both AMD and Nvidia for the stupid pricing. There needs to be another entrant in the ring that disrupts the market, like Chinese RAM and, ironically, AMD Zen.
     
    Sipheren likes this.
  20. Sipheren

    Sipheren Member

    Joined:
    Aug 14, 2002
    Messages:
    3,375
    Location:
    Gold Coast
    With some luck Intel might be able to shake things up; they have the budget to build a good GPU and could price it pretty well...
     
    nope likes this.
