Discussion in 'Video Cards & Monitors' started by lowdog, Dec 7, 2018.
I'm not your bud guy .
I'm not your guy, pal.
lol this thread has turned into a power saving convention.... I guess nothing else to talk about
well not really, as AMD didn't give Nvidia a FU, and in fact failed miserably
Not sure about that. There were a few technologies that didn't get enabled in the original Vega. There is every likelihood that they are enabled in Vega 2 (Radeon VII: V for Vega, II being the Roman numeral for 2).
If the 7nm Vega has working implicit primitive shaders, Draw Stream Binning Rasterization, and NGG, that could be a big thing.
Also, Nvidia scoffs at Vega, saying it doesn't have AI. But Vega does have deep learning. What's the difference? Likely not much.
Ray tracing cores are interesting too, but AMD have been working for years on open-source software solutions that use async compute. Should be good.
yeah yeah, they've been doing it for years but never really catch up.
boasting about BS huge HBM memory bandwidth and huge numbers, but performance is below standard.
when I see it I'll believe it. still, good on them for adding healthy competition, which affects the prices we pay
It has been years since AMD had the top dog. AMD had the fastest card when the R9 290X came out at the end of 2013, so five years ago, and then Nvidia didn't have a faster card until mid 2015. That wasn't too shabby.
AMD faltered when they just rebranded the 200 series as the 300 series, and then didn't do so well in the high-end GPU space. The Fury needed more memory; HBM1 didn't cut it. Then came the mid-range RX 400 and its rebrand as the 500 series: excellent mid-range, crazy good. Remember how many people clamored for the RX 480!
Now we're at Vega and the RX 500. Vega is fine, but they should be adjusting the voltages and memory frequency better out of the box; a simple tweak in the drivers gives an 11% performance boost on my old reference Vega 64. The RX 580 is great for the price, and the RX 570 kills a GTX 1050. But for some reason people still get Nvidia.
What I'm getting at here is that people are stupid, even when AMD have superior products, or products that are on par but significantly cheaper. People still buy Nvidia. For no good reason. Sheeple is all I can think of.
The issue with HBM, btw, is that it's better suited to the DirectX 12/Vulkan APIs. It only has issues with shit old DX11 and below. Time to retire those old engines.
The last single-GPU flagship card AMD produced that was better than Nvidia's was the 5870; I know, I still own one. Since then I have owned a 680, 780 Ti, 980 Ti, 1080 Ti and currently a 2080 Ti.
I highly doubt this trend is going to change in the next generation of GPUs unless Nvidia has issues moving to 7nm.
As an FYI, the 290X wasn't the fastest card. It was better value and almost as fast as the Nvidia offering, but still slower.
Their products are subpar and the prices approximately the same.
Why would anyone want a space heater in their PC that costs as much as an Nvidia card and delivers less performance and many more headaches?
This card would have been a massive F U to nvidia, a giant win in the minds of everybody who hates RTX pricing, and Jensen would be right to be worried to the point of ranting
... if it was $150 cheaper than the 2080
But HBM is expensive so it isn't and the collective reception is "meh"
Also, Nvidia cards can be significantly undervolted at stock performance. On the MXM 1070s I was playing with last year under constrained cooling, 0.875V was enough to run the stock boost frequencies up to 1875MHz, down from 1.062V, for a 20-30W saving off the 115W limit.
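As a rough sanity check on those numbers, dynamic power scales approximately with the square of voltage (P ≈ C·V²·f), so the claimed saving is in the right ballpark. A minimal sketch of that estimate, using the 115W / 1.062V / 0.875V figures from the post above (this is a crude model, not a measurement):

```python
# Rough estimate of power saved by undervolting, using the common
# approximation that dynamic power scales with voltage squared.
# Figures are taken from the forum post, not measured here.

def dynamic_power_after_undervolt(p_watts, v_old, v_new):
    """Scale a power figure by (v_new / v_old)^2."""
    return p_watts * (v_new / v_old) ** 2

p_limit = 115.0  # W, stock power limit of the MXM 1070 in the post
p_new = dynamic_power_after_undervolt(p_limit, 1.062, 0.875)
print(f"estimated draw: {p_new:.0f} W, saving ~{p_limit - p_new:.0f} W")
# estimated draw: 78 W, saving ~37 W
```

The crude V² model predicts ~37W saved, a bit more than the observed 20-30W, which makes sense: not all of a board's draw is voltage-scaled core power (memory, fan and leakage don't follow V²).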
Again, the card that has been announced, isn't the card that this thread is talking about...
For me the last lot of good ATI/AMD cards were the 9700 Pro/9800 XT/X850 PE AGP series; since then...
X1900XT, can't remember the brand but I think it was HIS. Anyway, the fan curve was set way too conservative and didn't ramp up much, so the core was overheating all the time (even in Tasmania, which is not known for hot weather), and I got weird triangles occurring in BF2142. So I downloaded the aftermarket ATITool, but that only let me set steps for the fan speeds, so if say the max core temp was 90C you would do things like set the fan at 100% for 80C, 90% at 70C, 80% for 60C and so on. The end result was that when the temperature clicked over a threshold, the initial surge in power to increase the fan speed blue-screened my computer. It's not much fun playing BF2142 when it blue screens about once an hour. How stupid do these people have to be to not get something as simple as the fan speed curve right, and especially to not really ramp it up a lot when it gets close to the max temp of the GPU core? AND this was the top-of-the-range card at the time as well, not some low-end or mid-range POS.
Then the 7970 was introduced, and it ran neck and neck with the matching Nvidia card in ALL OTHER games except the only game I was interested in playing at the time, BF3, which it ran at 70% of the speed of the Nvidia card. So thumbs down for that one, even though after a year or so of driver releases it matched the speed; by then I was already on Nvidia cards and couldn't see a reason for going off them.
Didn't really notice, as I was probably running GTX 670s in SLI, which worked well for BF3 and BF4 @ 1080p at a relatively inexpensive cost.
I remember when the first AMD card (whatever it was) came out with HBM memory while the lesser cards below it had ordinary GDDR memory. Did this translate to a huge increase in frame rate due to the spectacular increase in memory bandwidth? Well no, it didn't: the HBM cards were only slightly faster, and only consistent with the increase in power of the GPU core, so otherwise a complete letdown.
Maybe because I'm running a GTX 1070, so I'm not interested in any RX 570, RX 580, GTX 1050 or GTX 1060 mid-range card regardless of the price. Having said that, I bought a second GTX 1070 for the express purpose of running SLI in BF1, and that seems to be a dud, and same again with BF5, so the next card I'm looking for will be a single card and could possibly be an AMD one if it exists. I'm not interested in a top-end AMD card that's 80% of the speed of a top-end Nvidia card even if it is only 50% of the price, and neither am I interested in the overpriced RTX cards, so I'll stay on 1080p ultra @ 100+ FPS for as long as needs be with my current card until I can get something reasonably priced, regardless of whether it's an AMD or Nvidia card.
The problem with DX12 for Nvidia is that it cuts frame rates by about 20% and is visually indistinguishable from DX11, so no one uses it. What does DX12 do for AMD cards? Is there an increase in frame rates? Better picture quality at the same frame rates?
In the past, Nvidia cards worked well, and a second card purchased later on, when they were either cheaper or second-hand, got about 80% extra frame rate in SLI. So if you needed more grunt than a single card could provide (or the single card was way too expensive), that's why people bought them. SLI was always a lot more mature and worked much better than AMD's CrossFire.
Everyone has APUs. Semi-custom is the key for AMD now, and is what they've focused on for a large chunk of their revenue. You don't need to have the leading GPU IP if you can meet a customer's specific needs. Why do you think Intel went with Vega for their new midrange laptop SoC? Clearly not because it's 'the best'. Ditto the two major consoles. The only reason the Switch is an Nvidia SoC is because it's basically an off-the-shelf item (Tegra X1).
Sorry, but the 290X was dead equal with a Titan at worst, and slightly faster with the 2nd BIOS (but noisy), at a significantly lower price.
Nvidia then countered with the 780 Ti not long after and edged ahead. And that's the crux of it anyway: it was the last time Nvidia ever had to 'react' and counter an AMD release. That's the sort of competition we need.
Maxwell was the crushing blow. It was, and to a large extent still is (Turing is still essentially the same architecture on the shader side of things), an architecture focused 100% on gaming workloads, wasting no space and no power on compute (hence why it lost its advantage under compute workloads in modern games, i.e. async compute in DX12/Vulkan).
AMD clearly had no intention to counter / follow suit down this path with a whole new architecture. They'd committed to GCN, likely didn't even have the R&D budget to branch off two different architectures, and instead banked on increasing levels of compute in the dev industry, lower-level APIs and higher resolutions to remain competitive. But this just didn't quite pay off, particularly in regard to timing. Look at Fiji now: save for its lack of VRAM, when you throw modern titles at it, it's more than competitive with Maxwell.
None of this is any excuse for Vega being so off the mark though. There were many opportunities to bridge the gap somewhat. I honestly just don't think the motivation was there from the upper end of the organisation, which was very focused on Zen.
My point was that if it were just a bit cheaper, the VII could have been an FU, a major mindshare blow against Nvidia, since the market is ripe for some price competition; but it just matches the non-A partner 2080 MSRP of US$699.
Maybe this shows how much AMD really wants to undercut Nvidia. If they don't do it now against the 2080, even for a token victory of something 10 bucks cheaper, will they do it later this year with whatever Navi comes up against: 2050/2060/2070?
AMD have moved away from the PC master race and embraced the console peasant race.
Isn't this a bit of a warning for the rumored card mentioned by the OP?
If the HBM is this expensive (half the reason for the price of this card), doesn't it mean getting the card for $250US is probably out of reach simply based on that fact?
Depends which memory they use I guess. They still have GDDR based cards down the line.
It's chalk and cheese in terms of product placement. This is a low-volume high-end card, not a high-volume low/mid-range card. In one respect the Nvidia CEO is right: AMD probably did decide late in the piece to release the Vega VII, but only because Nvidia's 2080 pricing allowed it to do so.
Vega VII is a compute card 1st, gaming 2nd. Vega VII is just a rebranded Instinct MI50 GPU. AMD added a number of things to 7nm Vega/Instinct that have nothing to do with gaming:
- brings support for half-rate double precision (up from 1/16 rate), and AMD is supporting new low-precision data types as well. These INT8 and INT4 instructions are especially useful for machine learning inferencing
- the GPU adds another pair of HBM2 memory controllers, giving it 4 in total. Combined with a modest increase in memory clockspeeds to 2Gbps, AMD now has a full 1TB/sec of memory bandwidth in the GPU's fastest configuration. This is even more than NVIDIA's flagship GV100 GPU
- as this is an enterprise-focused GPU, it offers end-to-end ECC, marking the first AMD GPU to offer complete ECC support in several years
- the GPU supports the recently finalized PCIe 4 standard
- the GPU also includes a pair of off-chip Infinity Fabric links, allowing Radeon Instinct cards to be directly connected to each other via coherent links
Points taken from https://www.anandtech.com/show/1356...ct-mi60-mi50-accelerators-powered-by-7nm-vega
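The 1TB/sec figure in that list follows directly from the bus width: each HBM2 stack exposes a 1024-bit interface, so four memory controllers give a 4096-bit bus. A quick back-of-envelope check of the arithmetic:

```python
# Peak memory bandwidth = pins * data rate per pin, divided by 8 bits/byte.

def peak_bandwidth_gb_s(stacks, bus_bits_per_stack, gbps_per_pin):
    """Peak bandwidth in GB/s for a stacked-memory configuration."""
    return stacks * bus_bits_per_stack * gbps_per_pin / 8

# 7nm Vega / Instinct MI50-MI60: 4 HBM2 stacks, 1024-bit each, 2 Gbps/pin
bw = peak_bandwidth_gb_s(stacks=4, bus_bits_per_stack=1024, gbps_per_pin=2.0)
print(f"{bw:.0f} GB/s")  # prints "1024 GB/s", i.e. the quoted ~1 TB/s
```

For comparison, GV100 has the same 4096-bit bus but runs its HBM2 slower (about 1.75 Gbps on the Tesla V100, roughly 900 GB/s), which is why the list can claim the bandwidth edge.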
They walked away from PCs; they believed the discrete GPU market would be dead and buried by now. TBH they put themselves in this position, and normally I'd say they deserve to fail hard, pretty much like they are doing. However, it does leave the market with only one decent provider; maybe Intel can in fact bring some competition.