AMD’s Next Gen x86 High Performance Core is Code Named “Zen”

Discussion in 'AMD x86 CPUs and chipsets' started by Frontl1ne, Sep 11, 2014.

  1. The OC

    The OC Member

    Joined:
    Dec 11, 2004
    Messages:
    1,595
    Location:
    Melbourne
    No different to Intel: https://www.techspot.com/article/1171-ddr4-4000-mhz-performance/page3.html

    DDR4-2133 to DDR4-3000 gives up to a 16-18% increase in bandwidth-sensitive games like ARMA and Fallout 4. The jump from DDR4-2133 to DDR4-4000 is 37% (!) in Fallout 4 and 26% in ARMA.
     
  2. Apokalipse

    Apokalipse Member

    Joined:
    May 29, 2006
    Messages:
    4,250
    Location:
    Melbourne
    Yeah, but that's not an architecture advantage; it's cheating by making code run slower on AMD/VIA CPUs.
     
  3. The OC

    The OC Member

    Joined:
    Dec 11, 2004
    Messages:
    1,595
    Location:
    Melbourne
    I present numerous links to data showing the IPC gap being greater than your claimed 5% (plus data showing Kaby Lake being significantly faster than Haswell), and instead of presenting data or links that counter my point, you completely ignore my arguments, go off on a tangent, and accuse Intel of cheating in benchmarks. That's an almighty accusation, but hey, it's an easy cop-out. I can neither prove nor disprove what you are claiming here, so if your 'closing argument' is that 'Intel cheats' rather than debating the topic properly, then there is really nothing more to be said.

    I'm done here. Not going to derail this thread any further. I've said my piece regarding the issue of IPC; people can make up their own minds on this matter. I'm not going to get baited into this whole 'ethics' debate.
     
    Last edited: Oct 15, 2017
  4. Apokalipse

    Apokalipse Member

    Joined:
    May 29, 2006
    Messages:
    4,250
    Location:
    Melbourne
    Really? You're not aware of Intel's compiler cheats? I thought it was common knowledge by now:
    http://www.agner.org/optimize/blog/read.php?i=49

    There are even patches made to remove it:
    https://forums.guru3d.com/threads/i...intel-c-compiler-will-criple-your-cpu.403826/

    Basically, code built with Intel's compiler checks whether CPUID returns the "GenuineIntel" vendor string, and runs a slower code path if it doesn't.
    There's a benchmark where somebody spoofed the CPUID vendor string on a VIA CPU to say "GenuineIntel" and got better performance.

    Heck, if you don't think it applies to Ryzen, you're wrong. Somebody even spoofed the "GenuineIntel" CPUID string on Ryzen CPUs and got better benchmark results (on Ryzen this can only be done through a virtual machine). You should be able to find it by googling "Intel Ryzen".
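
    For anyone who hasn't seen how that kind of dispatch works, here's a minimal sketch of a CPUID vendor-string check. It is not Intel's actual dispatcher code, just an illustration of the mechanism, and it assumes GCC or Clang on x86:

```c
/* Minimal sketch of a CPUID vendor-string check (illustrative only,
 * not Intel's dispatcher). Assumes GCC/Clang on x86 (<cpuid.h>). */
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

static int is_genuine_intel(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13];

    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 0;

    /* The 12-byte vendor ID is returned in EBX, EDX, ECX (in that order). */
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);
    vendor[12] = '\0';

    return strcmp(vendor, "GenuineIntel") == 0;
}

int main(void)
{
    if (is_genuine_intel())
        printf("GenuineIntel -> a dispatcher like this would pick the fast path\n");
    else
        printf("not GenuineIntel -> a dispatcher like this would fall back to a slower path\n");
    return 0;
}
```

    The patches linked above, as I understand them, work by neutralising exactly this sort of check in already-compiled binaries so the fast path runs regardless of vendor.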

    But OK, you've apparently chosen to have your feelings hurt and don't want to talk about it.
     
    Last edited: Oct 15, 2017
  5. The OC

    The OC Member

    Joined:
    Dec 11, 2004
    Messages:
    1,595
    Location:
    Melbourne
    So Cinebench is the only benchmark we should pay attention to, and the only one where Intel isn't able to fudge the results. It is the one and only standard for measuring IPC. Gotcha. :thumbup:

    Seriously, I don't want to start a shit storm here. It's the AMD forum, after all. If you think that the data I presented is completely invalidated, then so be it. *shrugs*
     
  6. Apokalipse

    Apokalipse Member

    Joined:
    May 29, 2006
    Messages:
    4,250
    Location:
    Melbourne
    Now you're just making a strawman instead of addressing the point.
    I didn't say to only use Cinebench. It's just a good benchmark to use, one that measures only the CPU and isn't crippled by Intel's compiler.

    It is a fact that Intel's compiler cripples the performance of non-Intel CPUs, so you shouldn't use programs compiled with Intel's compiler to see how IPC compares. You want apples to apples.
     
  7. mAJORD

    mAJORD Member

    Joined:
    Jun 4, 2002
    Messages:
    11,786
    Location:
    Griffin , Brisbane
    If you look at Zen's architecture in its current form, its execution resources pretty much fall in line with Haswell's, though geared more towards throughput than single-threaded IPC. It's probably worth bearing in mind that the architecture is designed for throughput and has a very high SMT yield, as can be seen in The Stilt's results.

    When one talks about IPC, it has traditionally meant sending a single thread down a core, known as '1T' mode. This is most accurately measured with SMT completely disabled; in fact, as far as I'm aware, you have to disable it to truly measure single-threaded IPC, and I don't actually know if The Stilt did this. Will have to ask.

    This is quite important in Zen v1, because several resources are statically partitioned in SMT mode, which means they're split 50/50 between the two threads regardless of the CPU time required by each. When the OS schedules lightly threaded work that won't completely occupy a core's two threads, but is still using the core for other lighter threads, this can be problematic, as the 'main' thread that needs maximum performance only gets half the resources.

    Mike Clark openly conceded this is not ideal, and it's something that will no doubt be eliminated in Zen 2 and will be one of the things that increases single-threaded IPC, assuming it's implemented successfully.

    It's also possible to measure 'IPC' in terms of throughput: send two threads down a core and measure performance that way. This is what you're measuring when you compare SMT/HT-enabled processors with equal core counts. Ryzen has very good throughput, which is why some clock-for-clock test results say Ryzen has near-equal IPC to Skylake/KBL/CFL.
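
    If anyone wants to see the 1T vs 2T distinction for themselves, here's a rough sketch of the idea. It is not The Stilt's methodology; it assumes Linux with GCC, and that logical CPUs 0 and 1 are SMT siblings of the same core, which you'd confirm via /sys/devices/system/cpu/cpu0/topology/thread_siblings_list:

```c
/* Rough sketch: run a fixed loop on one logical CPU (1T), then split
 * double the work across two assumed SMT siblings of the same core (2T),
 * and compare throughput. Assumes Linux; build with: gcc -O2 -pthread */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <time.h>

#define WORK (400u * 1000u * 1000u)   /* iterations per thread */

static void *worker(void *arg)
{
    int cpu = *(int *)arg;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    volatile unsigned long acc = 0;
    for (unsigned i = 0; i < WORK; i++)   /* dummy dependent work */
        acc += i ^ (acc >> 3);
    return NULL;
}

static double run(int nthreads, int cpus[])
{
    pthread_t t[2];
    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (int i = 0; i < nthreads; i++)
        pthread_create(&t[i], NULL, worker, &cpus[i]);
    for (int i = 0; i < nthreads; i++)
        pthread_join(t[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &b);
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    int one[] = {0};
    int two[] = {0, 1};               /* assumed SMT siblings */
    double t1 = run(1, one);          /* one thread on one logical CPU   */
    double t2 = run(2, two);          /* two threads on the same core    */
    /* 2T run does twice the total work, so gain = 2*t1/t2 */
    printf("1T: %.2fs, 2T (same core): %.2fs, SMT throughput gain ~%.2fx\n",
           t1, t2, 2.0 * t1 / t2);
    return 0;
}
```

    The dummy loop is only a stand-in; real SMT yield depends heavily on the instruction mix, which is exactly why Zen's figure varies so much between workloads.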


    As for AVX2: it's completely irrelevant other than for academic purposes (and that's why The Stilt tested it) to compare AVX2 or AVX-512 single-threaded IPC, because, put simply, virtually ALL AVX2 code is able to scale to x number of cores and, as such, would also take full advantage of SMT. It also means putting smaller, more power-efficient cores in higher numbers (moar cores) is a completely valid method, and one that doesn't compromise integer and legacy FP/SIMD performance.

    The wider AVX2 and AVX-512 data paths Intel has gone with come with such significant frequency and power compromises that it makes IPC comparisons a little pointless; that's why there are AVX offsets.

    E.g. if you have 6 cores with 1.3x the AVX2 throughput in typical applications (just picking out some of the AVX2-heavy benchmarks), but they can only run at 4 GHz at 95 W, versus 8 cores with 1.0x throughput that can run at 3.7 GHz, do the maths:

    6 × 1.3 × 4.0 = 31.2
    8 × 1.0 × 3.7 = 29.6

    Virtually identical performance at the end of the day. And this is precisely why AMD did not go for single-cycle 256b or 512b AVX, and probably won't any time soon. I guess there's a chance the density and perf/watt uplift of 7 nm might make it a possibility, but I still think it's unlikely.
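
    Putting that back-of-the-envelope formula (cores x per-core AVX2 scaling x clock) into a trivial snippet, using the hypothetical numbers above rather than anything measured:

```c
/* Same back-of-the-envelope maths as above, generalised.
 * Numbers are the hypothetical ones from the post, not measurements. */
#include <stdio.h>

static double relative_perf(int cores, double avx2_scale, double ghz)
{
    return cores * avx2_scale * ghz;   /* cores x per-core scaling x clock */
}

int main(void)
{
    printf("6 wide cores  : %.1f\n", relative_perf(6, 1.3, 4.0)); /* 31.2 */
    printf("8 narrow cores: %.1f\n", relative_perf(8, 1.0, 3.7)); /* 29.6 */
    return 0;
}
```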



    It's not just about being at the start of the architecture's life. When you consider that the design is not as 'geared' towards single-threaded IPC as Intel's architecture, it is quite unexpected that it would be lagging by only ~10% IPC on general integer/FP workloads. The distributed integer schedulers and dedicated execution resources are not as flexible as Intel's huge unified int/FP scheduler and the port execution system used since the P6. It really shouldn't have IPC this good, and it seems to highlight the difficulty Intel has in scaling the instruction window larger and translating that into single-threaded performance.

    Simply scaling these resources up should, in theory, increase 'IPC' more linearly than what we've seen from Intel, with a likewise more linear increase in transistors/area/power, and that's what I'd personally expect to be the case with Zen 2 and 3.


    It's certainly true that there's a larger percentage delta in peak frequency than in IPC, but as I've already said, you have to look at the target markets before 'suggesting' what frequency AMD needs to scale to. Literally gather together all the SKUs with higher all-core and single-core turbo frequencies than AMD's and look at their share of the market. Probably not worth rushing to oust the 8700K, for example, since you can't even buy it. ;)
     
  8. The OC

    The OC Member

    Joined:
    Dec 11, 2004
    Messages:
    1,595
    Location:
    Melbourne
    Very informative post. Thank you :thumbup:
     
  9. The OC

    The OC Member

    Joined:
    Dec 11, 2004
    Messages:
    1,595
    Location:
    Melbourne
    In your opinion, what other benchmarks (apart from Cinebench) can we use, then? Genuine question. Is Blender OK? Or is Intel cheating there too?
     
  10. Bertross

    Bertross Member

    Joined:
    Feb 2, 2009
    Messages:
    13,782
    Location:
    East Gipps
    At the end of the day, the best way to judge is to use the apps in your day-to-day work and see if a CPU saves time or increases fluidity in your workload/games.

    For synthetics, just grab a lot of the popular ones and test, to compare what the reviewers are showing with what you previously had.

    You will notice in a lot of cases (1080p gaming aside) that unless you are encoding or rendering etc., you won't see much difference with a lot of the newer CPUs. We have come to the point where HEDT is merging with desktop, and unless you need the cores, go for bang for buck. The 8700K is pretty damn good alongside a 1700X/1700. The 1800X needs to come down in price.
     
    Last edited: Oct 16, 2017
  11. Court Jester

    Court Jester Member

    Joined:
    Jun 30, 2001
    Messages:
    3,634
    Location:
    Gold Coast
    Why put 1080p gaming aside?

    It is as valid a use case as any, and there Ryzen can't come close to touching Intel and is often 20+% behind.
     
  12. dirkmirk

    dirkmirk Member

    Joined:
    Apr 3, 2002
    Messages:
    5,666
    Location:
    Shoalhaven - Gods Country
    I'm a bit out of the loop when it comes to modern gaming, but higher resolution gaming used to mean progress.

    I can accept that 4K is the level where you really need exponential power increases to game smoothly, but if 1080p was a standard resolution 10 years ago, why can't 1440p be the standard resolution today?

    Sure, 1080p shows a difference between more powerful CPUs, but would someone really choose a CPU because it games better at 1080p when that's largely irrelevant at 1440p?

    If it's not a compelling choice, there is no point in future-proofing, but then again I'm a tightass...
     
  13. Court Jester

    Court Jester Member

    Joined:
    Jun 30, 2001
    Messages:
    3,634
    Location:
    Gold Coast
    1080p is still the most popular gaming resolution, and there are many high-Hz 1080p screens bought by people who want high frame rate gaming.

    Also, 1080p shows the difference because it is not GPU-bottlenecked with faster GPUs. In a couple of years, when GPUs get more powerful, I would expect the GPU bottleneck at 4K resolutions to be largely removed, and we will see similar differences at that resolution to what we currently see at 1080p. 1080p is testing the CPU's capability in gaming; 4K currently just shows that a GPU bottleneck still exists at that resolution.

    Hence why 1080p gaming scores are relevant today if you don't always upgrade your CPU/mobo and just upgrade the GPU.
     
  14. adamsleath

    adamsleath Member

    Joined:
    Oct 17, 2006
    Messages:
    19,089
    Location:
    Sunnybank Q 4109
    Sounds interesting, but I feel very dumb now :lol:

    This seems significant, i.e. it may hamper single-threaded operations?
    ------
    Regarding clock speed limits... I googled this ----^ which I'm guessing boils down to how responsive the transistor tech is, and 'the weakest link in the chain'
    :confused:
    and the physical nature of the switching characteristics of the transistors... there must be a reason for the exponentially rising voltage required to reach higher clock speeds... not just thermal limits...

    https://electronics.stackexchange.com/questions/122050/what-limits-cpu-speed - more explanations
     
    Last edited: Oct 16, 2017
  15. Bertross

    Bertross Member

    Joined:
    Feb 2, 2009
    Messages:
    13,782
    Location:
    East Gipps
    I don't use it, that's all. I haven't in 10 years. I've got nothing against high-Hz 1080p LCDs for competitive games.
     
    Last edited: Oct 16, 2017
  16. gregpolk

    gregpolk Member

    Joined:
    Mar 4, 2004
    Messages:
    7,404
    Location:
    Brisbane
    Because it's meaningless to users who don't game at high-Hz 1080p. And the hypothesis that in five years' time, when you're upgrading the GPU but not the CPU, you'll be bottlenecked by the CPU if you went AMD instead of Intel doesn't seem to have any basis in reality, because in Ryzen's case it assumes there will be no advance in the use of multithreading in game development. With even Intel now adopting higher core count mainstream CPUs, Ryzen doing well at hitting the mainstream, and consoles pushing for higher core counts, that seems ludicrous.

    So if it makes no difference to your gaming now, and there's no evidence to show that it'll make a difference to your gaming in the future, but it does make a difference to your other workloads now, then it's a very sensible purchase.

    Yes, 1080p is the most popular resolution, but so is 60 Hz, not 200. But yes, if you're a professional Counter-Strike player, I agree that Kaby or Coffee is a better choice.
     
  17. Sledge

    Sledge Member

    Joined:
    Aug 22, 2002
    Messages:
    8,693
    Location:
    Adelaide
    This is still a thing???
    :confused:
     
  18. Bertross

    Bertross Member

    Joined:
    Feb 2, 2009
    Messages:
    13,782
    Location:
    East Gipps
    I would say e-sports games more than anything. Gotta hit those 500 frames!
     
  19. adamsleath

    adamsleath Member

    Joined:
    Oct 17, 2006
    Messages:
    19,089
    Location:
    Sunnybank Q 4109
    and depending on your graphics card and settings it's 0% behind :lol:

    and I totally agree, multicore is the way it's going. A couple of hundred extra jiggahurts per year on average won't cut it.

    Didn't a 2600K (2011) hit 4.8+ GHz?

    I s'pose transistor count goes up with every node shrink... and it looks like 5 nm is possible even now.

     
    Last edited: Oct 16, 2017
  20. gregpolk

    gregpolk Member

    Joined:
    Mar 4, 2004
    Messages:
    7,404
    Location:
    Brisbane
    Of course. CS:GO is huge. Quite good to watch as well; it's quite well set up for spectators and commentary.

    That's what I meant really, with CS and Overwatch being the key pro games where high refresh will help. I'm not sure playing Dota at 240 Hz will make any difference (although I cap mine at 120 instead of 60 because it makes me more of a l33t pro).
     
