Z77X-UD5H Performance Review

If you read my preview of the Z77X-UD5H, you saw a physical review in which I went over every section of the motherboard top to bottom, examined every chip, and described what to expect when you drop a CPU into the board. Today that is exactly what I will do: drop a Sandy Bridge 2600K into the UD5H to gauge how the system performs, how the BIOS behaves, and how overclocking goes, and to examine many of the Z77 chipset's new goodies.

For starters, one of the biggest gains of Z77 for most users is the inclusion of native USB 3.0, which is not only great for performance, but also frees board makers to use more 3rd-party controllers for things such as dual NICs and extra SATA 6Gb/s ports. There are only 8 PCI-E lanes from the PCH to the 3rd-party ICs, and previously at least two of those were occupied by USB 3.0 controllers.

There is one other interesting feature that isn't part of the Z77 PCH, but has been chosen to launch alongside Z77: Lucid's MVP. I have personally been testing and playing with MVP for a while now; I have analyzed it, researched it, and finally put it through a microstuttering analysis to see what is really happening. I don't think you will see an analysis quite like this anywhere else on the web, so please sit back, relax, and enjoy the review.

I will start off where I left off, so if you didn't read my physical review, please do so now. It isn't required, but it has some sexy shots of the board and explanations of what everything does: Z77X-UD5H Physical Review

After testing the Z77X-UD5H with both 2nd and 3rd generation CPUs, I can say the new boards are solid.
Here are the main sections:

BIOS Gallery
Overclocking Results
Benchmarks using the 2600K
MVP Testing and Results
I/O Testing
GIGABYTE Auto-Tune (New OC Program)
GIGABYTE Utilities

First off, the BIOS:

If you saw shots of GIGABYTE's X79 BIOS, this isn't much different, except that the board pictured is a UD3H for all full-sized models, while the mATX boards all show a Z77MX-D3H. Two things have changed, though. First, the BIOS is much more responsive; it is now actually feasible to use a mouse in the BIOS, as the cursor moves as you move it and not two seconds after. Second, the BIOS seems very stable, the features are abundant, and we have a lot of extra memory timing settings. There is a Stability Level setting in the memory section, which is important for 2800MHz+ memory OCing; it doesn't change any latency, but rather a resistor setting.

Fan control for all fans is there. You can pick whether you are using 3-pin or 4-pin (voltage or PWM) mode, and you can set the CPU SMART fan speed separately from that of the other system fans. It isn't done individually per connector, which some still want, but this way is a bit easier.

There are two major improvements for people who never use 3D Mode. First, if you exit out of Advanced Mode, you will re-enter the BIOS in Advanced Mode instead of 3D Mode. Second, Num Lock is engaged by default, and the BIOS won't freeze if you toggle it! This new UEFI version is much better than X79's.

Sandy Bridge on Z77 Overclocking Results:

Here are the LLC readings for the UD5H with BIOS F6G. The GIGABYTE Z77 boards are a HUGE improvement over the Z68 boards; while the Z68 boards had a lot of issues, these boards are the exact opposite. Their OC potential is there as well, even in the lower phase-count models. It is amazing how much work has gone into making the BIOS and hardware good enough to push your CPU and memory to their limits.
Extra BIOS options have been added for memory OCing, so users can tune to their heart's desire and find tweaks others did not, both on other boards and on the same board. That is what GIGABYTE's Z77 lineup offers: they are using their ability to turn out a quality product, and that quality is what makes their boards OC so well. Let me say this: the problems that GIGABYTE's Z68 and X79 boards have or had are all gone for Z77. I will talk more about OCing and the BIOS when the 3rd generation Core processors are released.

My new OC bench uses a Zalman liquid cooling system, the brand-new Zalman CNPS20LQ, which comes in a pretty easy-to-fit form factor. It can be mounted wherever you can fit most 120mm fans, and the radiator isn't that thick, yet cooling performance is really great. I was very surprised at how cool it kept my CPU. I have an Antec Kühler 920, and this Zalman has a smaller radiator, yet with only one fan (which runs at lower RPM) it cooled better than the Antec. The Zalman was also easier to install than the Antec, because you don't need a screwdriver.

I used the same Kingston 16GB quad-channel kit, but this time in dual channel on Sandy Bridge. I was easily able to use the XMP profile, and with a slight bump in QPI/VTT and IMC voltage I got this baby up and running. Please note that you need to set the IMC voltage exactly 0.005v lower than the QPI/VTT for the voltages to increase or decrease from stock. For 3rd generation CPUs this isn't an issue, as you don't need to change these voltages for anything on air, but for Sandy Bridge you do need to change them, so remember to do that.

Blue seems to be the color of Z77 for GIGABYTE. I personally like black better, but I can't always have it my way. BCLK does very well on this board; with Sandy Bridge we are doing pretty well for air cooling.
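The voltage pairing above is easy to get wrong, so here is a tiny Python helper that applies the rule as stated: on Sandy Bridge, the IMC voltage must sit exactly 0.005 V below QPI/VTT or neither setting will move off stock. This is purely illustrative; the example voltages are not recommendations for any particular chip.

```python
# Illustrates the Sandy Bridge voltage pairing described above:
# IMC voltage = QPI/VTT - 0.005 V, or the settings stay at stock.
# Example values only, not OC recommendations.

def imc_for_vtt(qpi_vtt: float) -> float:
    """Return the IMC voltage to pair with a given QPI/VTT setting."""
    return round(qpi_vtt - 0.005, 3)

for vtt in (1.050, 1.100, 1.150):
    print(f"QPI/VTT = {vtt:.3f} V -> IMC = {imc_for_vtt(vtt):.3f} V")
```

So for a QPI/VTT bump to 1.100 V, you would dial in 1.095 V on the IMC before saving and exiting.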
Benchmarks:

I had to go back and basically redo all my older 3D benchmarks, because for this review I updated all drivers, BIOSes, and benchmark versions to the most current. I only do this at a big launch because I like to keep everything consistent, down to the 3DMark Vantage version and the GPU driver version when I bench 3DMark, for instance. Variations can cause a lot of change; some of these scores are hundreds of points higher than my previous scores due only to GPU driver and benchmark version changes (which shows you how long it has been since I updated my drivers and benchmarks). All benchmarks were run at 4.5GHz to keep comparisons clock-for-clock.

MVP Analysis:

First of all, if you want to know what everyone else knows about MVP, beyond what I have to say, please read Lucid's whitepaper on MVP. It is really easy to read and explains everything better than anyone else could. Here is the whitepaper. It isn't overly technical, and all the pictures you see in articles like AnandTech's come from it.

Now, is it just me, or are we seeing huge boosts in iGPU technology recently? To me that points to iGPU performance increasing at a much faster rate than dGPU performance, and it is leading to iGPUs becoming genuinely useful for processing. As we saw, Sandy Bridge's iGPU is extremely good at video transcoding, but is it just as good at discrimination? Discriminating between frames that should and should not be processed is a huge task, and it has been offloaded to the iGPU with the introduction of Lucid MVP. It is what gives users that HUGE boost in FPS, and it is what all the hype is about.

So what the heck is MVP? In short, MVP is a program made by Lucid, the folks behind the iMode/dMode switching program we saw on Z68 (Virtu), as well as the Lucid Hydra, a multi-GPU PCI-E bridge which wasn't really that good. This time, however, they have been at work on something special.
Lucid says that by taking into account human responsiveness as well as screen refresh rates, they are able to process things more efficiently, but not in the way you might think. They don't speed up rendering; instead they skip rendering things you would never see, and they also move some of the dGPU's tasks to the iGPU. This is called HyperFormance. They also have a new VSync option called Virtual Vsync, which allows VSync at up to 120FPS.

Basically, MVP allows your iGPU to work in tandem with your dGPU, but not as in SLI. In fact, increasing the iGPU frequency or memory allocation doesn't help the MVP scores at ALL! So they aren't using the iGPU to process the actual rendering; rather, the iGPU is given the output buffering task, and the program removes frames from the processing queue which the user will never see on screen. MVP takes control of the information flowing to the dGPU, as well as what the dGPU sends back to the CPU and iGPU, so that the dGPU can be better managed. Here is my own diagram of how it works:

The iGPU and CPU first determine what to process, what not to process, and in what order. Those frames are then sent to the dGPU to be processed. Since the iGPU has calculated which frames it should get back and when, it then handles the output buffer, and the image is displayed on the screen. Because the iGPU and CPU control the flow of information to and from the dGPU at all times, they essentially make the dGPU the slave and the iGPU and CPU the master, which in turn allows hierarchical organization of the information being sent upstream and downstream. This should lead to much more efficient processing; however, this "peer to peer", really more "master-slave", communication is going to cause some microstutter. We will take a look at that later.
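The flow in my diagram can be sketched as a toy pipeline. To be clear, this is my own conceptual illustration, not Lucid's actual algorithm; the function names and the simple "keep every other frame" rule are invented for the example.

```python
# Conceptual sketch of the master-slave flow described above (NOT Lucid's
# real code): the CPU+iGPU decide which frames the dGPU should render,
# the dGPU renders only those, and the iGPU assembles the output buffer.

def select_frames(frames, keep_every=2):
    """Master side: pick the subset of frames worth rendering.
    The keep-every-other rule here is an arbitrary stand-in."""
    return [f for i, f in enumerate(frames) if i % keep_every == 0]

def dgpu_render(frames):
    """Slave side: render only what the master queued."""
    return [f"rendered:{f}" for f in frames]

def igpu_present(rendered):
    """iGPU handles the output buffer; returns frames actually shown."""
    return len(rendered)

frames = list(range(10))
queued = select_frames(frames)   # CPU+iGPU cull half the queue up front
shown = igpu_present(dgpu_render(queued))
print(shown)  # only 5 of the original 10 frames ever reach the dGPU
```

The key point the sketch captures is that the cull happens before the dGPU sees the work, which is why the dGPU finishes the scene sooner without rendering any individual frame faster.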
HyperFormance, because of what it does, greatly increases the measured FPS: while individual frames aren't being rendered faster, scenes as a whole are being completed at a faster rate. So instead of rendering all 2000 frames of a scene, the GPU will only render 1000 or so; if the GPU renders those at the same rate as it did before, the scene is finished in half the time.

MVP takes your responsiveness into account and uses a pretty good algorithm to do this, as picture quality doesn't take much of a hit; at least I couldn't see the difference. Below I have added a picture. See if you can tell which is with MVP and which is without, noting that this is the first image I saw when the level started. I will reveal which is which later.

I don't expect this program to be perfect; I expect it to have flaws. But why would the picture be worse for a given frame? Is the GPU rendering worse than it normally would? The answer is no. However, the way the frames are put together can allow issues such as tearing, and that is where Virtual Vsync comes into play. I left Virtual Vsync on, and I guess it works in the background, because my FPS in this game at the resolution I used was extremely high, easily in the hundreds, yet I had no tearing whatsoever.

In the benchmarks above we saw the huge performance gains from MVP, so for my analysis I used one of the games that worked flawlessly with it: Call of Duty: Modern Warfare 3. In my mind, HyperFormance, the mode of the program which allows for this removal of frames, is like a broader implementation of Z-Culling, an NVIDIA technology which prevents pixels the user will never see from being rendered. In this case, whole frames are being eliminated.
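The scene-rate arithmetic above works out like this. A worked example with made-up numbers, assuming (as the text says) that the dGPU's per-frame render rate is unchanged:

```python
# Worked example of the FPS math above: if the dGPU renders at the same
# per-frame rate but only renders half the frames, the scene finishes in
# half the time, so the *measured* FPS doubles even though no single
# frame renders any faster. Numbers are illustrative.

frames_total = 2000     # frames in the scene
render_rate = 100       # frames the dGPU completes per second

time_without = frames_total / render_rate        # 2000/100 = 20.0 s
time_with = (frames_total // 2) / render_rate    # 1000/100 = 10.0 s

fps_without = frames_total / time_without        # 100.0 FPS measured
fps_with = frames_total / time_with              # 200.0 FPS measured

print(fps_without, fps_with)  # 100.0 200.0
```

This is why the FPS counter jumps so dramatically: the counter sees the whole scene complete in half the time, not faster individual frames.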
This huge rise in FPS exploits weaknesses in many popular benchmarking programs such as 3DMark, which angers not only 3DMark's makers, but also makes SLI and CrossFireX look unneeded, even though MVP doesn't fully support SLI or CrossFireX. The pure FPS number gives the user a feeling of "OMG that is damn fast!!!" But that FPS number isn't always as accurate as it could be, and it doesn't account for issues with the removal of frames. Sometimes there will be an error and the previous frame will be displayed again instead of the frame that was supposed to display; Lucid says this is a bug.

Now, there is a slight amount of microstuttering, as we said before, because of the master-slave communication, as I like to call it. There is a lot of it going on in the background, as you would expect with dual GPUs (iGPU and dGPU), which means two things. First, there can be large spikes of performance followed by low drops, though we don't seem to see this effect in games. Second, there is actual master-slave communication going on between the dGPU and the iGPU+CPU, which means the iGPU isn't being used just for show, but is really handling sorting tasks which need a GPU but which the dGPU isn't bothered with.

To test this we use a program made by a user on XS: a small DOS script which takes the FRAPS output of gameplay and gives you a percentage microstuttering index and a corrected average FPS. It takes the microstuttering into account and gives you a more normalized frame rate, closer to what you actually witness.

Here are the FPS graphed in Excel. I used a single GTX 570, then turned on HyperFormance, and then used GTX 570s in SLI. Here are the results from his program, along with a table organizing them and the percent increase over a single card: It seems that when the average FPS is normalized, MVP drops to just below the FPS I got with SLI.
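The XS user's exact formula isn't published here, so the following is a hedged reconstruction of the idea rather than his actual script: measure frame-to-frame time variation as a percentage (the microstutter index), then scale the average FPS down by that index to get a "normalized" frame rate. Both function names and the scaling rule are my own invention for illustration.

```python
# Hedged reconstruction (not the actual XS script) of a microstutter
# index computed from a FRAPS frametimes log: frame-to-frame variation
# as a % of the mean frametime, plus a stutter-discounted average FPS.

def microstutter_index(frametimes_ms):
    """Mean absolute change between consecutive frametimes, as a % of
    the mean frametime. 0% means perfectly even frame pacing."""
    diffs = [abs(b - a) for a, b in zip(frametimes_ms, frametimes_ms[1:])]
    mean_ft = sum(frametimes_ms) / len(frametimes_ms)
    return 100.0 * (sum(diffs) / len(diffs)) / mean_ft

def normalized_fps(frametimes_ms):
    """Average FPS scaled down by the stutter index, so uneven pacing
    yields a lower 'perceived' frame rate."""
    avg_fps = 1000.0 / (sum(frametimes_ms) / len(frametimes_ms))
    return avg_fps * (1.0 - microstutter_index(frametimes_ms) / 100.0)

even = [10.0] * 6          # perfectly paced: a steady 100 FPS
uneven = [8.0, 12.0] * 3   # same 100 FPS average, but stuttery pacing
print(round(microstutter_index(even), 1))    # 0.0
print(round(microstutter_index(uneven), 1))  # 40.0
print(round(normalized_fps(uneven), 1))      # 60.0
```

Note how two logs with the identical average FPS produce very different normalized numbers, which is exactly why a raw FPS average over-states what you actually experience.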
That, to me, is exactly where MVP should be chiming in: it shouldn't be faster than dual GPUs, but it should be a lot faster than a single one; being way above both single and dual GPU is a bit unbelievable. This type of analysis makes MVP's FPS more believable, so I would encourage 3DMark to create something like this for when it detects MVP. To me that is more of a real FPS number, which in my opinion makes a lot more sense than one that is sky high.

The program takes the frame-to-frame differences and averages them out; even though the effects of the microstutter aren't noticeable, that index is pretty high. The communication overhead still exists, but it behaves as master-slave rather than true peer-to-peer. I did run the tests a bunch of times, and the index ranged from as low as 25% to as high as 65%, the only difference being how I played and what I did in the game. The results above are from a steady run of gameplay in which little variance could be observed.

So folks, if you want to see MVP's impact: above is MVP without having its FPS adjusted by the program I used. By the way, here are those pictures:

Bottom Line:

Lucid MVP isn't a gimmick; it isn't just a piece of software manipulating FPS, and it isn't just another broken piece of software from Lucid. MVP really is something special. It is part of the evolution of the iGPU, and is needed for further advances in this area of computer engineering; not because of the high FPS rating, but because of what it stands for and what it does. So please don't be offended by its high FPS, don't be mad that it can flat-out beat your 7970 in FPS, and don't be sad; instead, be glad that we are being provided software that has accomplished what no GPU manufacturer has been able to. This type of software and thinking benefits us! For once it doesn't benefit the big GPU companies, yet it still increases GPU performance.
This is the kind of software we should be seeing from small companies like LucidLogix. It is a great idea which they have executed very well: they have been upfront about issues, asked users to report problems, and honestly made a program that does what it says. So try it out and see how you like it. Lucid seems to be doing a good job of taking feedback, so if you have issues, please report them!