Our findings and experiments with the RX 5700 series

Our experiments on the Radeon VII show that the maximum efficiency of the 7nm Vega is reached at an operating point of around 1.5 - 1.6 GHz with a voltage below 1000 mV (the default is around 1.8 GHz at 1080 - 1100 mV). We expect similar behavior from 7nm Navi.

The Radeon RX 5700 'Navi' comes with 1x HDMI 2.0b and 3x DP 1.4 display outputs with Display Stream Compression (DSC) 1.2a support. This will later allow next-gen displays with DSC support to run configurations such as 4K 144Hz without chroma sub-sampling over a single display cable, or even more extreme configurations such as 8K 60Hz or 4K 240Hz.
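To see why DSC is needed for those modes, a back-of-the-envelope bandwidth calculation helps. The sketch below assumes 8 bits per color, RGB (no sub-sampling), and counts only active pixels (real links also carry blanking, so actual requirements are somewhat higher); the 25.92 Gbps figure is the usable DP 1.4 HBR3 payload after 8b/10b encoding overhead.

```python
def raw_bandwidth_gbps(w, h, hz, bpc=8, channels=3):
    # Active-pixel payload only; blanking intervals are ignored, so this
    # is a lower bound on the real link-bandwidth requirement.
    return w * h * hz * bpc * channels / 1e9

DP14_PAYLOAD_GBPS = 25.92  # DP 1.4 HBR3 x4 lanes after 8b/10b overhead

modes = {"4K 144Hz": (3840, 2160, 144),
         "8K 60Hz":  (7680, 4320, 60),
         "4K 240Hz": (3840, 2160, 240)}

for name, mode in modes.items():
    need = raw_bandwidth_gbps(*mode)
    verdict = "needs DSC" if need > DP14_PAYLOAD_GBPS else "fits uncompressed"
    print(f"{name}: {need:.1f} Gbps uncompressed -> {verdict} on DP 1.4")
```

Even this lower bound puts 4K 144Hz RGB (~28.7 Gbps) above what DP 1.4 can carry uncompressed, which is exactly the gap DSC 1.2a closes.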

Looking at the RX 5700 Series specifications above, AMD clearly includes three GPU clock speed ratings: Base, Game, and Boost.

Although it may seem confusing to users, this is not unusual on modern (post-2012) GPUs, which generally include a clever mechanism that adjusts the GPU clock speed depending on circumstances. In short, this mechanism changes the GPU clock speed dynamically based on variables monitored by various sensors (for example: TDP / power draw, GPU temperature, and the load characteristics of the running application).

NVIDIA GPU Boost and AMD PowerTune are implementations of this mechanism.
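The control loop behind such mechanisms can be sketched roughly as below. This is purely illustrative: the real GPU Boost / PowerTune algorithms live in proprietary firmware, and the limit values and step size here are invented for the example (the base/boost figures mirror the RX 5700's rated 1465 / 1725 MHz clocks).

```python
# Illustrative sketch only: NOT AMD's or NVIDIA's actual algorithm.
# Limits, step size, and update cadence are invented for demonstration.
def next_clock_mhz(clock, power_w, temp_c,
                   power_limit_w=180, temp_limit_c=95,
                   base=1465, boost=1725, step=25):
    if power_w > power_limit_w or temp_c > temp_limit_c:
        return max(base, clock - step)   # throttle back toward Base Clock
    return min(boost, clock + step)      # headroom available: boost up

clock = 1465
for power, temp in [(120, 70), (130, 75), (190, 80), (185, 92), (150, 85)]:
    clock = next_clock_mhz(clock, power, temp)
    print(clock)
```

The key takeaway is that the advertised clocks are not fixed frequencies but boundaries of a feedback loop: the GPU climbs toward Boost Clock whenever power and thermal headroom allow, and backs off toward Base Clock when a limit trips.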

In the Navi RX 5700 series, AMD states that:

    Base Clock is the clock speed maintained under unusually heavy loads (such as FurMark stress tests)
    Game Clock is the clock speed commonly achieved under typical gaming loads
    Boost Clock is the maximum / peak clock speed the GPU can reach (before entering overclocking territory)

As an additional note, AMD refers to the 'Boost Clock' value as an 'up to' figure, since fabrication variation determines the maximum boost clock of each individual GPU.

Every chip fabrication process is affected by small variations, which result in chips with slightly different speed potential. On GPUs, this generally means 'good' chips can run at low voltage ratings, while 'bad' chips must be operated at relatively high voltages.

For a brief example, see our experiment on operating-voltage variation in the Radeon RX 480 'Polaris' below:

As you can see in our experiment, our 'worst' RX 480 GPU chip is rated at 1150 mV, while the 'best' chip is rated at 1031 mV. This difference of roughly 120 mV seems small, but it can translate into a slight difference in operating clock speed.
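A first-order estimate shows why that voltage gap matters for power. Dynamic (switching) power in CMOS scales roughly with frequency times voltage squared; leakage is ignored here, so this is only an illustration, not a full power model.

```python
# Dynamic CMOS power scales roughly as P ~ f * V^2.
# At the same clock, the relative power of two chips is (V1/V2)^2.
# Leakage current is ignored; this is a first-order illustration only.
def relative_dynamic_power(v_chip_mv, v_ref_mv):
    return (v_chip_mv / v_ref_mv) ** 2

# Our worst vs best RX 480 sample from the experiment above:
ratio = relative_dynamic_power(1150, 1031)
print(f"~{(ratio - 1) * 100:.0f}% more dynamic power at the same clock")
```

A chip needing 1150 mV instead of 1031 mV draws on the order of 24% more dynamic power at the same clock, which is exactly why it bumps into the power limit sooner.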

When the GPU operates under certain conditions (heavy load, for example), chips that require higher voltage will trigger the card's power limit more frequently, lowering the average clock speed and resulting in a slight performance difference.

We can't yet estimate how large the performance difference from Boost Clock variation will be on the RX 5700 series, but judging from previous GPUs, the difference is generally very small (under 1-2%); it is sometimes visible in synthetic benchmarks, but it barely affects the gaming experience.