Originally posted by Cool Matty:
Let me elaborate on this. It has nothing to do with tuning. Game performance is 95% video card, especially at higher resolutions.
All games fundamentally do two things: they update the simulation (CPU only) and they render the environment (some CPU, but mostly GPU). In order for a game to look convincing to the human eye, you need to render it 30 times a second or more, so a powerful GPU is really important. On the other hand, you really don't need to update the simulation very often. Depending on the game, you could be pumping out 20 frames for every simulation timestep.
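For concreteness, here's a minimal sketch of that split, assuming a fixed-timestep loop; update_simulation, render_frame, and the 20 Hz rate are hypothetical stand-ins, not any particular engine's API:

import time

SIM_DT = 1.0 / 20.0  # simulate 20 times per second (hypothetical rate)

def update_simulation(dt: float) -> None:
    pass  # CPU-only work: physics, AI, game logic

def render_frame(alpha: float) -> None:
    pass  # mostly GPU work; alpha interpolates between the last two sim states

def game_loop(run_seconds: float = 1.0) -> None:
    accumulator = 0.0
    previous = time.perf_counter()
    end_time = previous + run_seconds
    while time.perf_counter() < end_time:
        now = time.perf_counter()
        accumulator += now - previous
        previous = now
        # Catch the simulation up in fixed steps...
        while accumulator >= SIM_DT:
            update_simulation(SIM_DT)
            accumulator -= SIM_DT
        # ...then render once per loop pass, as often as the hardware allows,
        # so many frames can be drawn between consecutive simulation updates.
        render_frame(accumulator / SIM_DT)

if __name__ == "__main__":
    game_loop()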
We can then model a game with the following (greatly simplified) equation:
frame_cycles = (physics_cycles_per_second / fps) + render_setup_cycles_per_frame + (gpu_stall_time_per_frame * clock_frequency)
Combining that with fps = clock_frequency / frame_cycles and solving for fps eventually gives us:
fps = (clock_frequency - physics_cycles_per_second) / (render_setup_cycles_per_frame + (gpu_stall_time_per_frame * clock_frequency))
A little bit of real analysis shows us that, as clock frequency approaches infinity, fps monotonically converges on 1 / gpu_stall_time_per_frame. In other words, the relationship between CPU performance and game FPS is not linear, the degree to which it is not linear depends on a huge number of factors, and arbitrarily large increases in actual performance can show up as negligibly small increases in FPS.
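To make those diminishing returns concrete, here's a small Python sketch that plugs numbers into the fps formula above; every constant is made up, chosen only to make the asymptote visible:

# Hypothetical constants, for illustration only.
PHYSICS_CYCLES_PER_SECOND = 2e8      # CPU cycles spent on simulation per second
RENDER_SETUP_CYCLES_PER_FRAME = 5e6  # CPU cycles preparing each frame
GPU_STALL_TIME_PER_FRAME = 0.012     # seconds the CPU waits on the GPU per frame

def fps(clock_frequency: float) -> float:
    return (clock_frequency - PHYSICS_CYCLES_PER_SECOND) / (
        RENDER_SETUP_CYCLES_PER_FRAME
        + GPU_STALL_TIME_PER_FRAME * clock_frequency
    )

for ghz in (2, 4, 8, 1000):
    print(f"{ghz:>5} GHz -> {fps(ghz * 1e9):6.1f} fps")

# Approximate output: fps creeps toward 1 / 0.012 ≈ 83.3 no matter how fast
# the CPU gets, so doubling the clock buys far less than double the frame rate:
#     2 GHz ->   62.1 fps
#     4 GHz ->   71.7 fps
#     8 GHz ->   77.2 fps
#  1000 GHz ->   83.3 fps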
That's why FPS is broadly a bad CPU benchmark. It's also not representative of actual CPU workloads (most of the CPU's time is spent idle, inside a context switch, or waiting for the GPU to empty a buffer), so it's extra bad.