
This is the static archive of the Massassi Forums. The forums are closed indefinitely. Thanks for all the memories!

You can also download Super Old Archived Message Boards from when Massassi first started.

"View" counts are as of the day the forums were archived, and will no longer increase.

Q&A with my new rig - Update!!
2012-01-13, 5:19 PM #41
Originally posted by Cool Matty:
It has nothing to do with tuning. Game performance is 95% video card, especially at higher resolutions.
Let me elaborate on this.

All games fundamentally do two things: they update the simulation (CPU only) and they render the environment (some CPU, but mostly GPU). In order for a game to look convincing to the human eye you need to render it 30 times a second or more, so a powerful GPU is really important. On the other hand, you really don't need to update the simulation very often. Depending on the game you could be pumping out 20 frames for every simulation timestep.

We can then model a game with the following (greatly simplified) equation:

frame_cycles = (physics_cycles_per_second / fps) + render_setup_cycles_per_frame + (gpu_stall_time_per_frame * clock_frequency)

which eventually gives us:

fps = (clock_frequency - physics_cycles_per_second) / (render_setup_cycles_per_frame + (gpu_stall_time_per_frame * clock_frequency))

A little bit of real analysis shows us that, as clock frequency approaches infinity, fps monotonically converges on (1 / gpu_stall_time_per_frame). In other words, the relationship between performance and game FPS is not linear, the degree to which this relationship is not linear depends on a relatively huge number of factors, and arbitrarily large increases in actual performance can be manifested as negligibly small increases in FPS.
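The diminishing returns in this model are easy to see numerically. Here's a minimal sketch of the fps equation above; the physics, render-setup, and stall constants are made-up (hypothetical) workload numbers purely for illustration:

```python
# Sketch of the simplified frame-time model from above, with hypothetical
# workload constants. Shows FPS converging toward 1 / gpu_stall_time_per_frame
# as CPU clock frequency grows.

def fps(clock_hz, physics_cps=2e8, setup_cycles=5e6, gpu_stall_s=2.8e-3):
    # fps = (clock - physics_cycles_per_second)
    #       / (render_setup_cycles_per_frame + gpu_stall_time_per_frame * clock)
    return (clock_hz - physics_cps) / (setup_cycles + gpu_stall_s * clock_hz)

for ghz in (2, 3, 4, 8, 100):
    print(f"{ghz:>4} GHz -> {fps(ghz * 1e9):6.1f} fps")

# Doubling the clock from 2 GHz to 4 GHz gains far less than 2x the FPS,
# and the curve flattens toward 1 / 2.8e-3 ~= 357 fps no matter how fast
# the CPU gets.
```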

That's why FPS is broadly a bad benchmark. It's also not representative of actual workloads (most of the CPU's time is spent idle, inside a context switch or waiting for the GPU to empty a buffer) so it's extra bad.
2012-01-13, 7:02 PM #42
CM, I bit my tongue on your last comment about the facts being the facts and that I was just being ignorant. I was in the wrong, considering all the references.

But GFX cards are not 95% of performance in games. I'd say it's closer to 80, at most. Required? Yes. But so is any VGA input. My dual-core Pentium has issues running TF2 on LOW settings (stupid hats), and it's not even ENJOYABLE to play Bad Company 2.

Pop in a quad-core, and BAM! It runs WAY better.

Every extra core increases the playability of those games by quite a bit. Putting a 6970 into my old mobo and dual-core would not have yielded any better framerates for either game. I've upgraded/replaced thrice with different manufacturers and cards on that system, and the performance is the same.

Although I'm not sure how negligible a 6-core (a true 6-core, I mean) versus an 8-core would be.
"Staring into the wall does NOT count as benchmarking."


-Emon
2012-01-13, 7:33 PM #43
Originally posted by lightside:
But GFX cards are not 95% of performance in games. I'd say it's closer to 80, at most. Required? Yes. But so is any VGA input. My dual-core Pentium has issues running TF2 on LOW settings (stupid hats), and it's not even ENJOYABLE to play Bad Company 2.

Pop in a quad-core, and BAM! It runs WAY better.


Uhm, no, that's not how it works. I have said Core 2 Duo, and I play TF2 on max settings. Extra cores don't really help TF2 much at all. In fact, on release, it wasn't even multithreaded. That was added later on. It still has trouble threading beyond 2 cores. Also, if I was able to play TF2 on an AMD Athlon 64 3000+ at low settings, there's no reason you couldn't. That's an ancient, single-core CPU, mind.
Quote:
Every extra core increases the playability of those games by quite a bit. Putting a 6970 into my old mobo and dual-core would not have yielded any better framerates for either game. I've upgraded/replaced thrice with different manufacturers and cards on that system, and the performance is the same.


Except that has absolutely nothing to do with cores, and everything to do with general performance of the CPU. In many games, especially TF2, raw single core CPU power will run circles around a slower multi-core CPU. It is possible to make a game, especially an older one, CPU limited with a stronger video card. But it doesn't take much to reverse that. You can make any game CPU limited by setting the resolution to 640x480. Thing is, by the time it gets to that point, you're almost always looking at 100+ fps, which is just beyond silly.

You seem to think that more cores automatically improves performance significantly across the board. Maybe in a couple decades that'll be closer to the truth, but it sure isn't right now. Only a few specific applications even come close to linear scaling with more cores.

Long story short, unless you're pounding out 1080p footage every day, tons of cores is not going to do much for you. That's part of the reason Intel is in no hurry to release a 6/8 core Sandy Bridge architecture chip. There's just no demand for it. Quad core is only barely used now, as is.
2012-01-13, 7:51 PM #44
this Q&A with my new rig is turing ugly
COUCHMAN IS BACK BABY
2012-01-13, 7:51 PM #45
Originally posted by lightside:
But GFX cards are not 95% of performance in games. I'd say it's closer to 80, at most.
It's anywhere between 0 and 100%. The percentage is variable because it increases as framerate increases.

We can 'guess' sometimes. The first benchmark I found: For L4D2, using the i5-2500 and i5-2300 for comparison (because they are extremely similar) I estimate the GPU needs roughly 2.8e-3 seconds per frame - that's the gpu_stall_time_per_frame variable in my model. On the i5-2500 it works out to 68% of the CPU time. N.b. L4D2 is not representative of typical games; Valve games are widely considered to be CPU bound, so this figure is abnormally low.

Quote:
Although I'm not sure how negligible a 6-core (a true 6-core, I mean) versus an 8-core would be.
Probably not any major differences, even assuming the game is written to use this hardware.

There's this thing called Amdahl's law, which states that, for n cores, every program has some proportion that can be executed in parallel (p) and some proportion that must be executed sequentially (s).

S(n) = 1 / (s + (p/n))

Note that lim n->infty S(n) = 1/s.

This means, as you increase the number of processors to infinity, the performance is capped at however long it takes for one processor to execute the sequential portion.

What does this mean for games? The gpu_stall_time_per_frame variable is a sequential portion of the program. That means, even if a game is written to use arbitrarily many cores, your performance will still converge to the same value.
2012-01-13, 7:55 PM #46
Originally posted by Tracer:
this Q&A with my new rig is turing ugly


They all get that way eventually.
>>untie shoes
2012-01-13, 7:57 PM #47
Originally posted by Tracer:
turing ugly
lol I get it
2012-01-13, 8:19 PM #48
whoops, little typo there

NOT INTENTIONAL
COUCHMAN IS BACK BABY
2012-01-13, 8:22 PM #49
more like it's turing gay hey emirite
2012-01-13, 8:23 PM #50
Far Cry 2: ~84%
Starcraft 2: ~61%
F1 2010: ~63%
H.A.W.X. 2: ~58%
Metro 2033: ~47%
R.U.S.E.: ~63%

All from the same CPU review (Xbit). Unfortunately it's hard to find a CPU review that doesn't artificially lower graphics settings to inflate the variation in CPU performance.
2012-01-13, 8:37 PM #51
That's percentage GPU bound? I'm a little surprised by the Metro 2033 number, that's widely regarded as one of the most punishing GPU games. Would hardware PhysX support throw off your calculations?
My favorite JKDF2 h4x:
EAH XMAS v2
MANIPULATOR GUN
EAH SMOOTH SNIPER
2012-01-13, 8:57 PM #52
Originally posted by EAH_TRISCUIT:
That's percentage GPU bound? I'm a little surprised by the Metro 2033 number, that's widely regarded as one of the most punishing GPU games. Would hardware PhysX support throw off your calculations?
That's the approximate percentage of time a Core i5-2500 spends arguing with the GPU, yeah.

The actual benchmark shows way too much variation for a game that's strongly GPU-bound. Low resolution, no AA, and I have to assume they were using software PhysX based on all of this.

Edit: Basically all I'm doing is calculating the amount of time per frame that can't be explained by clock rate. That's why the CPUs have to be so similar. The only thing that would really throw it off is if AMAT had a major effect on IPC between the two.
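That estimation method can be sketched directly: with two near-identical CPUs, the per-frame time splits into a clock-dependent part (CPU cycles) and a constant remainder (GPU stall), giving two equations in two unknowns. The clock speeds and fps figures below are hypothetical, not taken from the Xbit review:

```python
# Estimate the per-frame time that can't be explained by clock rate.
# Model: 1/fps = cpu_cycles_per_frame / clock + gpu_stall_per_frame.
# Requires two CPUs with very similar microarchitecture so that
# cycles-per-frame is the same for both; only the clocks differ.

def estimate_stall(clock_a, fps_a, clock_b, fps_b):
    t_a, t_b = 1.0 / fps_a, 1.0 / fps_b            # seconds per frame
    # Solve the two linear equations for cycles/frame, then back out
    # the constant (clock-independent) GPU stall time.
    cycles = (t_a - t_b) / (1.0 / clock_a - 1.0 / clock_b)
    stall = t_a - cycles / clock_a
    return cycles, stall

# Hypothetical benchmark: 3.3 GHz part at 95 fps, 2.8 GHz part at 88 fps.
cycles, stall = estimate_stall(3.3e9, 95.0, 2.8e9, 88.0)
print(f"~{cycles:.3e} CPU cycles/frame, ~{stall * 1e3:.2f} ms GPU stall/frame")
```

As noted above, the result is only trustworthy when the two CPUs really do execute the same number of cycles per frame; any IPC difference (e.g. from memory latency) contaminates the stall estimate.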
2012-01-13, 10:11 PM #53
Originally posted by lightside:
My Dual-core Pentium has issues running TF2 on LOW settings (stupid hats)


lolwut? my last PC (AthlonXP 3000+, 1GB PC2700, 7800GS) ran it extremely well on maxed out settings
eat right, exercise, die anyway
2012-01-13, 10:27 PM #54
Quote:
You seem to think that more cores automatically improves performance significantly across the board. Maybe in a couple decades that'll be closer to the truth, but it sure isn't right now. Only a few specific applications even come close to linear scaling with more cores.


Mainly just a quad-core over all else. Like I said, not sure about 6- or 8-cores having that much of an impact. Try GTA4 with a dual-core vs. quad.

And yeah, my old cpu was a Pentium D. :huh:

Before the hats and other crap came out, TF2 ran fine on high. Dunno.


Anywho- yeah, this thread is getting nowhere fast.

How about we close it to give more attention to camhoez?

:awesome:
"Staring into the wall does NOT count as benchmarking."


-Emon
2012-01-14, 9:25 AM #55
but i love Q&A with my new rig
COUCHMAN IS BACK BABY
2012-01-14, 10:02 AM #56
lol

Well, for anyone wondering about my GPU, it IS a 1GB Radeon HD 5770... but not a regular one. It's a low-profile, single-slot solution.
I believe they moved a conductor for better heat management with the low-profile version.

I guess I'll fraps GTA4 on full settings- currently what I play the game at. Love getting in firefights with the fuzz, then running up to a cop and slicing him up like a pizza.
"Staring into the wall does NOT count as benchmarking."


-Emon
2012-01-14, 10:22 AM #57
Q&A with my new rig is one of my favorite massassi segments, too.

It's almost as good as He Went.
>>untie shoes
2012-01-14, 3:36 PM #58
Originally posted by Antony:
Q&A with my new rig is one of my favorite massassi segments, too.


I'm glad I always title my build threads 'Shameless Bragging', rather than 'Q&A'. 'Q&A' invites questions. :)
My favorite JKDF2 h4x:
EAH XMAS v2
MANIPULATOR GUN
EAH SMOOTH SNIPER
2012-01-14, 5:29 PM #59
Yeah, but we all know that you know your **** when it comes to building PCs. Any questions could be responded to by saying "do you remember who you're talking to?"
>>untie shoes
2012-01-14, 6:10 PM #60
Hey, at least I still come to this place! I've been checkin' in for over 10 years! I went with bulldozer cuz I got it for a good (in my book) price, and it was new tech. I chose poorly, I know. But I'm not disappointed with it. If I wanted to brag, I would have bought an i7 with max memory and water-cooling. XD


But in all seriousness, I do appreciate the feedback, no matter how positive or negative. At least some of you are still here and paying attention.
"Staring into the wall does NOT count as benchmarking."


-Emon
2012-01-14, 6:22 PM #61
Well done! I look forward to the next episode of Q&A With My New Rig.
COUCHMAN IS BACK BABY
2012-01-14, 6:31 PM #62
My question for this Q&A is: Do we actually have to have a new rig to post a Q&A with my new rig?
>>untie shoes
2012-01-14, 6:55 PM #63
Q&A with my rig I built almost 6 years ago and made like four threads about

Yeah, I went with an Athlon 3000+ a few months before dual cores got popular, but it runs games made before 2008 alright so it's a good processor. Also I don't know why you guys recommend more than one gig of RAM, my computer boots up with just one gig so it's good enough. Don't worry about asking any questions, I just answered them.
I had a blog. It sucked.