Tom Phoenix said:
Who said I was talking solely about the GPU? We need a standard set of system specifications (meaning, CPU, GPU, RAM...the whole shebang) that hardware manufacturers and game developers would follow for a certain number of years before switching over to a new standard.
Plus, the fact that all GPUs are DirectX compliant means jack squat. There are plenty of graphics cards that support any given DirectX version, and even more potential CPUs, RAM modules, motherboards, etc. they can be combined with — potentially resulting in technical issues, and certainly resulting in confusion for less technically savvy customers.
The fact is, if we want PC gaming to be competitive with console gaming, there is little choice but to make purchasing and using PC hardware and software as easy, or nearly as easy, as purchasing and using console hardware and software. It won't be an easy undertaking, but it has to be done if PC gaming is to have a future.
So if the game runs like crap, lower the settings. That's how it has always been. And all video cards are backward compatible.
The main problem is that people don't know how to take care of their computers. They corrupt their systems, then they can't install a game properly, and they get upset when they can't fix it.
Patrunic said:
TBH, most people on here claiming that a CPU/GPU combination is a positive thing for performance are, quite honestly, wrong on so many levels. Look at it this way: a CPU's architecture is designed to maximise the efficiency of the CPU alone. Attach a GPU to that, and suddenly you have a bottleneck for your whole system, because graphics output and every other process have to go through the same device.
So in order to get a decent level of performance (I'm defining decent as 40+ FPS), you would need to increase the CPU from its original clock (let's say an i7 950, for example) of 3.06 GHz up to 4.36 GHz to ensure there was no drop in performance due to the GPU running at 1.30 GHz.
As already mentioned, the heat generated by the voltage required to keep this stable would be astronomical, and drastically different heat-dissipation methods would be required, as standard HSF (heatsink and fan) systems simply cannot deal with these temperatures.
While a combined CPU/GPU provides a cheaper and simpler alternative to PCI-E cards, current technology will not allow it to be competitive in the PC gaming industry given the level of graphics nowadays. So while a console-like experience could be had from this CPU/GPU combo, it would be that of many years past, not a current-day experience.
It just sounds like you have no idea what you're talking about. Intel already integrated the GPU with the CPU in the previous generation, and it was fine. It ran like crap for games, but it was fine.
The first issue you mentioned actually did happen. They moved the QPI around to accommodate traffic going back and forth between parts, which slowed it down. But now it is all designed as one part, with a ring bus built for massive scaling of processing cores, so the QPI is back in place and up to full speed, and the graphics are much faster as well.
Having the GPU sitting on-die with the processor is actually a huge advantage: there is virtually no travel time required for one to speak with the other. Lower latency means faster. That is how the Intel GPU is as fast as it is now. It's still kind of slow, but the GPU part is actually very small (read: not going to melt down your processor), and if it were back on another part of the motherboard it would take a drastic speed dive. Lots of stuff has been moving on-die with the CPU for speed and efficiency.