5

I'm thinking of upgrading my computer from an AMD Phenom II X4 955BE with an AMD Radeon HD 6800 graphics card (not integrated) to an Intel Core i7 3770. As I have no knowledge of integrated graphics, my question is: what happens to the computing power when I'm not using the CPU's integrated HD 4000 graphics? Does that mean the CPU will run faster than it would if I relied on it?

Also, which is better: the CPU's built-in HD 4000 or my Radeon graphics card?

I am mostly interested in content creation: Adobe After Effects, 3D rendering, etc. I'm not too bothered about gaming performance. I will be using the spare parts of this build and older systems to make a second computer for network renders, so what would be the advantages of keeping the Radeon with the current system for that?

Dave
Matt
  • Here are some benchmarks: http://reviews.cnet.com/8301-33642_7-57418061-292/at-long-last-a-credible-3d-gaming-chip-from-intel/ – BJ292 May 05 '12 at 18:32
  • Just wanted to let you all know I am running dual monitors: monitor 1 (DVI) on a GeForce GTX 460 and the second monitor (HDMI) on the Intel HD 4000, on an Asus P8Z77-V LK. My GTX 460 is running 10 °C cooler now that it's in single-display mode :-) So yes, you can run it like having 2 discrete cards. I tend to keep video on the HD 4000 and games exclusive to the GTX 460. I hope this helps; I had to try it because I couldn't find any definite info on this type of setup. My IGP supports 3 monitors on Ivy Bridge, not 2 as stated. –  Oct 18 '12 at 17:35
  • I think the fact that your question is basically "What's faster, an Intel HD4000 or a Radeon HD 6800?" makes this question off-topic for Super User, as this question is unlikely to be relevant in the future. You can find many benchmarks on various websites comparing the two graphics cards under various scenarios, including in the Adobe suite. IMHO, if the HD4000 is fast *enough* for your needs (and truly, only you can determine that, so why are you asking us?), then I would stick with that; while adding a discrete GPU *will* improve rendering performance, it will probably consume more power. – Breakthrough Jul 15 '13 at 15:02

4 Answers

7

Just to make sure you know... you won't just be replacing the processor. You'd be replacing the motherboard as well. No, don't get insulted. I'm not assuming that you didn't know this. You didn't mention it, so I did.

What happens when you put a dedicated video card in a system with an integrated video card? Well... let's put it this way. Let's assume you choose Windows. You install Windows on a computer with an integrated GPU (whether it's in the chipset or on the CPU). You install the drivers and get everything working nicely. You then decide to add a dedicated video card. So, you turn off the computer, you plug the card into the slot, and you turn it on. What happens? Depending on the settings in the BIOS, either the card is automatically detected as the new primary display device, or it is ignored as the primary during the boot sequence (since the onboard video was previously the primary). However, the BIOS won't automatically disable the onboard video. Once you are back in Windows, you will now have two video adapters available.

You can go into the BIOS and disable the onboard video, which will free up any shared video memory. Right there is the main performance difference. The onboard video chipset borrows some of your system RAM and makes it unavailable to the rest of the system. Using a dedicated video card frees that RAM up for the system.
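If you want to sanity-check what Windows actually sees once both adapters are in, a quick sketch like the one below lists every video adapter and the memory each one reports. This is just an illustration, assuming Python 3 on Windows and that the WMIC command-line tool is still present on your install:

```python
import subprocess

# Ask Windows (via WMIC) which video adapters are present and how much
# memory each one reports. Per the point above, an onboard/integrated GPU's
# memory is carved out of system RAM, while a discrete card reports its
# own on-board memory.
output = subprocess.check_output(
    ["wmic", "path", "Win32_VideoController", "get", "Name,AdapterRAM"],
    text=True,
)
print(output)
```

If the onboard adapter drops out of that list after you disable it in the BIOS, the RAM it was borrowing is back in the general pool.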

It is up for debate whether the integrated GPU steals CPU/processing power. On the one hand, empirical evidence gathered while watching a CPU meter says it does, yet there is no documentation or benchmarks to back that up. As has been said, the only difference between older integrated-chipset motherboards and these new Intel/AMD chips is that the GPU is built into the processor instead of the motherboard. The processor remains as powerful as it is, regardless of whether you use the integrated video.

Which is better? Your Radeon 6800. By Far.

For what you do, having as much RAM as possible is always a good thing... so not losing any of it to shared video memory is a good thing. Also, you'd see better performance in your 3D rendering program with the Radeon 6800 (not necessarily faster rendering, but a more responsive display while you are working).

Bon Gart
  • The integrated video most certainly does steal CPU/processing power. In fact, one of the main benefits of a video card (over embedded graphics) is that many tasks previously done by the CPU can now be done by the GPU. The HD4000 is the first Intel integrated graphics solution to provide any significant offloading from the CPU with DXVA. The previous generations made the CPU do pretty much everything. (Also, the main issue with shared RAM is not the loss of the RAM but the loss of the RAM *bandwidth* as the CPU and GPU place loads on the memory controller.) – David Schwartz May 05 '12 at 15:37
  • 3
    I can't find any benchmarks that back that up. Every benchmark I can find for a processor that includes an integrated GPU does not also provide a benchmark for when that GPU is disabled. – Bon Gart May 05 '12 at 15:49
  • You're saying you don't believe that a high-end video card offloads more tasks from the CPU than an integrated graphics solution does? Many integrated graphics solutions from Intel [left *all* transform and lighting to the CPU](http://www.intel.com/support/graphics/sb/cs-011910.htm). – David Schwartz May 05 '12 at 16:31
  • 1
    I'm saying I can't find anyone who has tested this by first benching a CPU with an integrated GPU, and then disabling it and benching it again... and then published their results. – Bon Gart May 05 '12 at 16:52
  • I can't find any such benchmarks either. The biggest difference would be seen when doing graphics-intensive operations, as they demand the most of the CPU with integrated graphics solutions. But then it would be hard to separate the benefits of the faster GPU from the lower CPU load. I suppose the best test would be to lock the frame rate the same and measure CPU load with an integrated GPU and with a discrete GPU. But the common-sense argument is that people with integrated GPUs always find their CPUs maxed when they're getting low frame rates. – David Schwartz May 05 '12 at 17:15
  • Common sense also tells me that something this significant would have been documented. Intel has had GPU on the CPU since they came out with the 2nd gen processors... and NO ONE in all the benchmarks that have been done brings up the computing power lost when one uses the GPU? There. Fixed the answer for you. – Bon Gart May 05 '12 at 17:22
  • @BonGart is this: http://www.legitreviews.com/article/934/11/ what you are looking for? With the dedicated GPU disabled, CPU usage is 100%, and with it enabled it's in the 10s and 20s. Granted, it's an Atom, but you need a low-performance processor to show the difference – Akash May 05 '12 at 17:44
  • see the note at the bottom about CUDA as well – Akash May 05 '12 at 17:46
  • @Akash it's close, but the same video chipset was used in both cases... hardware acceleration vs. software acceleration. What about CPU benchmarks where you disable the IGP and use a dedicated card vs. the IGP? Say... both tests using Hardware acceleration? – Bon Gart May 05 '12 at 18:03
  • @BonGart, well, the IGP supports fewer video codecs so will be switching to software rendering more often as compared to a dedicated GPU. – Akash May 05 '12 at 18:10
  • I'll restate my opinion: an IGP supports fewer features in hardware than a dedicated GPU. This leads to the unsupported features executing on the CPU when the IGP is used, thus reducing CPU performance. As you can see in this graph: http://www.elitebastards.com/hanners/games/batman-arkham/batman-1920.png the performance impact of enabling PhysX is much greater on the ATI card than on the NVIDIA card, because the ATI card offloads the PhysX processing to the CPU http://www.elitebastards.com/index.php?option=com_content&view=article&id=842&Itemid=27&limitstart=3 – Akash May 05 '12 at 18:13
  • 1
    Again, you are showing examples where the differences come from specifically giving the CPU more work. It's not what I am saying. What I am saying is there is no documentation that supports a CPU running faster when the IGP has been disabled and replaced with a dedicated GPU. – Bon Gart May 05 '12 at 18:21
  • 2
    Assuming that the work is supported by the IGP, any difference in performance comes from the increased heat production restricting the CPU from going into full turbo mode. http://www.intel.com/Assets/PDF/whitepaper/323324.pdf Ref fig 1 page 7 onwards. It shows how the load on IGP affects the CPU performance. – Akash May 05 '12 at 18:50
  • @Akash just note that that "power wall" is only going to change drastically depending on the CPU itself (and its rated TDP/power consumption), and the CPU's operating conditions (esp. voltage/frequency). While I agree that under extreme conditions (read: overclocked) it's probably possible to hit the TDP limit and force the chip to throttle either the CPU or IGP, it's unlikely to happen at any stock speeds/voltages. But indeed, for any "serious" system, this is something I would at *least* take into consideration. – Breakthrough Jul 15 '13 at 15:14
2

Intel's HD 4000 and AMD's HD 6550D/6530D/6410D are basically the same as any other integrated graphics processor. The only difference with these is that they're on the CPU rather than part of the chipset. So the same limitations apply.

If you install a discrete graphics card, then the IGP might or might not be disabled (depending on your motherboard and BIOS settings). Your CPU/APU might run cooler, which in itself might make it run faster (or allow it to be overclocked higher), and draw less power. But the CPU won't be able to make use of the GPU cores to do CPU calculations.

If your computer speeds up, it'll primarily be because:

  • your dedicated video card is much faster than what an IGP can offer
  • you've added a bunch of additional dedicated video memory so that all of your main system memory can be freed up for the CPU

That's why most people see an IGP, whether on the CPU/APU or the chipset, as a money-saving solution for casual gamers/computer users rather than a performance enhancement for serious gamers or power users. Unlike a CPU's internal SIMD extensions (which are part of the CPU's own instruction set), GPUs can't easily be used to offload CPU work; even GPGPUs need programs specifically written to take advantage of them. And as of yet, I don't think there are any technologies that let you combine an IGP + discrete video card the way you can use dual video cards to power 2 monitors (though Ivy Bridge by itself supports 3 displays, as listed on Intel's website).
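Just to make the GPGPU point concrete, here's a minimal sketch of what "specifically written to take advantage of them" looks like. It assumes the pyopencl Python bindings and a working OpenCL driver for whichever GPU (IGP or discrete) is present; it's an illustration of the idea, not something your existing CPU-bound software will do automatically:

```python
import numpy as np
import pyopencl as cl

# Two ordinary arrays living in system RAM.
a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

# Pick an OpenCL device (could be the IGP or the discrete card).
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# Data has to be copied into buffers the device can reach.
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel must be written explicitly -- the GPU won't run plain CPU code.
prg = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

# Copy the result back to system RAM and verify it.
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
print(np.allclose(out, a + b))
```

Unless an application has been written (or rewritten) along these lines, the GPU just sits there as far as general computation is concerned, which is the point above.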

Lèse majesté
1

There may or may not be a speedup

Really depends on your cooling setup.

Basically, not using IGP means that the processor is generating less heat and thus is more likely to reach the maximum turbo speeds.

Now, if you already have a good cooling setup that allows the CPU to reach and sustain max turbo for extended periods, there won't be a difference. However, if your cooling setup is insufficient to deal with max turbo loads while the IGP is working, you will see an increase in performance.

Akash
1

I'm surprised that nobody has mentioned that some Z77 motherboards include a feature called LucidLogix Virtu MVP. This component combines the power of the discrete video card with that of the iGPU (onboard): it uses the iGPU's processing power to offload some graphics tasks from the discrete video card, thus improving performance. I haven't tested it myself, but it has been reviewed by Bjorn3D: http://www.bjorn3d.com/2012/10/asus-maximus-v-extreme/

It looks like a nice added value that comes with high-end motherboards (I have the Asus P8Z77-V). But, as always with such exotic technologies, you have to look out for the cases where they make things worse instead of better. In the above review you can see this in the Metro benchmark's minimum-FPS results, which is unfortunate, as that is exactly the kind of game that should benefit from any performance increase it can get (it brings most video cards to their knees). I'm going to test it anyway.

TameU