
AMD "Fusion" chips expected around Q3/2011



Konrad
09-09-2010, 07:49 AM
AMD is developing their new Fusion family of processors.

I'll admit I don't know a whole lot about these. Mostly just what I've read (or downloaded) from AMD (http://sites.amd.com/us/fusion/APU/Pages/fusion.aspx) and Wikipedia (http://en.wikipedia.org/wiki/AMD_Fusion). I've read plenty of other stuff, but most of the new material seems to really be rumours and speculation.

In a nutshell, it looks like the desktop version ("Bulldozer" or "Llano", I think) will have four cores, while the laptop part ("Bobcat") will have two. The difference is that each of these APUs will actually pair K10-class CPU cores with an ATI/AMD DX11 GPU on the same die, working in tandem. They'll require a mobo with a new "AM3+" socket; it seems unlikely they'll be at all compatible with existing AM3 sockets.

My immediate thought was "wow, crazy fps rates!"; my next thought was "wow, crazier fps rates with multi PCIe cards!"; then "wow, teh killer pwnage fps rates with these new cores also used on those multi PCIe cards!"; and finally "how will these affect things like GPGPU physics processing, where stream processors already have a big advantage?". Whoa ... normal and stream processors in the same package, sharing bandwidth through the same cache at full on-die speeds (and therefore consuming less bandwidth across the NB-PCIe bus).

I wonder if OSs and software (aside from games) will actually be able to take advantage of this. How much of a change will we see running existing stuff, like Windows OS?

Anyone got any good info or links?

Drum Thumper
09-09-2010, 09:47 AM
Interesting tech, for sure, but other than games, media editing/creation, CAD/CAM programs, etc etc etc, I, for one, doubt the rest of the computing world will adapt much to this. How many other times have we seen 'the next big thing' fizzle out?

Konrad
09-09-2010, 10:03 AM
I'm really gunning on this one, though. My logic is that raw CPU has been overkill for years (how many GHz do you really need to run a word processor or browser?) while GPU functions have become bottlenecks (look at post-XP Windows bloat). Combining both parts into the main processor package should be huge: no more bandwidth chokepoints as signals get bussed all over the mobo.

And like I sez above ... It'll be real purty! Lookit the advantages in GPGPU, physics rendering, and all that!

Some serious volcano heat though. Hope that surface area fits into AM socket dimensions. Overclock gains might really suck without extreme (LN2-ish) cooling.

x88x
09-09-2010, 07:59 PM
I haven't seen much about this yet, but I think it's more targeted towards low-end applications, kinda like Intel's Westmere stuff. It makes mobos cheaper, since manufacturers only have to put on the hookups to interface with the on-die GPU rather than buy an actual GPU to have integrated graphics. It's all a play to lower the entry-level price point, and another step towards a SoC. I'm expecting something along the lines of a 54xx-level GPU, considering it'll probably share normal system RAM with the CPU, it'll definitely share a power/thermal envelope with the CPU, and (according to the wiki page, anyway) the Bulldozers are gonna top out at 100W power consumption ... which is lower than some of the top-end CPUs now, so with a GPU in there as well it's gotta be pretty lightweight.

Twigsoffury
09-14-2010, 06:54 PM
I'm really gunning on this one, though. My logic is that raw CPU has been overkill for years

Should check out the trial for Battleground Europe

3.4GHz quad-core Phenom IIs and i7-series processors lag on that game. Die-hard players regularly overclock their processors as fast as they can go just to get acceptable frame rates.

400+ units (players) all shooting in the same neighbourhood, while a dozen or two fighter planes and bombers fight and bomb overhead and mow down the infantry and tanks on the ground, can require some serious processing power.

x88x
09-14-2010, 07:21 PM
Should check out the trial for Battleground Europe

3.4GHz quad-core Phenom IIs and i7-series processors lag on that game. Die-hard players regularly overclock their processors as fast as they can go just to get acceptable frame rates.

400+ units (players) all shooting in the same neighbourhood, while a dozen or two fighter planes and bombers fight and bomb overhead and mow down the infantry and tanks on the ground, can require some serious processing power.

Sounds like somebody should have invested in a GPGPU physics engine for their game. :P

Konrad
09-15-2010, 03:11 AM
@x88x

Hardcore gamers buy physics cards?

Or do you mean the developers should have licensed a software physics engine?

x88x
09-15-2010, 03:19 AM
Or do you mean the developers should have licensed a software physics engine?

This.

And actually, btw, since nVidia bought the PhysX stuff, they just ported it all to CUDA, so any nVidia GPU 8xxx and later supports PhysX. Though, I would really like to see a company develop a good physics engine in OpenCL, that way AMD cards can get in on the love. :D
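
Just to give a feel for what a GPGPU physics engine actually pushes to the card, here's a totally hypothetical sketch using the PyOpenCL bindings (the kernel, buffer names, and numbers are mine for illustration, not from PhysX or any real engine): one integration step where every particle gets its own work-item, which is exactly the kind of embarrassingly parallel job stream processors chew through. And since it's OpenCL rather than CUDA, the same code would run on AMD or nVidia hardware.

# Hypothetical sketch: one integration step for a batch of particles,
# written against OpenCL so any vendor's GPU (or even a CPU device) can run it.
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void integrate(__global float4 *pos,
                        __global float4 *vel,
                        const float dt)
{
    int i = get_global_id(0);
    vel[i].y -= 9.81f * dt;   /* gravity only; a real engine would resolve collisions too */
    pos[i]   += vel[i] * dt;
}
"""

ctx   = cl.create_some_context()         # grabs whatever OpenCL device the drivers expose
queue = cl.CommandQueue(ctx)
prog  = cl.Program(ctx, KERNEL_SRC).build()

n   = 100000                              # one work-item per particle
pos = np.zeros((n, 4), dtype=np.float32)
vel = np.random.rand(n, 4).astype(np.float32)

mf      = cl.mem_flags
pos_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=pos)
vel_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=vel)

prog.integrate(queue, (n,), None, pos_buf, vel_buf, np.float32(1.0 / 60.0))
cl.enqueue_copy(queue, pos, pos_buf)      # pull the updated positions back to host RAM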

Konrad
09-15-2010, 04:00 AM
As I understand it, AMD cards are already designed with some Havok optimization. nVidia most likely has begun doing the same. Likewise, Havok has begun working on optimizing their code to take advantage of the quirks and features of these GPU architectures. I don't think they're really into OpenGL because their focus is on developing their own (semi-proprietary) APIs.

We use Havok-based CADs at work (rarely, for modeling insanely complex metallic fluid/gas chemistries). Our purposes (and supporting code/data, libraries, etc) are completely unconcerned with the sorts of (visual) effects that are pushed in gaming.

x88x
09-15-2010, 12:33 PM
Hmm, it would be cool if a widely used engine like Havok does end up supporting both architectures.


I don't think they're really into OpenGL because their focus is on developing their own (semi-proprietary) APIs.

OpenCL (http://en.wikipedia.org/wiki/OpenCL), not OpenGL (http://en.wikipedia.org/wiki/OpenGL). Same group, but OpenCL is a cross-architecture GPGPU framework. You make a good point though. If the company's entire purpose is making such software, they can afford to spend the considerable additional time and effort to write lower-level code optimized for each architecture.
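
And just to show what I mean by cross-architecture: the exact same host code enumerates whatever OpenCL platforms are installed, whether that's AMD's Stream SDK or nVidia's CUDA toolkit. A quick sketch with the PyOpenCL bindings (same hypothetical setup as the snippet above):

# List every OpenCL platform and device the installed drivers expose;
# the same script works unchanged on AMD, nVidia, or CPU-only implementations.
import pyopencl as cl

for platform in cl.get_platforms():
    print(platform.name, "-", platform.vendor)
    for device in platform.get_devices():
        print("   ", device.name, cl.device_type.to_string(device.type))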