PDA

View Full Version : Intel vs AMD? I don't think so ...



Crazy Buddhist
06-21-2008, 02:26 PM
This thread (http://www.thebestcasescenario.com/forum/showthread.php?t=14571&page=2) about Nvidia's total integration of AGEIA got me thinking about the strange relationships in the computer market and where things are going in the next year or three. So I started googling .... and anyone who knows me knows that usually takes me places ... soo ...

We have Nvidia, who make GPUs and motherboard chipsets and now have their own physics operation too. We have Intel, who as we all know make CPUs and chipsets and are increasingly integrating video capabilities into those chipsets; they also own Havok, the leading physics company and AGEIA's main opposition. And we have AMD, who make CPUs but appear about 18 months behind Intel right now, and who own ATI, who make GPUs that are actually quite good, if only they would wait and release products with working drivers. And ... AMD have just signed a deal with Havok (yes, the Intel subsidiary) to provide full integration of their 3D physics into both AMD CPUs and ATI GPUs.

And then we have the technologies ... and the main one here is "stream processing", which is unleashing the huge computational power of GPUs for more general computational tasks. CUDA, Nvidia's new programming environment, is an extension to C that lets programmers address the GPU directly. Microsoft's DX10 points the same way, exposing ever more general-purpose shader power to coders. And AMD, now siding with Intel's Havok, are clearly going to implement that physics on their hardware too. .....
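
To give a flavour of what "directly addressing the GPU" looks like, here's a minimal CUDA-style sketch - my own illustration, not from any real application, so the names, sizes and launch numbers are made up. Each GPU thread handles one element of an array, which is exactly the kind of work those hundreds of stream processors chew through in parallel:

#include <cstdio>
#include <cuda_runtime.h>

// Kernel: runs on the GPU, one thread per array element.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;                        // 1M floats (arbitrary example size)
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));       // allocate memory on the GPU
    cudaMemset(d_data, 0, n * sizeof(float));     // give it defined contents

    // Launch enough 256-thread blocks to cover the array - this is the
    // "stream processing" bit: thousands of threads in flight at once.
    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();                      // wait for the GPU to finish

    cudaFree(d_data);
    return 0;
}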

... and I started thinking about these inter-relationships and where we are heading, and I decided we are heading towards the CPU and the GPU merging. The CPU(s) could then directly harness the stream (parallel) processing power of the GPU without all the bottlenecks of passing through external buses and controllers, maximising the flexibility of the total processing power without lots of complicated driver and programming layers intervening.

AMD: Own ATI, so no problems in an integrated scenario.
Intel: Some chipsets have increasingly sophisticated graphics, plus they have the Havok guys to help out.
Nvidia: Bit screwed here. They can't team up with AMD, and Intel wouldn't want them. Or are they? Because a little digging later I found this (http://www.tgdaily.com/content/view/37729/135/):

"For GPU computing, future Tegra parts (Tegra 2 is scheduled for 2009, Tegra 3 for 2010) will inevitably include SATA support, which would allow users to put a SSD array directly onto a Tesla GPGPU card and eliminate the need for a motherboard and Intel/AMD CPUs. Indeed, applications of this "jack of all trades" chip with a ridiculously small TDP can deeply impact the already existing desktop, notebook and HPC markets."

"What is Tegra?

To clear out some confusion, let us first stress that Tegra is not a CPU. Neither is it a GPU, or a combination of both with one part dominating the other.

Instead, Tegra is a “system-on-a-chip” (SoC) or “computer-on-a-chip” (CoC). Tegra consists of an ARM11 CPU core, a GoForce (renamed into GeForce ULV) GPU, an image processor (digital camera support), a HD video processor (PureVideo for handhelds), memory (NAND Flash, Mobile DDR), a northbridge (memory controller, display output, HDMI+HDCP, security engine) and a southbridge (USB OTG, UART, external memory card SPI SDIO, etc).

In short, Tegra includes the whole shebang: CPU, graphics and what you traditionally find on a motherboard are squeezed onto a single silicon die. What is particularly impressive about this device is the fact that this chip measures just 144 mm2, which is smaller than a dime and about one quarter the size of the upcoming GeForce graphics chip, which measures 576 mm2, according to our sources."

Who makes Tegra?

"Nvidia’s Tegra hits expands barriers of market segments and puts the company within reach of a new market that currently has a demand of more than 1 billion processors per year. It could be a game-changing move for Nvidia - not just in terms of growth opportunity."

My conclusion is that in the end it will be Nvidia vs AMD and Intel.

Thoughts on a postcard ....

Crazy

Trace
06-22-2008, 02:24 AM
Interesting.
If that is so, nVidia is fighting a losing battle. The two top chip makers against a newcomer? They don't stand a chance.

crenn
06-22-2008, 02:44 AM
nVidia is far from losing. We've seen interesting things happen in the past. Remember the days Intel made better CPUs than AMD... and then AMD made their chips superior?

Each one is targeting different markets. AMD are trying to do an "all-in-one" processor, which is great in theory. Intel are trying to broaden their markets by going into MIDs and the graphics card market. For now, just watch; things will be getting very interesting soon.

Crazy Buddhist
06-22-2008, 03:06 AM
The times will indeed be interesting, Crenn. And Trace, I have a feeling Nvidia is ahead of both Intel and AMD on the silicon manufacturing front ....

... and given that the silicon manufacturing lead is where Intel are beating the pants off AMD ... should Nvidia dive in, in their birthday suit, to make a big splash, IMHO they would beat the pants off both CPU makers. This, I think, is because they have a better grasp of the sea change that is occurring in PC architecture.

Crazy

crenn
06-22-2008, 03:16 AM
On the silicon manufacturing front, Intel and AMD are leading with 45nm ;)

NVidia is beginning to move to 55nm (See 9800GTX+) and all their cards should follow soon.

Crazy Buddhist
06-22-2008, 03:21 AM
AMD are, what, 18 months behind Intel on 45nm?

crenn
06-22-2008, 03:28 AM
It's meant to be phased in this year. I know Intel are moving to 32nm next year (after finishing their new CPU lineup).

Crazy Buddhist
06-22-2008, 03:31 AM
Except ... Intel have delayed some 45nm releases until 2009 because they realise AMD are not competition at the moment. They seem to have delayed the 45nm releases so they don't cannibalise their own 65nm offerings ... which WOULD be their competition.

crenn
06-22-2008, 03:35 AM
The E8xxx and Q9xxx series are all 45nm.

Drum Thumper
06-22-2008, 02:25 PM
Very interesting Crazy, thanks for the info. I'm curious as well how this is going to pan out, especially with VIA introducing x86 architecture chips earlier this year.

Crazy Buddhist
06-22-2008, 02:57 PM

Thanks Drum. Me too. This bit particularly caught my eye:


"What is particularly impressive about this device is the fact that this chip measures just 144 mm2, which is smaller than a dime and about one quarter the size of the upcoming GeForce graphics chip, which measures 576 mm2, according to our sources."

If that chip were scaled up just to the GPU's die size it would have roughly four times the area, and so about three times more transistors than it has now. That could make it about five times more powerful than it is. And it already does everything a not-so-powerful computer does.
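
Rough back-of-the-envelope on that, using the two die areas from the quote above (the assumption that transistor count scales roughly with area on the same process is mine):

#include <cstdio>

int main()
{
    // Die areas quoted in the TG Daily article.
    const double tegra_area_mm2   = 144.0;
    const double geforce_area_mm2 = 576.0;

    // Assume transistor budget scales roughly with die area on the same process node.
    double ratio = geforce_area_mm2 / tegra_area_mm2;   // = 4.0
    printf("A scaled-up Tegra would have about %.0fx the silicon,\n", ratio);
    printf("i.e. roughly %.0fx more transistors than it has today.\n", ratio - 1.0);
    return 0;
}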

Add in three more CPU cores and bunches of stream processors for the physics and computational tasks, then factor in the next two or three years of silicon development, and you could easily have a Crysis-capable computer based on a single chip about 2.5 cm on a side.

You'd need external RAM, HDDs/opticals and a teeny motherboard (probably dictated by the back-panel size rather than what's on the board) - with minimal associated circuitry.

And a damn big cooler.

Crazy

Quakken
06-22-2008, 06:39 PM
That may be pushing the thermodynamic limits of air cooling. Water would have to be used, and even then it could be a push unless there are some serious architecture changes in the near future from Nvidia R&D.

Crazy.

crenn
06-22-2008, 06:54 PM
One thing that can be done with all silicon chips is to have integrated 'cooling channels', which help cool the chip a lot. The research being done is quite interesting (I believe it was IBM who looked at it for cooling server chips).

EDIT: http://forums.vr-zone.com/showpost.php?p=5383871&postcount=825

Drum Thumper
06-22-2008, 07:19 PM

And don't forget, Intel (I think, anyway) is planning on perforating chips in the future to increase cooling performance. There's a thread somewhere in the Tech forum here about it.

Quakken
06-27-2008, 08:08 PM
oh hohohoho! touche!

Perforated chips for better cooling would be B.A. Cooling channels too? I want a perforated chip with cooling channels. Only then will I be happy. Looks like I will be waiting.

did I really need the touche? no. But it's a cool word.

Airbozo
06-27-2008, 09:43 PM
Not sure how this will play out, but I do know that the guy running nVidia is a huge egomaniac. HE spends money and resources on things to suit HIS ambitions, not necessarily the company's.

During a lawsuit many years ago, nVidia was being sued by SGI for stealing IP, specifically the technology behind the current nVidia GPUs. Turns out a comment he made in court sank his case. It went something like: "I am making more money off the IP than SGI, so I should own it...". nVidia lost and ended up having to pay millions to SGI, _and_ still has to license the technology from SGI for many years even if they do not use it. SGI also got the pick of Quadros off the manufacturing line, ensuring that they had the fastest ones on the market. The Quadro deal only lasted about 5 years.


Are you sure about Via's x86? I thought that only AMD held the license to use the x86 technology (besides Intel).

Drum Thumper
06-27-2008, 11:33 PM
@AirBozo:

http://www.via.com.tw/en/resources/pressroom/pressrelease.jsp?press_release_no=1827