Monday, January 19, 2009

Sandia: "More chip cores can mean slower supercomputing"


For those of us who have always thought that more processor cores are automatically better, Sandia just released the following press release saying that more may not be so great after all.

Sixteen cores perform barely as well as two for complex applications

ALBUQUERQUE, N.M. — The worldwide attempt to increase the speed of supercomputers merely by increasing the number of processor cores on individual chips unexpectedly worsens performance for many complex applications, Sandia simulations have found.

A Sandia team simulated key algorithms for deriving knowledge from large data sets. The simulations show a significant increase in speed going from two to four cores, but an insignificant increase from four to eight. Exceeding eight cores causes a decrease in speed; sixteen cores perform barely as well as two, and after that a steep decline is registered as more cores are added.

The problem is the lack of memory bandwidth as well as contention between processors over the memory bus available to each processor. (The memory bus is the set of wires used to carry memory addresses and data to and from the system RAM.)...
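
You don't need a supercomputer to watch the bus saturate. Below is a minimal STREAM-style sketch of my own (not Sandia's code; the array size, the kernel, and the gcc -O2 -fopenmp build are all my assumptions). Each OpenMP thread streams through its own slice of two large arrays, so there is no locking or synchronization to blame, yet on a typical desktop the effective bandwidth stops improving after just a few threads.

/* stream_toy.c -- build with: gcc -O2 -fopenmp stream_toy.c -o stream_toy
 * A hypothetical STREAM-style sketch, not Sandia's benchmark: every
 * thread works on a private slice of the arrays, so any scaling wall
 * comes from the shared memory bus, not from contention on locks. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N (1L << 25)                 /* 32M doubles, about 256 MB per array */

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    if (!a || !b) return 1;
    for (long i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

    for (int t = 1; t <= omp_get_max_threads(); t *= 2) {
        omp_set_num_threads(t);
        double t0 = omp_get_wtime();
        #pragma omp parallel for
        for (long i = 0; i < N; i++)
            a[i] += 3.0 * b[i];      /* read a, read b, write a: ~24 bytes */
        double secs = omp_get_wtime() - t0;
        printf("%2d thread(s): %.2f GB/s effective\n", t, 24.0 * N / secs / 1e9);
    }
    free(a);
    free(b);
    return 0;
}

The kernel does almost no arithmetic on purpose: that keeps it memory-bound, which is exactly the regime the press release is describing.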
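And the peak-then-decline curve Sandia describes falls out of even a crude back-of-the-envelope model: cap the total bandwidth the bus can supply, then charge a small contention tax for every extra core sharing it. The constants below are invented purely to show the shape of the curve; they are not Sandia's numbers.

/* bus_model.c -- a made-up contention model, not Sandia's simulation.
 * Cores demand bandwidth; the bus supplies at most bus_bw of it, and
 * every additional core adds a fixed contention penalty. */
#include <stdio.h>

int main(void) {
    const double bus_bw  = 8.0;   /* total bus bandwidth, arbitrary units (assumed) */
    const double demand  = 1.0;   /* bandwidth one core wants (assumed) */
    const double penalty = 0.3;   /* contention cost per extra core (made up) */

    for (int n = 1; n <= 64; n *= 2) {
        double supplied = (n * demand < bus_bw) ? n * demand : bus_bw;
        double speed = supplied / (1.0 + penalty * (n - 1));
        printf("%2d cores -> relative speed %.2f\n", n, speed);
    }
    return 0;
}

With these numbers the model climbs from two to four cores, gains little from four to eight, and by sixteen is back down to roughly the two-core figure, the same shape as Sandia's result.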


Definitely very interesting. I've wondered in the past how much extra processor cores really help. So how do we get faster computers if more cores aren't the answer? Chip speeds seem to have just about leveled off for the last couple of years. What's the fastest chip on the market today? You'd think that by now people would have 10 GHz processors in their computers, yet the fastest processor speeds I've seen are around 3 GHz. The fastest Mac Pro available comes with eight cores (two quad-core processors) running at 3.2 GHz, and the fastest Dell I see is a quad-core at 3.4 GHz. So where do we get more speed in our computing? Have we reached the speed limit, or are there faster computers waiting somewhere?
