Slide 24 of 40
yuel1

As Professor Kayvon mentioned in class, single-core performance is still improving, just not at the rate it used to.

Berry

I recently saw a video on how exploiting the cache structure and optimizing your code for it, without coming up with complex algorithms, can yield jaw-dropping results (source: http://www.infoq.com/presentations/LMAX). While the problem of sharing data in the L3 cache has a parallel feel to it, I still believe the emphasis was on knowing the hardware rather than on awesome parallel algorithms that confuse the cache into thrashing. Am I wrong here?

vnganesh

It's also worth mentioning that while single-core processors are becoming faster, they are also really expensive. In some cases it might be cheaper to buy a larger number of slower processors rather than a few expensive fast ones. This will of course depend on what the budget is and how much speedup you can get with more cores.

subliminal

Just an FYI, since I don't think this was mentioned explicitly in class: while transistor sizes are still somewhat following Moore's law (as discussed), the reason performance isn't increasing proportionately is attributed to the breakdown of what is known as Dennard scaling: http://en.wikipedia.org/wiki/Dennard_scaling

kayvonf

@subliminal: I'm not close to an expert on device-level issues, but the Wikipedia treatment of Dennard scaling is a little imprecise. At a very high level, Dennard scaling tells us that as a transistor shrinks, so does its required operating voltage and current. The result is that each transistor requires less power and, as a result, the power density of transistors (power consumed per unit area of chip real estate) stays roughly constant as technology improves and transistors shrink. In other words, Dennard scaling says that if today you have a chip with T transistors, and in the future you can pack 2T transistors into a chip with the same area, that new chip will still consume roughly the same amount of power as the current one. (This is a good thing. In a multi-core world, that means you could have two cores in the future for the energy cost of one core today.)

The "breakdown of Dennard scaling" refers to the fact that we're quickly approaching a realm where we can no longer reduce transistor power consumption proportionally with its size (one example: supply voltages are now nearing threshold voltages, the minimum voltages required to operate the transistor). As a result, power density may begin to go up.

So even though technology development is still proceeding in a manner that lets us cram more and more transistors onto a chip of fixed size, in the future we'll reach a point where we can't use them all at the same time because doing so would require too much power. You'll hear a controversial term thrown around these days called "Dark Silicon", referring to the fact that future chips may not be able to use all their transistors at the same time.

The issue is addressed in this article: Power Challenges May End the Multicore Era

Also, some of you may be interested in Stanford's CPU DB Project, which is the subject of this ACM Queue article: CPU DB: Recording Microprocessor History

subliminal

I see. Thanks Professor Kayvon! Incidentally, this was the original article I had read on this topic, I quite liked it: http://www.edn.com/design/systems-design/4368705/The-future-of-computers--Part-1-Multicore-and-the-Memory-Wall