Slide 2 of 79
yey1

Single-instruction stream means a single uni-core processor executes a single instruction stream to operate on data stored in a single memory (from Wikipedia).

tcm

@yey1: I think that you are talking about the "SISD" definition (as opposed to SIMD, MIMD, and MISD, from Flynn's taxonomy). One thing that I would point out is that we may still talk about single instruction streams within a parallel machine. Aside from exploiting SIMD parallelism, the performance of individual instruction streams has not improved much in recent years. (Perhaps I should have said "individual" rather than "single".)

Split_Personality_Computer

We keep mentioning in class the "power wall" about how we can't make CPUs any faster because the heat starts to cause them to melt. Would this imply the next big 'thing' for computers will be nicer cooling systems / more durable materials? Fans do seem rather primitive!

One thing I've often wondered is why chips are always so two-dimensional. Has there ever been any talk about 3D chips, where say the GPU would be on top of the CPU so that they could communicate vertically, while talking to other components horizontally? Wouldn't this solve the problem of heating, while still keeping the chips close enough to communicate efficiently?

We also mentioned in class that this trend of making CPU cores a little bit slower so that you can have multiple cores work in parallel is becoming very popular. I was wondering if single-core chips (that run faster than a single core on multi-core chips) are still popular? Do any supercomputers still keep a few faster single-core chips on hand for any computations that might be very serial?

TJ

1) Why has single-instruction-stream performance only improved very slowly in recent years?

  • We cannot speed up a program simply by adding more transistors because of the power/heat problem.

2) What prevented us from obtaining maximum speedup from the parallel programs we wrote last time?

  • Multiple threads are working on correlated tasks, so the communication overhead between them prevents us from getting the maximum speedup.

jmackama

Somewhat on the same topic as Split_Personality_Computer, I heard that some chip manufacturers are trying to "cheat" the power wall by actually pushing past the stable thermal density of the chip and cycling between cores, both to give heat time to dissipate in other areas and to allow short bursts of peak usage. I was wondering how this might affect the programmer's side of parallelism. Is this something that is or will be baked into hardware? Handled by threading libraries or the operating system? Will programmers need to have some knowledge of the system architecture and specifically cater to each device's thermal situation?

petrichor

@Split_Personality_Computer, building more 3D structure into chips is actually an active area of research, and I think some of it is starting to find its way into commercial chips through the use of through-silicon vias (TSVs), which do basically what you suggest: provide a higher-speed interconnect between, for example, a CPU and an integrated GPU. However, the power dissipation issue gets worse, not better! Heat dissipation depends on the ratio of the surface area that can shed heat to the volume that produces it, so stacking another chip on top of an existing one roughly doubles the heat-producing volume while the heat-shedding surface area stays about the same. This surface-area-to-volume problem is also why passive heat sinks have so many fins.