Slide 35 of 43
Instructor Note

Here's the full writeup on how this graph was produced:


Question: What does this graph suggest about why it's difficult to continue making processors faster by increasing their clock frequency?


The graph shows that increasing the clock speed of a processor consumes more power, which in turn creates more heat. We've hit a threshold where increasing a processor's clock frequency any further would require so much power that it would actually start to melt the CPU.


From the graph, it looks like CPU power consumption grows at a faster-than-linear rate with clock speed, which is bad. I also assume that as modern devices get smaller and thinner, cooling becomes more challenging.


I agree with the earlier comments about how an exponential increase in power consumption can lead to issues with cooling. To add on, from a usability perspective, excessive power consumption is prohibitive in devices such as smartphones where battery life is already limited.


@sbyeon mentioned a good point about the cooling problem. But as far as this graph is concerned, I don't think power consumption is really exponential in clock speed: from Eq(2) in this further reading, it seems that frequency is approximately linear in voltage (and vice versa). Can someone give a more detailed explanation of this point?


@haboric the previous slide addresses that question in more detail.


@hofstee I see that in the comment below the previous slide, you gave a nice intuitive explanation of the relationship between voltage and frequency. What I am asking here is about that relationship in a more quantitative sense. Does voltage increase exponentially (or quadratically, or linearly, or ...) in frequency? That relationship, combined with the proportionality mentioned on this slide, would determine the relationship between power and frequency.


@haboric the two are not directly related. The frequency is limited, roughly, by the capacitance of the transistors: they need to be fully saturated before the next clock edge if you want stable CPU operation. So to decrease the time needed to saturate them (the limiting factor for increasing frequency), we need to increase voltage.

In circuits dealing with frequencies as high, and circuit elements as small, as those in CPUs, there are a large number of other factors that make it very difficult to give a definite answer. "Roughly linear" is probably a reasonable ballpark, though it may vary across hardware implementations. The easiest way to figure this out would probably be to look at power vs. frequency graphs and use the dynamic power consumption formula to solve for voltage.
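A small sketch of the approach suggested above: use the dynamic power model P ≈ C·V²·f to back out voltage from (frequency, power) pairs. The capacitance value and the measurement points below are invented purely for illustration, not taken from any real chip.

```python
# Toy illustration: back out supply voltage from (frequency, power) pairs
# using the dynamic power model P ≈ C * V^2 * f.
# The capacitance C and the data points are invented for illustration.
import math

C = 1.0e-9  # effective switched capacitance in farads (assumed)

# hypothetical (frequency in Hz, dynamic power in watts) measurements
measurements = [
    (1.0e9, 1.0),
    (2.0e9, 3.2),
    (3.0e9, 7.0),
]

for f, p in measurements:
    v = math.sqrt(p / (C * f))  # solve P = C * V^2 * f for V
    print(f"f = {f / 1e9:.1f} GHz  ->  V ~ {v:.2f} V")
```

With these made-up numbers the recovered voltages (about 1.0 V, 1.26 V, 1.53 V) come out roughly linear in frequency, which is consistent with the "roughly linear" ballpark above.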


Intel Turbo Boost is an example of dynamic control of the processor's clock rate. When the operating system requests the highest performance state of the processor, Turbo Boost is activated. Turbo Boost monitors the current usage of a Core i5 or i7 processor to determine how close the processor is to the maximum thermal design power (TDP), the maximum amount of power the processor is allowed to use. When the processor operates well within that limit, Turbo Boost kicks in and raises the clock in steps of 133 MHz up to the maximum Turbo speed.

Turbo Boost also compensates for low parallelism. For example, if a game uses only one core of a four-core processor, Turbo Boost will shut down the other three cores to reach a higher Turbo speed. However, it's not always wise to turn on Turbo Boost, especially if you are playing COD on a laptop with poor heat dissipation, because your laptop will constantly be working at the maximum TDP.
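The stepping behavior described above can be sketched as a toy model: raise the clock in 133 MHz increments as long as an estimated power draw stays under the TDP budget. The cubic power model and every constant here (base clock, TDP, scale factor) are invented for illustration and are not real Turbo Boost parameters.

```python
# Toy model of Turbo Boost-style stepping: raise the clock in 133 MHz steps
# while an estimated power draw stays under the TDP budget.
# The power model (P proportional to f^3) and all constants are invented.

BASE_GHZ = 2.6
STEP_GHZ = 0.133
TDP_WATTS = 45.0
K = 45.0 / 3.5 ** 3  # scale chosen so ~3.5 GHz hits TDP (assumption)

def estimated_power(f_ghz):
    # cubic model: P = C * V^2 * f, with V roughly linear in f
    return K * f_ghz ** 3

f = BASE_GHZ
while estimated_power(f + STEP_GHZ) <= TDP_WATTS:
    f += STEP_GHZ

print(f"turbo clock ~ {f:.3f} GHz, est. power ~ {estimated_power(f):.1f} W")
```

With these made-up constants the loop stops one step short of the frequency where the power estimate would exceed the budget, which is the basic idea of throttled opportunistic overclocking.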


If you are on a laptop with poor heat dissipation you will likely experience throttling: the GPU or CPU will underclock itself to stay within Tj Max (the maximum allowable temperature at the junctions of the chip).


Something else to note: even though dynamic power grows faster than linearly with frequency, many designs aim to "race to idle" by setting the clock speed higher and then spending a longer time in a low-power idle state, instead of running at a lower clock speed for longer. Because idle states draw very little power, and fixed costs like leakage are paid the whole time the chip is awake, this generally saves power.


@hofstee Good point! "Race to idle" (a.k.a. "race to sleep") is a common technique in the design of graphics chips, since their work is typically done on a per-frame basis. For example, a GPU has 1/60 of a second (~16.7 ms) to render a frame. In practice, designs tend to run at peak performance in order to complete rendering as quickly as possible (e.g., in 10 ms), allowing the chip to be put in a low-power state for the remainder of the frame.

In practice this is more effective than trying to keep the chip at a medium performance state and just finishing within the allotted time. For example, it's tough, and potentially quite fragile, to predict how slowly you can run and still finish the next frame in time. Race to idle means you'll never make the mistake of having been able to complete the frame in time but failing to do so because of power management.
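A minimal sketch of the energy argument for racing to idle over one 60 Hz frame. It assumes the voltage is already at its floor, so awake power is a fixed overhead (leakage, uncore) plus a term linear in clock speed, while the deep idle state draws almost nothing; all the numbers are invented for illustration.

```python
# Toy "race to idle" energy comparison over one 60 Hz frame.
# Assumed model: awake power = fixed overhead + linear-in-clock dynamic term
# (voltage pinned at its floor); deep idle draws almost nothing.
# All constants are invented for illustration.

FRAME_S = 1.0 / 60.0   # frame budget: ~16.7 ms
WORK = FRAME_S         # work sized so a 1.0 GHz clock exactly fills the frame
P_STATIC = 2.0         # watts of leakage/uncore while awake (assumed)
K_DYN = 1.0            # watts per GHz of dynamic power (assumed)
P_IDLE = 0.1           # watts in the deep idle state (assumed)

def frame_energy(f_ghz):
    busy = WORK / f_ghz                     # time to finish the frame's work
    awake_power = P_STATIC + K_DYN * f_ghz  # power while running
    return awake_power * busy + P_IDLE * (FRAME_S - busy)

for f in (1.0, 1.5, 2.0, 3.0):
    print(f"{f:.1f} GHz: {frame_energy(f) * 1000:.2f} mJ per frame")
```

Under these assumptions, per-frame energy falls as the clock rises, because finishing sooner cuts the time spent paying the fixed awake-state overhead. With a strongly superlinear power model and negligible fixed costs the comparison can flip, which is why the hedged "generally" above matters.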


This graph points to one of the main reasons why parallelism has become so widespread. Think of frequency as (very roughly) performance in ops/sec, and recall that power and heat are the critical constraints on increasing the performance of modern processors. Then if we replace one core with two cores at half the frequency, we get the same total performance while decreasing the total power, economizing on what is currently the most critical resource: power, and the heat that comes with it.
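The arithmetic behind that trade-off can be made concrete with the (assumed) cubic power model from the discussion above, P proportional to f³ per core; the baseline clock and scale factor are invented for illustration.

```python
# Toy arithmetic for the parallelism argument: replace one core at clock f
# with two cores at f/2. Under the assumed cubic model (P = k * f^3 per
# core, from V roughly linear in f), throughput is unchanged while total
# power drops to a quarter. Constants are invented for illustration.

def core_power(f_ghz, k=1.0):
    return k * f_ghz ** 3  # P = C * V^2 * f with V roughly linear in f

f = 3.0  # GHz, invented baseline clock
one_core = core_power(f)
two_cores = 2 * core_power(f / 2)

print(f"one core  @ {f:.1f} GHz: {one_core:.2f} W")
print(f"two cores @ {f / 2:.1f} GHz: {two_cores:.2f} W "
      f"({two_cores / one_core:.0%} of the power)")
```

Each half-speed core uses (1/2)³ = 1/8 of the power, so two of them together use 1/4, while 2 × f/2 matches the original f ops/sec (assuming the workload parallelizes perfectly).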