Challenge for the ECE folks in the class: Can you give us some intuition as to the relationship between voltage and frequency? For example, why must voltage be increased in order to make transistors switch at higher frequencies?
Question for anyone: What is dynamic voltage scaling (or dynamic frequency scaling) in modern processors?
I believe dynamic voltage scaling is a technique to manage (often to save) power. It involves raising or lowering the voltage depending on the goal: lowering it to reduce power consumption, or raising it to increase performance.
CPUs are composed of transistors, which have some inherent capacitance. To run at a higher frequency, those capacitances must charge and discharge within each (shorter) clock cycle, or the processor becomes unstable. The only way to make the capacitors charge faster is to apply a larger voltage, which increases power and heat to the point where it becomes impractical.
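This intuition matches the standard first-order model of dynamic switching power, P = α·C·V²·f, where α is the activity factor and C the switched capacitance. A small sketch (all numbers are made up for illustration, not real chip parameters) shows why pushing frequency up, which also forces voltage up, makes power grow much faster than linearly:

```python
# First-order dynamic-power model: P = alpha * C * V^2 * f
# alpha = activity factor, C = switched capacitance (farads),
# V = supply voltage (volts), f = clock frequency (Hz).
# All values below are illustrative, not real chip parameters.

def dynamic_power(alpha, capacitance, voltage, frequency):
    """Dynamic switching power in watts."""
    return alpha * capacitance * voltage ** 2 * frequency

# Baseline operating point: 1 nF effective capacitance, 1.2 V, 2 GHz.
base = dynamic_power(0.2, 1e-9, 1.2, 2e9)

# Raise frequency by 25%; voltage must rise too (roughly linearly
# with frequency), so power grows roughly cubically with frequency.
boost = dynamic_power(0.2, 1e-9, 1.5, 2.5e9)

print(base, boost, boost / base)
```

With these made-up numbers, a 25% frequency bump nearly doubles the power draw, which is exactly why "just crank the voltage" stops being a reasonable way to get more frequency.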
I think in a power-constrained environment, you can throttle the Energy Per Instruction (EPI) according to the parallelism available. For example, if a critical section of the program is highly sequential, we can increase the power (voltage/frequency), and if a section is highly vectorizable, decrease it.
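The EPI idea above can be sketched as a toy scheduling policy: under a fixed power budget, a sequential phase concentrates the budget on one fast core, while a parallel phase spreads it across many slower cores. Everything here (the budget, the core count, the `configure` helper) is hypothetical, just to make the tradeoff concrete:

```python
# Toy EPI-throttling sketch: split a fixed power budget across cores
# according to how parallel the current program phase is. All names
# and numbers are hypothetical illustrations.

POWER_BUDGET_W = 80.0
N_CORES = 8

def configure(phase_parallelism):
    """Return (active_cores, watts_per_core) for a program phase.

    phase_parallelism: estimated number of runnable threads in
    the phase (1 for a sequential critical section).
    """
    active = min(phase_parallelism, N_CORES)
    return active, POWER_BUDGET_W / active

# Sequential critical section: all the budget on one core (high V/f).
print(configure(1))    # (1, 80.0)
# Highly parallel/vectorizable section: low V/f per core, many cores.
print(configure(8))    # (8, 10.0)
```

A per-core power share translates (through the V²·f relationship) into a voltage/frequency operating point, so "more watts per core" is shorthand for "run that core at a higher V/f."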
Besides what Calloc has mentioned, decreasing the voltage can also save power indirectly: less heat is generated as the voltage goes down, so the cooling units need to run less often.
@hofstee @doodooloo Is there any way to make the capacitors react faster without applying a larger voltage? I understand that in order to keep the chip cool we need to keep the voltage low, but what other ways can we increase frequency without raising the voltage to an unreasonable level? Is this purely a hardware issue?
@bdebebe: in prior years, the answer was to decrease the size of the transistors. Now we're hitting issues with [quantum] physics more than before since the transistors are getting so small. The Instructor's Note on the previous slide talks about this a bit.
Dynamic voltage/frequency scaling: CPU frequency governors dynamically set the frequency, and to keep the processor stable they have to adjust the voltage as well. Some governors pin the frequency at maximum ('performance') and only decrease it when the processor gets too hot; some scale aggressively, others less so. But it's all about raising the frequency only when it's needed and keeping it low at other times to conserve power or cool down.
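On Linux you can actually see this machinery through the cpufreq sysfs interface. A minimal sketch, assuming a Linux system where the cpufreq subsystem is present (the files may be absent on other OSes or in some VMs, so the reader handles missing paths gracefully):

```python
# Peek at Linux cpufreq state via sysfs. Linux-only: on systems
# without the cpufreq subsystem these files are absent and each
# read simply returns None.
from pathlib import Path

CPU0 = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read(name):
    """Return the stripped contents of a cpufreq file, or None."""
    p = CPU0 / name
    return p.read_text().strip() if p.exists() else None

print("governor:", read("scaling_governor"))            # e.g. "performance" or "powersave"
print("current kHz:", read("scaling_cur_freq"))
print("available:", read("scaling_available_governors"))
```

Writing to `scaling_governor` (as root) is how tools switch between the aggressive and conservative scaling policies described above.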
Turbo-boost is an aspect of this, where the CPU's frequency multiplier is set much higher than the TDP operating point. (I think TDP isn't the "average" power consumed, as the professor said, but rather the power reached when the CPU runs at its maximum non-turbo clock, which is also the value the CPU is guaranteed to sustain at default settings.) For example, if an Intel CPU is sold with a TDP of 82W and a non-turbo maximum clock of 3.5GHz, and it comes with a CPU cooler, then you have a guarantee from Intel that the CPU can stay at 3.5GHz (using 82W) without ever throttling, provided you attach the included cooler properly; that cooler is therefore guaranteed to dissipate at least 82W. Turbo kicks in when the CPU is fairly cool already, letting it run at a higher frequency for a short time on something particularly computationally intensive before throttling back to cool down. If your system has additional headroom (your PSU can supply more than 82W and your cooler can dissipate more than 82W), the CPU may be pushed even further than the Turbo-boost frequency (overclocking, only available for unlocked CPUs on motherboards that support it).
I found a post about the effects of high temperature on processors. Heat can not only burn out the chip immediately but also shorten its lifetime and cause components to fail. The shortened lifetime results from impurities becoming more mobile and starting to diffuse as the temperature rises. Another reason given in the post is broken connections within the processor. Additionally, since some semiconductor circuits depend on low leakage current, and leakage current grows exponentially with temperature, they can malfunction in high heat.
Compared with AMD processors, Intel's are much more power efficient. As a result, AMD's aren't well suited to laptops, where battery life is a large factor, and AMD laptop chips are fairly weak in comparison to compensate for battery performance. I was wondering what differences between the architectures of AMD's and Intel's processors allow Intel to surpass AMD in certain categories.