CPU power dissipation is the process in which central processing units (CPUs) consume electrical energy and dissipate it, both through the action of the switching devices the CPU contains (such as transistors or vacuum tubes) and as heat lost to the impedance of the electronic circuits. Designing CPUs that perform their tasks efficiently without overheating is a major consideration for nearly all CPU manufacturers to date.
Some CPU implementations use very little power; for example, the CPUs in mobile phones often draw just a few hundred milliwatts, and some microcontrollers used in embedded systems may draw only a few milliwatts. In comparison, CPUs in general-purpose personal computers, such as desktops and laptops, dissipate significantly more power because of their higher complexity and speed: these microelectronic CPUs may consume on the order of a few watts to hundreds of watts. Historically, early CPUs implemented with vacuum tubes consumed power on the order of many kilowatts.
CPUs for desktop computers typically consume a significant portion of the power used by the computer. Other major consumers include fast video cards, which contain graphics processing units, and the power supply itself. In laptops, the LCD backlight also uses a significant share of overall power. While energy-saving features have been instituted in personal computers for when they are idle, the overall consumption of today's high-performance CPUs is considerable. This stands in strong contrast with the much lower energy consumption of CPUs designed for low-power environments. One such CPU, the Intel XScale, can run at 600 MHz on only half a watt of power, whereas x86 PC processors from Intel in the same performance bracket consume roughly eighty times as much energy.
There are engineering reasons for this pattern. For a given device, operating at a higher clock rate always requires more power, so reducing the microprocessor's clock rate through power management when possible reduces energy consumption. New features generally require more transistors, each of which uses power; turning off unused areas, for example through clock gating, saves energy. As a processor model's design matures, smaller transistors, lower-voltage structures, and accumulated design experience may reduce energy consumption.
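The relationship between clock rate, voltage, and power described above can be sketched with the common first-order model of dynamic CPU power, P = C·V²·f. This model is not stated in the article, and the capacitance, voltage, and frequency values below are hypothetical, chosen only to illustrate why lowering frequency (and with it voltage) saves energy.

```python
def dynamic_power(capacitance_f: float, voltage_v: float, frequency_hz: float) -> float:
    """First-order estimate of dynamic power in watts: P = C * V^2 * f.

    capacitance_f: effective switched capacitance in farads
    voltage_v:     supply voltage in volts
    frequency_hz:  clock frequency in hertz
    """
    return capacitance_f * voltage_v ** 2 * frequency_hz

# Hypothetical chip: 1 nF switched capacitance, 1.2 V, 3 GHz.
full = dynamic_power(1e-9, 1.2, 3.0e9)    # 4.32 W
# Halving both voltage and frequency (as power management might)
# cuts dynamic power to one eighth, since V enters quadratically.
scaled = dynamic_power(1e-9, 0.6, 1.5e9)  # 0.54 W
print(full / scaled)  # 8.0
```

Because voltage contributes quadratically, voltage scaling is the larger lever; frequency scaling alone would only cut power linearly.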
In many applications, the CPU and other components are idle much of the time, so idle power contributes significantly to overall system power usage. When the CPU uses power-management features to reduce energy use, other components, such as the motherboard and chipset, account for a larger proportion of the computer's energy. In applications where the computer is often heavily loaded, such as scientific computing, performance per watt – how much computing the CPU does per unit of energy – becomes more significant.
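A minimal sketch of why idle power matters: average power over a workload is a duty-cycle-weighted mix of active and idle power. The wattages and duty cycle below are hypothetical, not taken from the article.

```python
def average_power(active_w: float, idle_w: float, duty_cycle: float) -> float:
    """Average system power for a workload that is active a fraction
    `duty_cycle` of the time and idle the rest."""
    return duty_cycle * active_w + (1.0 - duty_cycle) * idle_w

# Hypothetical desktop: 65 W active, 5 W idle, busy 5% of the time.
avg = average_power(65.0, 5.0, 0.05)  # 8.0 W
idle_share = (0.95 * 5.0) / avg       # idle power is ~59% of the average
print(avg, idle_share)
```

At low duty cycles the idle term dominates, which is why idle-state power management pays off for lightly loaded machines, while performance per watt matters more for machines that are busy most of the time.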
For years, processor makers consistently delivered increases in clock rates and instruction-level parallelism, so single-threaded code ran faster on newer processors with no modification. Now, to manage CPU power dissipation, processor makers favor multi-core chip designs, and software must be written in a multi-threaded or multi-process manner to take full advantage of the hardware. Many multi-threaded development paradigms introduce overhead and do not show a linear increase in speed versus the number of processors. This is particularly true when accessing shared or dependent resources, due to lock contention, and the effect becomes more noticeable as the number of processors increases. Recently, IBM has been exploring ways to distribute computing power more efficiently by mimicking the distributional properties of the human brain.
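One standard way to quantify the sublinear scaling described above (not named in the article itself) is Amdahl's law, which bounds the speedup by the fraction of work that cannot be parallelized. The 90% parallel fraction below is a hypothetical figure for illustration.

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Amdahl's law: upper bound on speedup when only `parallel_fraction`
    of the work can run on `n_processors` in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# Even with 90% of the work parallelizable, 16 cores give well
# under a 16x speedup; the serial 10% dominates as cores are added.
print(amdahl_speedup(0.9, 16))  # 6.4
```

In practice lock contention on shared resources effectively grows the serial fraction as processor counts rise, making the observed scaling worse than this idealized bound.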


