Moore’s law originates in a 1965 paper by Gordon E. Moore. In its commonly cited form, it states that the density of transistors on an integrated circuit doubles roughly every two years. What does that mean? To understand its significance, you have to understand that the transistor was the greatest leap in computer technology to its date. At the time, computers were large devices that filled whole floors of buildings. Their vacuum tubes had to be inspected by hand for burned-out filaments, and their electromechanical relays for physical faults; in one memorable case, engineers found an actual moth that had shorted out a relay switch. That first “bug” was indicative of the feeble nature of these machines. They worked with moving parts that were prone to all the failures of a physical system in addition to those of an electrical system, and they expelled so much heat that a long run of successful operation could overheat the floor where they were stored.
Why does Moore’s law apply specifically to transistors?
Transistors, however, have no moving parts. This was solid-state technology, where changes in logical state were made by purely electrical means. The lack of moving parts made a transistor circuit both faster and less prone to malfunction: short of being physically broken, it could be depended on to remain in the same physical condition. What is more, rather than waiting for metal pieces to move, the circuit was limited only by how fast electrical signals propagate, a large fraction of the speed of light. In terms of speed and dependability, the transistor was far ahead of the state of the art of its time. An integrated-circuit computer could be built much smaller than a vacuum-tube machine of similar ability. Although at this stage computers still filled rooms and even entire floors, they were no longer great tunnels of heated bulbs. Instead they were quiet boxes which simply ran.
Why, then, should the density of transistors be an issue? The basic logic of any transistor circuit is expressed in terms of the elementary logical functions AND, OR and NOT. As the names indicate, these are logical checks that two inputs are both true (AND), that at least one is true (OR), and a conversion of true to false or false to true (NOT). Each of these logic gates requires a number of transistors to build, so the number of transistors which can be placed on a circuit is a direct limit on the number of logical checks the circuit can make. Add to this the fact that even small distances between transistors add signal delay between computations, and we arrive at a second limit on the speed of a logic circuit. All told, the higher the density of transistors on a circuit, the faster it runs and the more information it can process.
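The link between gates and transistor budgets can be sketched in code. The sketch below builds a half-adder (a one-bit addition circuit) out of only AND, OR and NOT, and tallies the transistors consumed; the per-gate counts are typical static-CMOS figures used here for illustration, not measurements of any particular process.

```python
# Simulate the three primitive gates and count the (illustrative)
# static-CMOS transistors each one consumes: NOT = 2, AND = OR = 6.
TRANSISTORS = {"AND": 6, "OR": 6, "NOT": 2}

used = 0  # running transistor budget

def gate(name, fn, *inputs):
    """Evaluate one gate and tally its transistor cost."""
    global used
    used += TRANSISTORS[name]
    return fn(*inputs)

AND = lambda a, b: gate("AND", lambda x, y: x & y, a, b)
OR  = lambda a, b: gate("OR",  lambda x, y: x | y, a, b)
NOT = lambda a:    gate("NOT", lambda x: 1 - x, a)

def half_adder(a, b):
    """One-bit adder: sum = (a OR b) AND NOT(a AND b); carry = a AND b."""
    carry = AND(a, b)
    s = AND(OR(a, b), NOT(carry))
    return s, carry

print(half_adder(1, 1))            # (0, 1): one plus one is binary 10
print("transistors used:", used)   # 20 with these illustrative counts
```

Even this trivial circuit spends twenty transistors on a single bit of addition, which is why density translates so directly into computational power.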
From the time Moore stated his famous law, the history of transistor density and computing speed has more or less followed his prediction. Comparing computers from the 1970s with those of the 1980s is hardly fair: doubling every two to three years compounds to roughly a ten- to thirty-fold improvement over a decade. A very tangible example of this fact is that in the 1970s it was inconceivable that a computer which filled entire floors of office buildings and required special cooling devices could be used for personal work at home. Yet, in the 1980s we saw the emergence of the PC, Apple and the first Windows system. Comparing 1980s technology such as the original Nintendo with the Nintendo 64 of the 1990s is equally lopsided. Even today one can buy a computer and expect that within three to six years it will be relatively “low end” for the market.
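The compounding behind these comparisons is simple exponential arithmetic, which a few lines make concrete:

```python
# Compound doubling: projected growth factor in transistor density
# after `years` years, assuming one doubling every `period` years.
def density_growth(years, period):
    return 2 ** (years / period)

for period in (2, 3):
    print(f"doubling every {period} years -> "
          f"{density_growth(10, period):.0f}x per decade")
```

A two-year doubling period gives a 32-fold increase per decade, and a three-year period about tenfold, which is why a decade-apart comparison is never close.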
Will Moore’s law hold true in the future?
This “law” has held true for four decades because of the original state of the art and the industry’s effort to outpace itself. As time has gone by, however, the integrated circuit has run into an increasingly difficult barrier to greater density: heat. As electrons travel through an integrated circuit they generate heat, in much the same way that they generate heat traveling through the coils of your electric stove. Although far smaller in scale and size, the density of the circuits and the speed at which they switch create a heat source not dissimilar to that stove. To carry the heat away from the circuit, the industry has attached heat sinks and created ever more complex and innovative cooling mechanisms. By the new millennium, however, the cooling mechanisms had become as large as, or larger than, the circuits they were cooling. Obviously this trend could not continue. The computer would soon be getting larger again, just to hold the cooling system.
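The heat argument can be made quantitative with the standard formula for the dynamic power of a CMOS circuit, P = C · V² · f (switched capacitance times supply voltage squared times clock frequency). The capacitance and voltage values below are invented for illustration, not measurements of any real chip, but the proportionality is the point: at fixed voltage, doubling the clock doubles the heat.

```python
# Dynamic power of a CMOS circuit: P = C * V^2 * f.
# The 1 nF / 1.2 V figures are illustrative assumptions only.
def dynamic_power(capacitance_f, voltage_v, freq_hz):
    return capacitance_f * voltage_v ** 2 * freq_hz

p1 = dynamic_power(1e-9, 1.2, 1.5e9)  # hypothetical chip at 1.5 GHz
p2 = dynamic_power(1e-9, 1.2, 3.0e9)  # same chip pushed to 3 GHz
print(p1, p2)  # p2 is exactly twice p1
```

Density makes this worse still, because packing more switching transistors into the same area raises the effective capacitance C while shrinking the surface available to shed the heat.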
In the face of such an overwhelming obstacle in the laws of physics, some have begun to look at multi-core processing. While Moore’s law still applies to a variety of circuits in the computer, the CPU, or central processing unit, is where the worst heat effects are found. A multi-core processor contains several separate and distinct CPU cores, each running at a lower clock rate and thus producing less heat. The advantage of having multiple cores comes from the multiplying effect of their work. Rather than pay for an expensive CPU clocked at 3 GHz (very roughly, 3 billion logical checks per second), a multi-core machine can use three less expensive cores which run at only 1.1 GHz each. The multi-core machine performs about 3.3 billion checks in the same time, a 10% increase, without the need for very expensive hardware or cooling devices. The strategy benefits further from the fact that using many modestly clocked, but by no means low-grade, cores multiplies the effect. This appears to be the wave of future expansion in computing.
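The 10% figure follows from a simple aggregate-throughput calculation, shown below under the idealised assumption that the workload parallelises perfectly across cores:

```python
# Aggregate throughput of a multi-core design vs. one fast core,
# assuming (idealistically) that the work splits perfectly.
def aggregate_ghz(cores, ghz_per_core):
    return cores * ghz_per_core

single = aggregate_ghz(1, 3.0)  # one expensive 3 GHz CPU
multi = aggregate_ghz(3, 1.1)   # three cheaper 1.1 GHz cores
print(round(multi / single - 1, 2))  # 0.1, the 10% gain described above
```

Real workloads rarely split perfectly, which is exactly the software challenge discussed next.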
In the near future Moore’s law will continue to hold as other, less intensely used circuits benefit from further density. There is, however, a physical limit, and we are approaching it quickly. When that limit is reached, Moore’s law may no longer be relevant. In practical terms, the use of multi-core systems may continue to increase the speed of computers, and by the time we reach the physical limit of Moore’s law, new technologies may replace the transistor in the same way it replaced the vacuum tube. For the time being, however, the largest challenge in computing will be to learn effective techniques for making use of the new multi-core systems without losing anything in accuracy and reliability.
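The standard way to quantify this challenge, though the text does not name it, is Amdahl’s law: if only a fraction p of a program can run in parallel, the speedup on n cores is 1 / ((1 − p) + p/n), so even a small serial fraction caps the benefit of adding cores.

```python
# Amdahl's law: speedup on n cores when a fraction p of the work
# parallelises perfectly and the rest must run serially.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of the work parallel, 16 cores give only 6.4x:
for n in (2, 4, 8, 16):
    print(n, "cores ->", round(amdahl_speedup(0.9, n), 2), "x")
```

This is why writing software that keeps the serial fraction small, while remaining correct and reliable, is the real frontier of the multi-core era.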