
    CPU (Central Processing Unit)
    K-12 Experiments and Background Information
    For Science Labs, Lesson Plans, Class Activities & Science Fair Projects
    For Elementary, Middle and High School Students and Teachers







    CPU Experiments

    CPU Background Information

    Definition

    The Central Processing Unit (CPU) is the portion of a computer system that carries out the instructions of a computer program, and is the primary element carrying out the computer's functions.

    Basics

    A CPU (Central Processing Unit) is an important part of every computer. The CPU is like the brain; its job is to carry out instructions and calculations.

    The CPU is an electronic machine that works through a list of things to do. It reads the list one item at a time, carrying out each instruction in order. A list of instructions that a CPU can read is a computer program. Mathematicians use an idealized model of such a machine, called a Turing machine, to reason about what computers can do.

    Here are some of the basic things a CPU can do:

    • Add two numbers together
    • Test to see if one number is larger than another
    • Move a number from one place to another
    • Get a number from memory
    • Jump to another place in the instruction list

    Even very complicated programs can be made by combining many simple instructions like these. This is possible because each instruction takes only a tiny fraction of a second to execute. Many CPUs today can carry out more than 1 billion instructions in a single second. In general, the more a CPU can do in a given time, the faster it is. One way to measure a processor's speed is MIPS (millions of instructions per second). FLOPS (floating-point operations per second) and CPU clock speed (usually measured in gigahertz) are other ways to measure how much work a processor can do in a certain time.
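
    To see how simple instructions combine into a program, here is a minimal sketch in Python of a toy machine that reads a list of instructions one at a time. The instruction names (LOAD, ADD, JUMP_IF_LESS) and the little program are invented for illustration; real CPUs use binary machine code.

        # A toy "CPU" that reads a list of instructions one at a time.
        # Instruction names and encoding are invented for illustration.

        def run(program, memory):
            pc = 0  # program counter: which instruction we are on
            while pc < len(program):
                op, *args = program[pc]
                if op == "LOAD":            # get a number from memory
                    dst, addr = args
                    memory[dst] = memory[addr]
                elif op == "ADD":           # add two numbers together
                    dst, a, b = args
                    memory[dst] = memory[a] + memory[b]
                elif op == "JUMP_IF_LESS":  # test, then jump to another place in the list
                    a, b, target = args
                    if memory[a] < memory[b]:
                        pc = target
                        continue
                pc += 1
            return memory

        # Add 1 to "counter" until it reaches "limit": a loop built from simple steps.
        memory = {"counter": 0, "one": 1, "limit": 5}
        program = [
            ("ADD", "counter", "counter", "one"),    # counter = counter + 1
            ("JUMP_IF_LESS", "counter", "limit", 0), # loop back while counter < limit
        ]
        print(run(program, memory))  # counter ends up at 5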

    A CPU is built out of logic gates; it has no moving parts. The CPU of a computer is connected electronically to other parts of the computer, like the video card or the BIOS. A computer program can control these peripherals by reading or writing numbers to special places in the computer's memory.
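
    As a small illustration of being "built out of logic gates," the following sketch uses plain Python functions to stand in for hardware AND, OR, and XOR gates, and wires them into a one-bit full adder, the classic building block behind the "add two numbers" instruction. This is a model for learning, not how gates are actually manufactured.

        # Logic gates modeled as Python functions on bits (0 or 1).
        def AND(a, b): return a & b
        def OR(a, b):  return a | b
        def XOR(a, b): return a ^ b

        def full_adder(a, b, carry_in):
            """Add three bits; return (sum bit, carry-out bit)."""
            partial = XOR(a, b)
            total = XOR(partial, carry_in)
            carry_out = OR(AND(a, b), AND(partial, carry_in))
            return total, carry_out

        # Chain full adders to add two 4-bit numbers, the way an ALU does.
        def add4(x, y):
            carry, result = 0, 0
            for i in range(4):
                bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
                result |= bit << i
            return result

        print(add4(0b0101, 0b0011))  # 5 + 3 = 8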

    Topics of Interest

    The Central Processing Unit (CPU) or the processor is the portion of a computer system that carries out the instructions of a computer program, and is the primary element carrying out the computer's functions. This term has been in use in the computer industry at least since the early 1960s. The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation remains much the same.

    Early CPUs were custom-designed as a part of a larger, sometimes one-of-a-kind, computer. However, this costly method of designing custom CPUs for a particular application has largely given way to the development of mass-produced processors that are made for one or many purposes. This standardization trend generally began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of these digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in everything from automobiles to cell phones and children's toys.

    History: Computers such as the ENIAC had to be physically rewired in order to perform different tasks; these machines are thus often referred to as "fixed-program computers." Since the term "CPU" is generally defined as a software (computer program) execution device, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer, such as EDVAC, one of the first electronic stored-program computers.

    While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both. Most modern CPUs are primarily von Neumann in design, but elements of the Harvard architecture are commonly seen as well.

    As a digital device, a CPU is limited to a set of discrete states, and requires some kind of switching elements to differentiate between and change states. Prior to commercial development of the transistor, electrical relays and vacuum tubes (thermionic valves) were commonly used as switching elements. Although these had distinct speed advantages over earlier, purely mechanical designs, they were unreliable for various reasons. For example, building direct current sequential logic circuits out of relays requires additional hardware to cope with the problem of contact bounce. While vacuum tubes do not suffer from contact bounce, they must heat up before becoming fully operational, and they eventually cease to function due to slow contamination of their cathodes that occurs in the course of normal operation. If a tube's vacuum seal leaks, as sometimes happens, cathode contamination is accelerated. Usually, when a tube failed, the CPU would have to be diagnosed to locate the failed component so it could be replaced. Therefore, early electronic (vacuum tube based) computers were generally faster but less reliable than electromechanical (relay based) computers.

    Discrete transistor and integrated circuit CPUs: The design complexity of CPUs increased as various technologies facilitated building smaller and more reliable electronic devices. The first such improvement came with the advent of the transistor. Transistorized CPUs during the 1950s and 1960s no longer had to be built out of bulky, unreliable, and fragile switching elements like vacuum tubes and electrical relays. With this improvement, more complex and reliable CPUs were built onto one or several printed circuit boards containing discrete (individual) components. An example is the DEC PDP-8/I, whose CPU, core memory, and external bus interface were made of medium-scale integrated circuits.

    Transistor-based computers had several distinct advantages over their predecessors. Aside from facilitating increased reliability and lower power consumption, transistors also allowed CPUs to operate at much higher speeds because of the short switching time of a transistor in comparison to a tube or relay. Thanks to both the increased reliability and the dramatically increased speed of the switching elements (which were almost exclusively transistors by this time), CPU clock rates in the tens of megahertz were obtained during this period. Additionally, while discrete transistor and IC CPUs were in heavy usage, new high-performance designs like SIMD (Single Instruction, Multiple Data) vector processors began to appear. These early experimental designs later gave rise to the era of specialized supercomputers like those made by Cray Inc.

    The introduction of the microprocessor in the 1970s significantly affected the design and implementation of CPUs. Since the introduction of the first commercially available microprocessor (the Intel 4004) in 1971 and the first widely used microprocessor (the Intel 8080) in 1974, this class of CPUs has almost completely overtaken all other central processing unit implementation methods. Mainframe and minicomputer manufacturers of the time launched proprietary IC development programs to upgrade their older computer architectures, and eventually produced instruction-set-compatible microprocessors that were backward-compatible with their older hardware and software. Combined with the advent and eventual vast success of the now ubiquitous personal computer, the term "CPU" is now applied almost exclusively to microprocessors.

    Operation: The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions called a program. The program is represented by a series of numbers that are kept in some kind of computer memory. There are four steps that nearly all CPUs use in their operation: fetch, decode, execute, and writeback.
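
    Here is a minimal sketch of those four steps written out in Python. The numeric instruction encoding (opcode plus operands) is invented for illustration:

        # The fetch-decode-execute-writeback cycle, written out explicitly.
        # The encoding (opcode, operands) is invented for illustration.

        registers = [0, 0, 0, 0]
        program = [(1, 0, 7), (1, 1, 5), (2, 2, 0, 1)]  # SET r0=7; SET r1=5; ADD r2=r0+r1
        pc = 0

        while pc < len(program):
            instruction = program[pc]                           # 1. fetch the next instruction
            opcode, operands = instruction[0], instruction[1:]  # 2. decode it
            if opcode == 1:                                     # 3. execute: SET register to a constant
                dest, result = operands[0], operands[1]
            elif opcode == 2:                                   #    execute: ADD two registers
                dest = operands[0]
                result = registers[operands[1]] + registers[operands[2]]
            registers[dest] = result                            # 4. writeback: store the result
            pc += 1

        print(registers)  # [7, 5, 12, 0]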

    CPU design focuses on these areas:

    • Datapaths (such as ALUs and pipelines)
    • Control unit: logic which controls the datapaths
    • Memory components such as register files, caches
    • Clock circuitry such as clock drivers, PLLs, clock distribution networks
    • Pad transceiver circuitry
    • Logic gate cell library which is used to implement the logic

    Clock rate: Most CPUs, and indeed most sequential logic devices, are synchronous in nature. That is, they are designed and operate on assumptions about a synchronization signal. This signal, known as a clock signal, usually takes the form of a periodic square wave. By calculating the maximum time that electrical signals need to propagate through the various branches of a CPU's many circuits, the designers can select an appropriate period for the clock signal.
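
    Since the clock period is simply the reciprocal of the clock rate, a designer who knows the slowest signal path can compute the fastest safe clock. A quick sketch, with invented numbers:

        # Clock period is the reciprocal of clock rate: T = 1 / f.
        clock_rate_hz = 3.0e9                          # a 3 GHz clock (illustrative)
        period_s = 1.0 / clock_rate_hz
        print(f"{period_s * 1e12:.1f} ps per cycle")   # ~333.3 ps

        # Conversely, if the slowest circuit path needs 400 ps to settle,
        # the clock can run no faster than:
        slowest_path_s = 400e-12
        max_rate_hz = 1.0 / slowest_path_s
        print(f"{max_rate_hz / 1e9:.1f} GHz maximum")  # 2.5 GHz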

    Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently ("in parallel"). There are several different forms of parallel computing: bit-level, instruction level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multicore processors.
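
    As a small illustration of dividing a large problem into parts solved concurrently, this sketch uses Python's standard concurrent.futures module to sum a million numbers in four parallel chunks (the chunk count is arbitrary):

        # Split one large sum into chunks and compute them concurrently.
        from concurrent.futures import ProcessPoolExecutor

        def chunk_sum(chunk):
            return sum(chunk)

        if __name__ == "__main__":
            numbers = list(range(1_000_000))
            n = 4  # number of worker processes (e.g., one per core)
            chunks = [numbers[i::n] for i in range(n)]
            with ProcessPoolExecutor(max_workers=n) as pool:
                partials = pool.map(chunk_sum, chunks)  # solved "in parallel"
            print(sum(partials))  # same answer as sum(numbers)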

    The performance or speed of a processor depends on factors such as the clock rate and the instructions per clock (IPC), which together determine the instructions per second (IPS) the CPU can perform. Many reported IPS values have represented "peak" execution rates on artificial instruction sequences with few branches, whereas realistic workloads consist of a mix of instructions and applications, some of which take longer to execute than others. The performance of the memory hierarchy also greatly affects processor performance, an issue barely considered in MIPS calculations. Because of these problems, various standardized tests such as SPECint have been developed to attempt to measure the real effective performance in commonly used applications.
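
    That relationship can be written as IPS = IPC × clock rate. A quick worked example, with invented figures:

        # Instructions per second = instructions per clock x clock rate.
        clock_rate_hz = 3.2e9   # 3.2 GHz (illustrative)
        ipc = 2.5               # average instructions per clock (illustrative)
        ips = ipc * clock_rate_hz
        print(f"{ips / 1e9:.1f} billion instructions per second")  # 8.0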

    Processing performance of computers is increased by using multi-core processors, which essentially plug two or more individual processors (called cores in this sense) into one integrated circuit. Ideally, a dual-core processor would be nearly twice as powerful as a single-core processor. In practice, however, the performance gain is far smaller, only about fifty percent, because of imperfect software algorithms and implementations, among other factors.
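
    The article does not name it, but the usual way to reason about why extra cores give less than proportional gains is Amdahl's law: if a fraction p of a program can run in parallel, the speedup on n cores is 1 / ((1 − p) + p/n). A sketch:

        # Amdahl's law: speedup on n cores when only fraction p of the work is parallel.
        def speedup(p, n):
            return 1.0 / ((1.0 - p) + p / n)

        # If 2/3 of a program parallelizes, two cores give only a 1.5x speedup,
        # matching the "about fifty percent" gain mentioned above.
        print(f"{speedup(2/3, 2):.2f}x")  # 1.50x
        print(f"{speedup(2/3, 8):.2f}x")  # 2.40x - more cores, diminishing returns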

    A CPU cache is a cache used by the central processing unit of a computer to reduce the average time to access memory. The cache is a smaller, faster memory which stores copies of the data from the most frequently used main memory locations. As long as most memory accesses are to cached memory locations, the average latency of memory accesses will be closer to the cache latency than to the latency of main memory.
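
    That claim about average latency can be made precise with the standard average-memory-access-time formula: average = hit rate × cache latency + miss rate × main-memory latency. A sketch with invented latencies:

        # Average memory access time with a cache (illustrative latencies).
        cache_latency_ns = 1.0     # hit: data found in the cache
        memory_latency_ns = 100.0  # miss: must go to main memory
        hit_rate = 0.95            # 95% of accesses hit the cache

        average_ns = hit_rate * cache_latency_ns + (1 - hit_rate) * memory_latency_ns
        print(f"{average_ns:.2f} ns on average")  # 5.95 ns - far closer to 1 ns than 100 ns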

    CPU time (or CPU usage, process time) is the amount of time for which a central processing unit (CPU) was used for processing instructions of a computer program, as opposed to, for example, waiting for input/output operations. The CPU time is often measured in clock ticks or as a percentage of the CPU capacity. It is used as a point of comparison for CPU usage of a program.
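
    Python's standard library can demonstrate the distinction directly: time.process_time() counts only CPU time, while time.perf_counter() counts wall-clock time, including time spent waiting.

        import time

        cpu_start, wall_start = time.process_time(), time.perf_counter()

        sum(i * i for i in range(2_000_000))  # pure computation: uses CPU time
        time.sleep(1.0)                       # waiting: uses wall time, not CPU time

        cpu_used = time.process_time() - cpu_start
        wall_used = time.perf_counter() - wall_start
        print(f"CPU time: {cpu_used:.2f} s, wall time: {wall_used:.2f} s")
        print(f"CPU usage: {100 * cpu_used / wall_used:.0f}%")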

    Central processing unit power dissipation, or CPU power dissipation, is the process in which central processing units (CPUs) consume electrical energy and dissipate it as heat, both through the action of the switching devices contained in the CPU (such as transistors or vacuum tubes) and through energy lost to the impedance of the electronic circuits. Designing CPUs that perform their tasks efficiently without overheating is a major consideration for nearly all CPU manufacturers to date.
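
    In modern CMOS chips, the switching part of this consumption is commonly approximated by the dynamic power formula P ≈ C × V² × f (switched capacitance × supply voltage squared × clock frequency). A sketch with invented figures:

        # Dynamic power: P = C * V^2 * f (switched capacitance, voltage, frequency).
        def dynamic_power(capacitance_f, voltage_v, frequency_hz):
            return capacitance_f * voltage_v**2 * frequency_hz

        # Illustrative figures only; real chips do not publish these directly.
        p = dynamic_power(capacitance_f=1e-9, voltage_v=1.2, frequency_hz=3e9)
        print(f"{p:.1f} W")  # 4.3 W

        # Lowering voltage helps quadratically: 1.0 V at the same frequency...
        print(f"{dynamic_power(1e-9, 1.0, 3e9):.1f} W")  # 3.0 W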

    Source: Wikipedia (All text is available under the terms of the GNU Free Documentation License and Creative Commons Attribution-ShareAlike License.)

    Useful Links
    Computer Science and Engineering Science Fair Projects and Experiments
    General Science Fair Project Resources
    Electronics & Computer Project Books

                  




