A video card (also called a graphics card or a graphics accelerator) is a circuit board that controls what is shown on a computer monitor and performs the calculations needed to draw 2D and 3D images.
A video card can handle two types of images: two-dimensional (2D) images, such as a Windows desktop, and three-dimensional (3D) images, such as those in computer games. Computer-aided design (CAD) programs are often used by architects and designers to create 3D models on their computers. If a computer has a very fast video card, the architect can work with very detailed 3D models.
Many computers have basic video and graphics capabilities built into the motherboard. These "onboard" video chips are not as fast as dedicated graphics cards, but they are fast enough for basic computer use and even some simple computer games. If a computer user wants faster and more detailed graphics, a video card can be installed.
Video cards have their own processor, called a Graphics Processing Unit (GPU), which is separate from the computer's main processor, the Central Processing Unit (CPU). The CPU processes all the calculations needed to make the computer function. The GPU handles 3D graphics calculations so the CPU does not have to. These calculations take a lot of processing power, so having a video card handle them lets the CPU focus on other things, like running computer programs.
Video cards also have their own memory, separate from the main computer memory, and it is usually much faster. This helps the GPU do its graphics calculations even faster. Most video cards also allow more than one monitor to be plugged in at a time, so the computer user can work across several screens at once. Graphics manufacturers Nvidia and ATI have technologies that allow two identical cards to be linked together in a single computer for much faster performance: Nvidia calls its technology SLI, and ATI calls its technology CrossFire. Some modern graphics cards can even process physics calculations to create even more realistic-looking 3D worlds.
Video cards typically connect to a motherboard through the Peripheral Component Interconnect (PCI), the Accelerated Graphics Port (AGP) or the Peripheral Component Interconnect Express (PCI Express or PCI-E) connection. PCI-E is the newest and fastest of the three; most, if not all, new video cards and motherboards use it. Before PCI-E, AGP was the standard connection for video cards. Before AGP, video cards were designed for PCI (sometimes called "regular" PCI).
History: In the early years of computing, graphics processing was very basic and could be done by the CPU along with all its other work. However, as computer games advanced and started using 3D graphics, the CPU had too much to do, and CPU makers could not make their chips faster quickly enough. Dedicated video cards were developed to solve this problem. A video card has its own processor, called the Graphics Processing Unit (GPU). This lets the CPU do more work, since it no longer has to spend time on advanced graphics calculations; it simply passes them off to the GPU.
The first video cards connected to the motherboard via the PCI connection, which is still considered the standard motherboard connection for most add-on cards. The first popular video cards were manufactured by companies like 3dfx, ATI, and Matrox. As video cards grew in importance, a new connection standard called the Accelerated Graphics Port (AGP) was developed. It was the first motherboard connection designed exclusively for video cards, and it was much faster at transferring information between the video card and the rest of the computer. Eventually the AGP connection became outdated, and PCI Express (PCI-E) became the standard for video cards. Most video cards manufactured today use PCI-E to connect to the motherboard.
Topics of Interest
A video card, video adapter, graphics-accelerator card, display adapter or graphics card is an expansion card whose function is to generate and output images to a display. Many video cards offer added functions, such as accelerated rendering of 3D scenes and 2D graphics, video capture, TV-tuner adapter, MPEG-2 and MPEG-4 decoding, FireWire, light pen, TV output, or the ability to connect multiple monitors (multi-monitor), while other modern high performance cards are used for more graphically demanding purposes such as PC games.
Video hardware can be integrated on the motherboard, as was often the case with early computers; in this configuration it was sometimes referred to as a video controller or graphics controller.
The first IBM PC video card, released with the first IBM PC in 1981, was developed by IBM. The MDA (Monochrome Display Adapter) could work only in text mode, displaying 80 columns by 25 lines (80×25) on the screen. It had 4 KB of video memory and supported just one color.
Starting with the MDA in 1981, several video cards were released, which are summarized in the attached table.
VGA was widely adopted after its introduction in 1987, which led companies such as ATI, Cirrus Logic and S3 to build on the standard, improving its resolution and the number of colors it could display. This developed into the SVGA (Super VGA) standard, which reached 2 MB of video memory and a resolution of 1024×768 in 256-color mode.
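A quick sanity check of those figures, as a rough sketch: in 256-color mode each pixel needs 8 bits (one byte), so a single 1024×768 frame fits comfortably in 2 MB of video memory.

```python
# Quick check of the figures above: memory needed for one
# 1024x768 frame in 256-color (8 bits per pixel) mode.

width, height = 1024, 768
bytes_per_pixel = 1            # 256 colors = 8 bits = 1 byte per pixel
frame_kib = width * height * bytes_per_pixel / 1024
print(frame_kib)  # 768.0 KiB, well within 2 MB of video memory
```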
In 1995 the first consumer 2D/3D cards were released, developed by Matrox, Creative, S3, ATI and others. These video cards followed the SVGA standard, but incorporated 3D functions.
From 2002 onwards, the video card market came to be dominated almost entirely by the competition between ATI and Nvidia, with their Radeon and GeForce lines respectively, taking around 90% of the independent graphics card market between them, while other manufacturers were forced into much smaller, niche markets.
Components: A modern video card consists of a printed circuit board on which the components are mounted. These include:
Graphics processing unit (GPU): A GPU is a dedicated processor optimized for accelerating graphics. The processor is designed specifically to perform floating-point calculations, which are fundamental to 3D graphics rendering. The main attributes of the GPU are the core clock frequency, which typically ranges from 250 MHz to 4 GHz, and the number of pipelines (vertex and fragment shaders), which translate a 3D image characterized by vertices and lines into a 2D image formed by pixels.
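The vertex-to-pixel translation mentioned above can be sketched in a few lines. This is a minimal illustration, not how any real GPU is programmed: it assumes a simple pinhole-style perspective projection with hypothetical camera parameters.

```python
# Minimal sketch of the vertex-to-pixel step a GPU performs:
# a simple perspective projection (hypothetical camera parameters).

def project_vertex(x, y, z, focal_length=1.0, width=640, height=480):
    """Project a 3D point (camera space, z > 0) onto a 2D pixel grid."""
    # Perspective divide: points farther away land closer to the center.
    ndc_x = focal_length * x / z
    ndc_y = focal_length * y / z
    # Map normalized coordinates [-1, 1] to pixel coordinates.
    px = int((ndc_x + 1.0) * 0.5 * width)
    py = int((1.0 - ndc_y) * 0.5 * height)
    return px, py

# A point straight ahead of the camera lands in the middle of the screen.
print(project_vertex(0.0, 0.0, 5.0))  # (320, 240)
```

A GPU performs this kind of floating-point arithmetic for millions of vertices per frame, which is why it is built around many parallel pipelines rather than one fast core.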
The video BIOS or firmware contains the basic program, which is usually hidden, that governs the video card's operations and provides the instructions that allow the computer and software to interact with the card. It may contain information on the memory timing, operating speeds and voltages of the graphics processor, RAM, and other information. It is sometimes possible to change the BIOS (e.g. to enable factory-locked settings for higher performance), although this is typically only done by video card overclockers and has the potential to irreversibly damage the card.
Video memory may be used for storing other data as well as the screen image, such as the Z-buffer, which manages the depth coordinates in 3D graphics, textures, vertex buffers, and compiled shader programs.
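The Z-buffer's role in managing depth coordinates can be illustrated with a toy sketch (hypothetical buffer sizes and fragment values): each pixel remembers the depth of the nearest fragment drawn so far, and farther fragments are rejected.

```python
# Toy illustration of how a Z-buffer resolves depth: for each pixel,
# a fragment is kept only if it is closer than what is already stored.

def draw_fragment(color_buf, z_buf, x, y, depth, color):
    """Write a fragment only if it is nearer than the stored depth."""
    if depth < z_buf[y][x]:          # smaller depth = closer to the viewer
        z_buf[y][x] = depth
        color_buf[y][x] = color

W, H = 4, 3
color_buf = [["bg"] * W for _ in range(H)]
z_buf = [[float("inf")] * W for _ in range(H)]

draw_fragment(color_buf, z_buf, 1, 1, 0.8, "red")    # far fragment
draw_fragment(color_buf, z_buf, 1, 1, 0.3, "blue")   # nearer: overwrites
draw_fragment(color_buf, z_buf, 1, 1, 0.9, "green")  # farther: rejected

print(color_buf[1][1])  # blue
```

Because this test runs once per fragment, the Z-buffer lives in fast video memory alongside the frame image itself.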
The RAMDAC, or Random Access Memory Digital-to-Analog Converter, converts the digital image signal into the analog signal needed by displays with analog inputs, such as CRT monitors. It combines a small amount of fast RAM, holding the color lookup table, with digital-to-analog converters. Depending on the number of bits used and the RAMDAC's data-transfer rate, the converter can support different display refresh rates. With CRT displays, it is best to work above 75 Hz and never below 60 Hz, in order to minimize flicker. (With LCD displays, flicker is not a problem.) Because of the growing popularity of digital computer displays and the integration of the RAMDAC onto the GPU die, it has mostly disappeared as a discrete component. All current LCDs, plasma displays and TVs work in the digital domain and do not require a RAMDAC. A few remaining legacy LCD and plasma displays feature only analog inputs (VGA, component, SCART, etc.). These require a RAMDAC, but they reconvert the analog signal back to digital before displaying it, with the unavoidable loss of quality stemming from this digital-to-analog-to-digital conversion.
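The link between the RAMDAC's data-transfer rate and the refresh rates it can support comes down to simple arithmetic: the converter must emit one analog sample per pixel, plus extra time for the CRT's blanking intervals. The sketch below assumes an illustrative ~1.3× blanking overhead; the exact factor depends on the video timing standard in use.

```python
# Rough arithmetic linking a RAMDAC's pixel clock to the refresh
# rates it can drive (illustrative ~1.3x blanking overhead assumed).

def max_refresh_hz(pixel_clock_hz, width, height, blanking_overhead=1.3):
    """Highest refresh rate a given pixel clock can sustain."""
    return pixel_clock_hz / (width * height * blanking_overhead)

# A 400 MHz RAMDAC driving a 1600x1200 CRT:
print(round(max_refresh_hz(400e6, 1600, 1200)))  # about 160 Hz
```

This is why higher resolutions forced a trade-off on CRTs: at a fixed pixel clock, more pixels per frame means fewer frames per second.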
Cooling devices: Video cards may use a lot of electricity, which is converted into heat. If the heat isn't dissipated, the video card could overheat and be damaged. Cooling devices are incorporated to transfer the heat elsewhere. Three types of cooling devices are commonly used on video cards:
- Heat sink: a heat sink is a passive cooling device. It conducts heat away from the graphics card's core or memory using a heat-conductive metal (most commonly aluminum or copper), sometimes in combination with heat pipes. The heat is then carried away from the card by air (most common) or, in extreme cooling setups, water. When air is used, a fan is often added to increase cooling effectiveness.
- Computer fan: an example of an active-cooling part. It is usually used with a heat sink. Due to the moving parts, a fan requires maintenance and possible replacement. The fan speed or actual fan can be changed for more efficient or quieter cooling.
- Water block: a water block is a heat sink suited to using water instead of air. It is mounted on the graphics processor and is hollow inside. Water is pumped through the water block, transferring the heat into the water, which is then usually cooled in a radiator. This is the most effective cooling solution short of extreme modification.
Power demand: As the processing power of video cards has increased, so has their demand for electrical power. Today's fast video cards consume a great deal of power. While CPU and power supply makers have recently moved toward higher efficiency, the power demands of GPUs have continued to rise, so the video card may be the biggest electricity user in a computer. Although power supplies have grown more capable too, the bottleneck is the PCI Express connection, which is limited to supplying 75 W. Video cards that consume more than 75 watts therefore usually include six-pin (75 W) and/or eight-pin (150 W) sockets that connect directly to the power supply for supplemental power.
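The power budget described above adds up simply: the slot's 75 W plus whatever the auxiliary sockets provide. A minimal sketch, using the nominal figures from the text:

```python
# Adds up the power a card can draw from the slot plus auxiliary
# connectors (nominal limits from the text; illustrative only).

SLOT_W = 75                              # PCI Express slot limit
CONNECTOR_W = {"6-pin": 75, "8-pin": 150}

def board_power_budget(connectors):
    """Total nominal power available to a card, in watts."""
    return SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

# A card with one 6-pin and one 8-pin socket:
print(board_power_budget(["6-pin", "8-pin"]))  # 300
```

So a card with both socket types can nominally draw up to 300 W, four times what the slot alone supplies.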
Source: Wikipedia (All text is available under the terms of the GNU Free Documentation License and Creative Commons Attribution-ShareAlike License.)