
Saturday, July 17, 2010

memory







Mushkin 1GB 184-Pin DDR SDRAM Memory for Apple Desktop



Type: 184-Pin DDR SDRAM
Compatibility: PowerMac G5 1.6GHz (M9020LL/A), PowerMac G5 1.8GHz (M9031LL/A), PowerMac G5 1.8GHz (M9555LL/A), PowerMac G5 1.8GHz Dual (M9454LL/A), PowerMac G5 2GHz (M9032LL/A), PowerMac G5 2GHz SuperDrive (M9455LL/A), PowerMac G5 2.3GHz (M9748LL/A), PowerMac G5 2.5GHz Dual SuperDrive (M9457LL/A), PowerMac G5 2.7GHz (M9749LL/A)
Voltage: 2.5V
Heat Spreader: No
Parts: Lifetime limited
Labor: Lifetime limited




















G.SKILL 1GB 200-Pin DDR2 SO-DIMM Memory For Apple Notebook



Type: 200-Pin DDR2 SO-DIMM
Compatibility: For Apple Notebook
CAS Latency: 5
Timing: 5-5-5-15
Voltage: 1.8V
Heat Spreader: No
Parts: Lifetime limited
Labor: Lifetime limited




















CORSAIR 2GB 200-Pin DDR2 SO-DIMM Memory For Apple Notebook



Type: 200-Pin DDR2 SO-DIMM
Compatibility: For Apple Notebook
CAS Latency: 5
Timing: 5-5-5-15
Heat Spreader: No
Specifications: Compatible with all Intel-based Apple MacBook, MacBook Pro, iMac (17", 20", 24") and Mac mini systems
Parts: Lifetime limited
Labor: Lifetime limited
















Wednesday, June 23, 2010

POWER SUPPLY



Computer power supply

Power supplies, often referred to as "switching power supplies", use switcher technology to convert the AC input to lower DC voltages. The typical voltages supplied are:

* 3.3 volts
* 5 volts
* 12 volts


The 3.3- and 5-volt outputs are typically used by digital circuits, while the 12-volt output is used to run motors in disk drives and fans. The main specification of a power supply is in watts. A watt is the product of the voltage in volts and the current in amperes (amps). If you have been around PCs for many years, you probably remember that the original PCs had large red toggle switches that had a good bit of heft to them. When you turned the PC on or off, you knew you were doing it. These switches actually controlled the flow of 120-volt power to the power supply.
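
The watts-equals-volts-times-amps relationship is easy to check with a few lines of code. Here is a minimal sketch in Python; the rail names match the list above, but the example current figures are made-up placeholders rather than ratings from any particular supply:

# Hypothetical per-rail loads; power in watts is volts multiplied by amps.
rails = {
    "+3.3V": {"volts": 3.3, "amps": 14.0},
    "+5V":   {"volts": 5.0, "amps": 20.0},
    "+12V":  {"volts": 12.0, "amps": 18.0},
}

total_watts = 0.0
for name, rail in rails.items():
    watts = rail["volts"] * rail["amps"]   # P = V * I
    total_watts += watts
    print(f"{name}: {watts:.1f} W")

print(f"Approximate combined load: {total_watts:.1f} W")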







Sparkle Power Inc. Power SPI220LE Flex ATX & ATX12V Power Supply



Sparkle Power Inc. SPI220LE Flex ATX & ATX12V Power Supply










Today you turn on the power with a little push button, and you turn off the machine with a menu option. These capabilities were added to standard power supplies several years ago. The operating system can send a signal to the power supply to tell it to turn off. The push button sends a 5-volt signal to the power supply to tell it when to turn on. The power supply also has a circuit that supplies 5 volts, called VSB for "standby voltage", even when the unit is officially "off", so that the button will work.

===============================================================

Connectors
Various connectors from a computer PSU.

Typically, power supplies have the following connectors:

* PC main power connector (usually called P1): the connector that goes to the motherboard to provide it with power. The connector has 20 or 24 pins. One of the pins belongs to the PS-ON wire (it is usually green). This connector is the largest of all the connectors. In older AT power supplies, this connector was split in two: P8 and P9. A power supply with a 24-pin connector can be used on a motherboard with a 20-pin connector. In cases where the motherboard has a 24-pin connector, some power supplies come with two connectors (one with 20 pins and the other with 4 pins) which can be used together to form the 24-pin connector.
* ATX12V 4-pin power connector (also called the P4 power connector): a second connector that goes to the motherboard (in addition to the main 24-pin connector) to supply dedicated power for the processor. For high-end motherboards and processors, more power is required, so EPS12V has an 8-pin connector.







Sparkle Power Inc. Power MAGNA 1000 ATX12V & EPS12V Power Supply



Sparkle Power Inc. R-SPI1000GCM Power MAGNA 1000 ATX12V & EPS12V Power Supply









* 4-pin Peripheral power connectors (usually called Molex for its manufacturer): These are the other, smaller connectors that go to the various disk drives of the computer. Most of them have four wires: two black, one red, and one yellow. Unlike the standard mains electrical wire color-coding, each black wire is a ground, the red wire is +5 V, and the yellow wire is +12 V. In some cases these are also used to provide additional power to PCI cards such as FireWire 800 cards.
* 4-pin Berg power connectors (usually called Mini-connector or "mini-Molex"): This is one of the smallest connectors that supplies the floppy drive with power. In some cases, it can be used as an auxiliary connector for AGP video cards. Its cable configuration is similar to the Peripheral connector.







Sparkle Power Inc. Power 350W ATX12V Power Supply



Sparkle Power Inc. ATX-350PN-B204 Power 350W ATX12V Power Supply









* Auxiliary power connectors: There are several types of auxiliary connectors designed to provide additional power if it is needed.
* Serial ATA power connectors: a 15-pin connector for components which use SATA power plugs. This connector supplies power at three different voltages: +3.3, +5, and +12 volts.
* 6-pin: Most modern computer power supplies include 6-pin connectors, which are generally used for PCI Express graphics cards; a newly introduced 8-pin connector should be seen on the latest model power supplies. Each PCI Express 6-pin connector can output a maximum of 75 W.
* 6+2 pin: For backwards compatibility, some connectors designed for use with PCI Express graphics cards feature this kind of pin configuration. It allows either a 6-pin card or an 8-pin card to be connected by using two separate connection modules wired into the same sheath: one with 6 pins and another with 2 pins.
* A C14 IEC connector with an appropriate C13 cord is used to attach the power supply to the local power grid.


Small facts to consider
Redundant power supply.

* Life span is usually measured in mean time between failures (MTBF). Higher MTBF ratings are preferable for longer device life and reliability. Quality construction, consisting of industrial-grade electrical components and/or a larger or higher-speed fan, can contribute to a higher MTBF rating by keeping critical components cool, thus preventing the unit from overheating. Overheating is a major cause of PSU failure. An MTBF value of 100,000 hours is not uncommon.

* Power supplies may have passive or active power factor correction (PFC). Passive PFC is a simple way of increasing the power factor by putting a coil in series with the primary filter capacitors. Active PFC is more complex and can achieve higher PF, up to 99%.

* In computer power supplies that have more than one +12V power rail, it is preferable for stability reasons to spread the power load over the 12V rails evenly to help avoid overloading one of the rails on the power supply.
o Multiple 12V power supply rails are separately current limited as a safety feature; they are not generated separately. Despite widespread belief to the contrary, this separation has no effect on mutual interference between supply rails.
o The ATX12V 2.x and EPS12V power supply standards defer to the IEC 60950 standard, which requires that no more than 240 volt-amps be present between any two accessible points. Thus, at 12 V, each wire must be current-limited to no more than 20 A (240 VA ÷ 12 V); typical supplies guarantee 18 A without triggering the current limit. Power supplies capable of delivering more than 18 A at 12 V connect wires in groups to two or more current sensors which will shut down the supply if excess current flows. Unlike a fuse or circuit breaker, these limits reset as soon as the overload is removed.
o Because of the above standards, almost all high-power supplies claim to implement separate rails, however this claim is often false; many omit the necessary current-limit circuitry,[5] both for cost reasons and because it is an irritation to customers.[1] (The lack is sometimes advertised as a feature under names like "rail fusion" or "current sharing".)

* When the computer is powered down but the power supply is still on, it can be started remotely via Wake-on-LAN and Wake-on-Ring or locally via Keyboard Power ON (KBPO) if the motherboard supports it.

* Early PSUs used a conventional (heavy) step-down transformer, but most modern computer power supplies are a type of switched-mode power supply (SMPS) with a ferrite-cored high-frequency transformer.

* Computer power supplies may have short circuit protection, overpower (overload) protection, overvoltage protection, undervoltage protection, overcurrent protection, and over temperature protection.

* Some power supplies come with sleeved cables, which are aesthetically nicer, make wiring easier and cleaner, and have a less detrimental effect on airflow.






Sparkle Power Inc. Power 250W ATX12V Power Supply



Sparkle Power Inc. FSP250-60SPV-B Power 250W ATX12V Power Supply











* There is a popular misconception that a greater power capacity (watt output capacity) is always better. Since supplies are self-certified, a manufacturer's claims may be double or more what is actually provided.[6][7] Although an oversized power supply provides an extra margin of safety against overloading, a larger unit is often less efficient at lower loads (under 20% of its total capability) and therefore will waste more electricity than a more appropriately sized unit. Additionally, computer power supplies generally do not function properly if they are too lightly loaded. Under no-load conditions they may shut down or malfunction.

* Another popular misconception is that a greater total watt capacity makes the power supply more suitable for higher-end graphics cards. The most important factor for judging a PSU's suitability for a given graphics card is the PSU's total 12 V output, as that is the voltage on which modern graphics cards operate. If the total 12 V output stated on the PSU is higher than the suggested minimum for the card, then that PSU can fully supply the card. It is, however, recommended that a PSU not merely cover the graphics card's demands, as other components in the PC also depend on the 12 V output, including the CPU and disk drives. (A rough sizing check is sketched in Python after this list.)

* Power supplies can feature magnetic amplifiers or double-forward converter circuit design.
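
Here is the rough +12 V sizing check referred to above, as a minimal Python sketch. The function name and all amperage figures are made-up placeholders; real cards and supplies state their +12 V ratings in their specifications:

def psu_covers_gpu(psu_12v_amps, gpu_min_12v_amps, other_12v_amps=0.0):
    # True if the PSU's total +12 V output covers the graphics card plus the
    # other +12 V consumers (CPU, disk drives, fans).
    return psu_12v_amps >= gpu_min_12v_amps + other_12v_amps

# Hypothetical numbers: a PSU rated for 40 A on +12 V, a card asking for 26 A,
# and roughly 10 A reserved for the CPU and drives.
print(psu_covers_gpu(40.0, 26.0, other_12v_amps=10.0))  # True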






1000 Watts EZ Plug Power Supply



Patented ATX 20+4 slide-in socket; meets the power needs of PCIe, Intel Pentium 4 multi-processor server, AMD server, and dual-core main boards.















Zumax 400W ATX power supply Dual-Fan



Extremely reliable, long lifetime, and compatible with Intel Pentium and AMD processors. Low ripple/noise ensures stable system performance. ATX 2.0, dual cooling fans, 20+4-pin mainboard connector, 4-pin CP


















1200W 3U 2+1 Redundant Power Supply



1200W 3U 2+1 Redundant Power Supply. ATX and ATX12V standard, 24-pin. Dimensions: 101 (W) x 125 (H) x 300 (L) mm


















12VDC to 120VAC 150W Bottle Mount AC Mobile Power Inverter/Converter



Simply plug into your auto/boat cigarette lighter. Great for laptops, DVD players, phones, and more. Swivel mount. Converts 12 V DC automobile power to standard 120 V AC home power - up to 150 watts.


















EZ-Rack 3.5 SATA Hard Disk Mobile Rack



Hot-swappable. Long-lasting NSS (Non-Scratch SATA) connector rated for up to 10,000 insert/eject cycles, Lab House certified. Mystic Black case with a high-efficiency cooling fan.











Monday, June 14, 2010

power

POWER IS NOTHING WITHOUT CONTROL


Friday, May 28, 2010

eye







Phenom



Phenom with AMBeR lens tint and GRAPHITE frame color - Finished to exacting specifications and machine polished with high velocity micro particulates, the engineering behind PHENOM is only outdone by one thing... its artistic style. Micro-engineered lenslocks add both flexibility and flair. An exercise in creative contrast.











Tuesday, May 25, 2010

syntax


syntax

In computer science, the syntax of a programming language is the set of rules that define the combinations of symbols that are considered to be correctly structured programs in that language. The syntax of a language defines its surface form.[1] Text-based programming languages are based on sequences of characters, while visual programming languages are based on the spatial layout and connections between symbols (which may be textual or graphical).

The lexical grammar of a textual language specifies how characters must be chunked into tokens. Other syntax rules specify the permissible sequences of these tokens; the process of assigning meaning to these token sequences is part of semantics.

The syntactic analysis of source code usually entails the transformation of the linear sequence of tokens into a hierarchical syntax tree (abstract syntax trees are one convenient form of syntax tree). This process is called parsing, as it is in syntactic analysis in linguistics. Tools have been written that automatically generate parsers from a specification of a language grammar written in Backus-Naur form, e.g., Yacc (yet another compiler compiler).


Syntax definition
Parse tree of Python code with inset tokenization

The syntax of textual programming languages is usually defined using a combination of regular expressions (for lexical structure) and Backus-Naur Form (for grammatical structure) to inductively specify syntactic categories (nonterminals) and terminal symbols. Syntactic categories are defined by rules called productions, which specify the values that belong to a particular syntactic category.[1] Terminal symbols are the concrete characters or strings of characters (for example, keywords such as define, if, let, or void) from which syntactically valid programs are constructed.

Below is a simple grammar, based on Lisp, which defines productions for the syntactic categories expression, atom, number, symbol, and list:

expression ::= atom | list
atom ::= number | symbol
number ::= [+-]?['0'-'9']+
symbol ::= ['A'-'Z''a'-'z'].*
list ::= '(' expression* ')'


This grammar specifies the following:

* an expression is either an atom or a list;
* an atom is either a number or a symbol;
* a number is an unbroken sequence of one or more decimal digits, optionally preceded by a plus or minus sign;
* a symbol is a letter followed by zero or more of any characters (excluding whitespace); and
* a list is a matched pair of parentheses, with zero or more expressions inside it.

Here the decimal digits, upper- and lower-case characters, and parentheses are terminal symbols.

The following are examples of well-formed token sequences in this grammar: '12345', '()', '(a b c232 (1))'
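
To make the grammar concrete, here is a minimal sketch of a tokenizer and recursive-descent parser for it in Python. The function names, and the choice to treat parentheses as delimiters inside symbols, are illustrative decisions rather than part of the grammar as stated:

import re

# One capturing group per token kind, following the grammar:
# number, symbol, opening and closing parenthesis.
TOKEN_RE = re.compile(r"\s*(?:([+-]?[0-9]+)|([A-Za-z][^\s()]*)|(\()|(\)))")

def tokenize(text):
    # Split the input into (kind, value) pairs.
    tokens, pos = [], 0
    while pos < len(text):
        match = TOKEN_RE.match(text, pos)
        if not match:
            break  # trailing whitespace or an unexpected character
        number, symbol, lparen, rparen = match.groups()
        if number:
            tokens.append(("number", number))
        elif symbol:
            tokens.append(("symbol", symbol))
        elif lparen:
            tokens.append(("lparen", "("))
        else:
            tokens.append(("rparen", ")"))
        pos = match.end()
    return tokens

def parse_expression(tokens, i=0):
    # expression ::= atom | list
    kind, value = tokens[i]
    if kind in ("number", "symbol"):      # atom ::= number | symbol
        return value, i + 1
    if kind == "lparen":                  # list ::= '(' expression* ')'
        items, i = [], i + 1
        while tokens[i][0] != "rparen":
            item, i = parse_expression(tokens, i)
            items.append(item)
        return items, i + 1               # consume the closing ')'
    raise SyntaxError("unexpected token: " + value)

print(parse_expression(tokenize("(a b c232 (1))"))[0])
# -> ['a', 'b', 'c232', ['1']]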

The grammar needed to specify a programming language can be classified by its position in the Chomsky hierarchy. The syntax of most programming languages can be specified using a Type-2 grammar, i.e., they are context-free grammars.[2] However, there are exceptions. In some languages like Perl and Lisp the specification (or implementation) of the language allows constructs that execute during the parsing phase. Furthermore, these languages have constructs that allow the programmer to alter the behavior of the parser. This combination effectively blurs the distinction between parsing and execution, and makes syntax analysis an undecidable problem in these languages, meaning that the parsing phase may not finish. For example, in Perl it is possible to execute code during parsing using a BEGIN statement, and Perl function prototypes may alter the syntactic interpretation, and possibly even the syntactic validity of the remaining code.[3] Similarly, Lisp macros introduced by the defmacro syntax also execute during parsing, meaning that a Lisp compiler must have an entire Lisp run-time system present. In contrast C macros are merely string replacements, and do not require code execution.[4][5]

Syntax versus semantics

The syntax of a language describes the form of a valid program, but does not provide any information about the meaning of the program or the results of executing that program. The meaning given to a combination of symbols is handled by semantics (either formal or hard-coded in a reference implementation). Not all syntactically correct programs are semantically correct. Many syntactically correct programs are nonetheless ill-formed per the language's rules, and may (depending on the language specification and the soundness of the implementation) result in an error on translation or execution. In some cases, such programs may exhibit undefined behavior. Even when a program is well-defined within a language, it may still have a meaning that is not intended by the person who wrote it.

Using natural language as an example, it may not be possible to assign a meaning to a grammatically correct sentence, or the sentence may be false:

* "Colorless green ideas sleep furiously." is grammatically well-formed but has no generally accepted meaning.
* "John is a married bachelor." is grammatically well-formed but expresses a meaning that cannot be true.

The following C language fragment is syntactically correct, but performs an operation that is not semantically defined (because p is a null pointer, the operations p->real and p->im have no meaning):

complex *p = NULL;
complex abs_p = sqrt (p->real * p->real + p->im * p->im);

Wednesday, May 19, 2010

raid

RAID

From Wikipedia, the free encyclopedia

RAID, an acronym for redundant array of independent disks (also known as redundant array of inexpensive disks), is a technology that allows high levels of storage reliability to be achieved from low-cost and less reliable PC-class disk-drive components, via the technique of arranging the devices into arrays for redundancy. The concept was first defined by David A. Patterson, Garth A. Gibson, and Randy Katz at the University of California, Berkeley in 1987 as redundant array of inexpensive disks.[1] Marketers representing industry RAID manufacturers later reinvented the term to describe a redundant array of independent disks, as a means of dissociating a low-cost expectation from RAID technology.[2]

RAID is now used as an umbrella term for computer data storage schemes that can divide and replicate data among multiple hard disk drives. The different schemes/architectures are named by the word RAID followed by a number, as in RAID 0, RAID 1, etc. RAID's various designs involve two key design goals: increased data reliability and/or increased input/output performance. When multiple physical disks are set up to use RAID technology, they are said to be in a RAID array.[3] This array distributes data across multiple disks, but the array is seen by the computer user and operating system as one single disk. RAID can be set up to serve several different purposes.

[*] Principles

RAID combines two or more physical hard disks into a single logical unit using special hardware or software. Hardware solutions are often designed to present themselves to the attached system as a single hard drive, so that the operating system would be unaware of the technical workings. For example, if one were to configure a hardware-based RAID-5 volume using three 250 GB hard drives (two drives for data, and one for parity), the operating system would be presented with a single 500 GB volume. Software solutions are typically implemented in the operating system and would present the RAID volume as a single drive to applications running within the operating system.

There are three key concepts in RAID: mirroring, the writing of identical data to more than one disk; striping, the splitting of data across more than one disk; and error correction, where redundant parity data is stored to allow problems to be detected and possibly repaired (known as fault tolerance). Different RAID schemes use one or more of these techniques, depending on the system requirements. The purpose of using RAID is to improve reliability and availability of data, ensuring that important data is not harmed in case of hardware failure, and/or to increase the speed of file input/output.

Each RAID scheme affects reliability and performance in different ways. Every additional disk included in an array increases the likelihood that one will fail, but by using error checking and/or mirroring, the array as a whole can be made more reliable by the ability to survive and recover from a failure. Basic mirroring can speed up the reading of data, as a system can read different data from multiple disks at the same time, but it may be slow for writing if the configuration requires that all disks confirm that the data is correctly written. Striping, often used for increasing performance, writes successive segments of data to different disks, allowing the data to be reconstructed from multiple disks faster than a single disk could send the same data. Error checking typically will slow down performance, as data needs to be read from multiple places and then compared. The design of any RAID scheme is often a compromise in one or more respects, and understanding the requirements of a system is important. Modern disk arrays typically provide the facility to select an appropriate RAID configuration.

[*] Organization

Organizing disks into a redundant array decreases the usable storage capacity. For instance, a 2-disk RAID 1 array loses half of the total capacity that would have otherwise been available using both disks independently, and a RAID 5 array with several disks loses the capacity of one disk. Other types of RAID arrays are arranged, for example, so that they are faster to write to and read from than a single disk.
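
Those capacity rules are easy to express in a few lines of code. The following minimal sketch in Python assumes equal-sized disks and only covers the levels discussed in this article; the function name is illustrative:

def usable_capacity_gb(level, disk_count, disk_size_gb):
    # Usable space for equal-sized disks, using the per-level space
    # efficiency quoted later in this article (n, 1, n-1, and n-2 disks).
    if level == 0:
        data_disks = disk_count          # striping only, no redundancy
    elif level == 1:
        data_disks = 1                   # every disk holds a full copy
    elif level in (3, 4, 5):
        data_disks = disk_count - 1      # one disk's worth of parity
    elif level == 6:
        data_disks = disk_count - 2      # two disks' worth of parity
    else:
        raise ValueError("RAID level not covered in this sketch")
    return data_disks * disk_size_gb

print(usable_capacity_gb(1, 2, 250))     # 250: a 2-disk mirror keeps half
print(usable_capacity_gb(5, 5, 250))     # 1000: RAID 5 loses one disk's capacity
print(usable_capacity_gb(6, 12, 1000))   # 10000: twelve 1 TB disks give 10 TB usable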

There are various combinations of these approaches giving different trade-offs of protection against data loss, capacity, and speed. RAID levels 0, 1, and 5 are the most commonly found, and cover most requirements.

[*] RAID 0

RAID 0 (striped disks) distributes data across multiple disks in a way that gives improved speed at any given instant. If one disk fails, however, all of the data on the array will be lost, as there is neither parity nor mirroring. In this regard, RAID 0 is something of a misnomer, because RAID 0 is non-redundant. A RAID 0 array requires a minimum of two drives. A RAID 0 configuration can be applied to a single drive provided that the RAID controller is hardware-based rather than software-based (i.e., not an OS-based array) and allows for such a configuration. This allows a single drive to be added to a controller already containing another RAID configuration when the user does not wish to add the additional drive to the existing array. In this case, the controller would be set up as RAID only (as opposed to SCSI in non-RAID configuration), which requires that each individual drive be a part of some sort of RAID array.

[*] RAID 1

RAID 1 mirrors the contents of the disks, making a form of 1:1 ratio real time mirroring. The contents of each disk in the array are identical to that of every other disk in the array. A RAID 1 array requires a minimum of two drives.

[*] RAID 3, RAID 4

RAID 3 or 4 (striped disks with dedicated parity) combines three or more disks in a way that protects data against loss of any one disk. Fault tolerance is achieved by adding an extra disk to the array, which is dedicated to storing parity information; the overall capacity of the array is reduced by one disk. A RAID 3 or 4 array requires a minimum of three drives: two to hold striped data, and a third for parity. With the minimum three drives needed for RAID 3, the storage efficiency is 66 percent. With six drives, the storage efficiency is 83 percent.

[*] RAID 5

Striped set with distributed parity or interleave parity requiring 3 or more disks. Distributed parity requires all drives but one to be present to operate; drive failure requires replacement, but the array is not destroyed by a single drive failure. Upon drive failure, any subsequent reads can be calculated from the distributed parity such that the drive failure is masked from the end user. The array will have data loss in the event of a second drive failure and is vulnerable until the data that was on the failed drive is rebuilt onto a replacement drive. A single drive failure in the set will result in reduced performance of the entire set until the failed drive has been replaced and rebuilt.

[*] RAID 6

RAID 6 (striped disks with dual parity) combines four or more disks in a way that protects data against loss of any two disks. For example, if the goal is to create 10x1TB of usable space in a RAID 6 configuration, we need two additional disks for the parity data.

[*] RAID 10

RAID 1+0 (or 10) is a mirrored data set (RAID 1) which is then striped (RAID 0), hence the "1+0" name. A RAID 1+0 array requires a minimum of four drives – two mirrored drives to hold half of the striped data, plus another two mirrored for the other half of the data. In Linux, MD RAID 10 is a non-nested RAID type like RAID 1 that only requires a minimum of two drives and may give read performance on the level of RAID 0.

[*] RAID 01

RAID 0+1 (or 01) is a striped data set (RAID 0) which is then mirrored (RAID 1). A RAID 0+1 array requires a minimum of four drives: two to hold the striped data, plus another two to mirror the first pair.

[*] Hardware/Software

RAID can involve significant computation when reading and writing information. With traditional "real" RAID hardware, a separate controller does this computation. In other cases the operating system or simpler and less expensive controllers require the host computer's processor to do the computing, which reduces the computer's performance on processor-intensive tasks (see Operating system based ("software RAID") and Firmware/driver-based RAID below). Simpler RAID controllers may provide only levels 0 and 1, which require less processing.

RAID systems with redundancy continue working without interruption when one (or possibly more, depending on the type of RAID) disks of the array fail, although they are then vulnerable to further failures. When the bad disk is replaced by a new one the array is rebuilt while the system continues to operate normally. Some systems have to be powered down when removing or adding a drive; others support hot swapping, allowing drives to be replaced without powering down. RAID with hot-swapping is often used in high availability systems, where it is important that the system remains running as much of the time as possible.

Note that a RAID controller itself can become the single point of failure within a system.

[*] Standard levels

Main article: Standard RAID levels

A number of standard schemes have evolved which are referred to as levels. There were five RAID levels originally conceived, but many more variations have evolved, notably several nested levels and many non-standard levels (mostly proprietary).

Following is a brief summary of the most commonly used RAID levels.[4] Space efficiency is given as the amount of storage space available in an array of n disks, in multiples of the capacity of a single drive. For example, if an array holds n = 5 drives of 250 GB each and the efficiency is n-1, then the available space is 4 × 250 GB, or roughly 1 TB.

RAID 0

Striped set without parity or mirroring. Provides improved performance and additional storage but no redundancy or fault tolerance. Because there is no redundancy, this level is not actually a Redundant Array of Independent Disks, i.e. not true RAID. However, because of the similarities to RAID (especially the need for a controller to distribute data across multiple disks), simple stripe sets are normally referred to as RAID 0. Any disk failure destroys the array, which has greater consequences with more disks in the array (at a minimum, catastrophic data loss is twice as severe compared to single drives without RAID). A single disk failure destroys the entire array because when data is written to a RAID 0 drive, the data is broken into fragments. The number of fragments is dictated by the number of disks in the array. The fragments are written to their respective disks simultaneously on the same sector. This allows smaller sections of the entire chunk of data to be read off the drive in parallel, increasing bandwidth. RAID 0 does not implement error checking, so any error is unrecoverable. More disks in the array means higher bandwidth, but greater risk of data loss.

Minimum # of disks: 2
Space efficiency: n
Fault tolerance: 0 (none)

RAID 1

Mirrored set without parity or striping. Provides fault tolerance from disk errors and failure of all but one of the drives. Increased read performance occurs when using a multi-threaded operating system that supports split seeks, as well as a very small performance reduction when writing. The array continues to operate so long as at least one drive is functioning. Using RAID 1 with a separate controller for each disk is sometimes called duplexing.

Minimum # of disks: 2
Space efficiency: 1 (size of the smallest disk)
Fault tolerance: n-1 disks

RAID 2

Hamming-code parity. Disks are synchronized and striped in very small stripes, often in single bytes/words. Hamming-code error correction is calculated across corresponding bits on the disks and is stored on multiple parity disks.

Minimum # of disks: 3

RAID 3

Striped set with dedicated parity, bit-interleaved parity, or byte-level parity. This mechanism provides fault tolerance similar to RAID 5. However, because the stripe across the disks is much smaller than a filesystem block, reads and writes to the array perform like a single drive with high linear write performance. For this to work properly, the drives must have synchronised rotation. If one drive fails, performance is not affected.

Minimum # of disks: 3
Space efficiency: n-1
Fault tolerance: 1 disk

RAID 4

Block-level parity. Identical to RAID 3, but does block-level striping instead of byte-level striping. In this setup, files can be distributed between multiple disks. Each disk operates independently, which allows I/O requests to be performed in parallel, though data transfer speeds can suffer due to the type of parity. The error detection is achieved through dedicated parity and is stored on a separate, single disk unit.

Minimum # of disks: 3
Space efficiency: n-1
Fault tolerance: 1 disk

RAID 5

Striped set with distributed parity or interleave parity. Distributed parity requires all drives but one to be present to operate; drive failure requires replacement, but the array is not destroyed by a single drive failure. Upon drive failure, any subsequent reads can be calculated from the distributed parity such that the drive failure is masked from the end user. The array will have data loss in the event of a second drive failure and is vulnerable until the data that was on the failed drive is rebuilt onto a replacement drive. A single drive failure in the set will result in reduced performance of the entire set until the failed drive has been replaced and rebuilt.

Minimum # of disks: 3
Space efficiency: n-1
Fault tolerance: 1 disk

RAID 6

Striped set with dual distributed parity. Provides fault tolerance from two drive failures; the array continues to operate with up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems. This becomes increasingly important because large-capacity drives lengthen the time needed to recover from the failure of a single drive. Single-parity RAID levels are vulnerable to data loss until the failed drive is rebuilt: the larger the drive, the longer the rebuild will take. Dual parity gives time to rebuild the array without the data being at risk if a (single) additional drive fails before the rebuild is complete.

Minimum # of disks: 4
Space efficiency: n-2
Fault tolerance: 2 disks

[*] Nested (hybrid) RAID

Main article: Nested RAID levels

In what was originally termed hybrid RAID,[5] many storage controllers allow RAID levels to be nested. The elements of a RAID may be either individual disks or RAIDs themselves. Nesting more than two deep is unusual.

As there is no basic RAID level numbered larger than 9, nested RAIDs are usually unambiguously described by concatenating the numbers indicating the RAID levels, sometimes with a "+" in between. For example, RAID 10 (or RAID 1+0) consists of several level 1 arrays of physical drives, each of which is one of the "drives" of a level 0 array striped over the level 1 arrays. It is not called RAID 01, to avoid confusion with RAID 0+1. When the top array is a RAID 0 (such as in RAID 10 and RAID 50) most vendors omit the "+", though RAID 5+0 is clearer.

The key difference from RAID 0+1 is that RAID 1+0 creates a striped set from a series of mirrored drives. In a failed disk situation, RAID 1+0 performs better because all the remaining disks continue to be used. The array can sustain multiple drive losses so long as no mirror loses all its drives.

[*] New RAID classification

In 1996, the RAID Advisory Board introduced an improved classification of RAID systems. It divides RAID into three types: Failure-resistant disk systems (that protect against data loss due to disk failure), failure-tolerant disk systems (that protect against loss of data access due to failure of any single component), and disaster-tolerant disk systems (that consist of two or more independent zones, either of which provides access to stored data).

The original "Berkeley" RAID classifications are still kept as an important historical reference point and also to recognize that RAID Levels 0-6 successfully define all known data mapping and protection schemes for disk. Unfortunately, the original classification caused some confusion due to assumption that higher RAID levels imply higher redundancy and performance. This confusion was exploited by RAID system manufacturers, and gave birth to the products with such names as RAID-7, RAID-10, RAID-30, RAID-S, etc. The new system describes the data availability characteristics of the RAID system rather than the details of its implementation.

The next list provides criteria for all three classes of RAID:

- Failure-resistant disk systems (FRDS) (meets a minimum of criteria 1 - 6):

1. Protection against data loss and loss of access to data due to disk drive failure
2. Reconstruction of failed drive content to a replacement drive
3. Protection against data loss due to a "write hole"
4. Protection against data loss due to host and host I/O bus failure
5. Protection against data loss due to replaceable unit failure
6. Replaceable unit monitoring and failure indication

- Failure-tolerant disk systems (FTDS) (meets a minimum of criteria 7 - 15 ):

7. Disk automatic swap and hot swap
8. Protection against data loss due to cache failure
9. Protection against data loss due to external power failure
10. Protection against data loss due to a temperature out of operating range
11. Replaceable unit and environmental failure warning
12. Protection against loss of access to data due to device channel failure
13. Protection against loss of access to data due to controller module failure
14. Protection against loss of access to data due to cache failure
15. Protection against loss of access to data due to power supply failure

- Disaster-tolerant disk systems (DTDS) (meets a minimum of criteria 16 - 21):

16. Protection against loss of access to data due to host and host I/O bus failure
17. Protection against loss of access to data due to external power failure
18. Protection against loss of access to data due to component replacement
19. Protection against loss of data and loss of access to data due to multiple disk failure
20. Protection against loss of access to data due to zone failure
21. Long-distance protection against loss of data due to zone failure

[*] Non-standard levels

Main article: Non-standard RAID levels

Many configurations other than the basic numbered RAID levels are possible, and many companies, organizations, and groups have created their own non-standard configurations, in many cases designed to meet the specialised needs of a small niche group. Most of these non-standard RAID levels are proprietary.

[*] Parity calculation; rebuilding failed drives

Parity data in a RAID environment is calculated using the Boolean XOR function. For example, here is a simple RAID 4 three-disk setup consisting of two drives that hold 8 bits of data each and a third drive that will be used to hold parity data.

Drive 1: 01101101
Drive 2: 11010100


To calculate the parity data for the two drives, an XOR is performed on their data:
i.e. 01101101 XOR 11010100 = 10111001

The resulting parity data, 10111001, is then stored on Drive 3, the dedicated parity drive.

Should any of the three drives fail, the contents of the failed drive can be reconstructed on a replacement (or "hot spare") drive by subjecting the data from the remaining drives to the same XOR operation. If Drive 2 were to fail, its data could be rebuilt using the XOR results of the contents of the two remaining drives, Drive 3 and Drive 1:

Drive 3: 10111001
Drive 1: 01101101

i.e. 10111001 XOR 01101101 = 11010100

The result of that XOR calculation yields Drive 2's contents. 11010100 is then stored on Drive 2, fully repairing the array. This same XOR concept applies similarly to larger arrays, using any number of disks. In the case of a RAID 3 array of 12 drives, 11 drives participate in the XOR calculation shown above and yield a value that is then stored on the dedicated parity drive.

Another way of saying this is:

Basically, all the 3rd (recovery) drive is doing is telling you whether the data on the first two drives is equal or not, 0 being yes and 1 being no. Therefore, pretend that the second drive is missing. Drive 1 says 1, and Drive 3 says that Drives 1 and 2 are not the same value (1), so Drive 2 must be a 0. If Drive 3 said they were the same value (0), then Drive 2 must also be a 1. That's how it's recalculated. It also shows why the number of missing drives can't exceed the number of recovery volumes. If Drives 1 and 2 were both missing, and Drive 3 said 0 (they're the same), it still wouldn't know whether they're both 0s or both 1s. If Drive 3 said 1 (they're different), you wouldn't know whether Drive 1 was 0 and Drive 2 was 1, or the other way around.
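
Here is a minimal sketch of that XOR parity calculation and rebuild in Python, using the same bit patterns as the example above. Drive contents are modeled as byte strings, and the function name is illustrative:

from functools import reduce

def xor_blocks(*blocks):
    # XOR corresponding bytes across equally sized blocks. The same routine
    # computes the parity block and rebuilds a missing block from the survivors.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

drive1 = bytes([0b01101101])
drive2 = bytes([0b11010100])

parity = xor_blocks(drive1, drive2)          # stored on the dedicated parity drive
print(f"{parity[0]:08b}")                    # 10111001

rebuilt_drive2 = xor_blocks(parity, drive1)  # reconstruct drive 2 after a failure
print(f"{rebuilt_drive2[0]:08b}")            # 11010100, drive 2's original contents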

[*] RAID is not data backup

A RAID system used as a main drive is not a replacement for backing up data. Data may become damaged or destroyed without harm to the drive(s) on which they are stored. For example, some of the data may be overwritten by a system malfunction; a file may be damaged or deleted by user error or malice and not noticed for days or weeks. RAID can also be overwhelmed by catastrophic failure that exceeds its recovery capacity and, of course, the entire array is at risk of physical damage by fire, natural disaster, or human forces. RAID is also vulnerable to controller failure since it is not always possible to migrate a RAID to a new controller without data loss [9].

RAID drives can make excellent backup drives, when employed as backup devices to main storage, and particularly when located offsite from the main systems. However, the use of RAID as the main storage solution cannot replace backups.

[*] Implementations

(Specifically, the section comparing hardware / software raid)

The distribution of data across multiple drives can be managed either by dedicated hardware or by software. When done in software the software may be part of the operating system or it may be part of the firmware and drivers supplied with the card.

[*] Operating system based ("software RAID")

Software implementations are now provided by many operating systems. A software layer sits above the (generally block-based) disk device drivers and provides an abstraction layer between the logical drives (RAIDs) and physical drives. The most common levels are RAID 0 (striping across multiple drives for increased space and performance) and RAID 1 (mirroring two drives), followed by RAID 1+0, RAID 0+1, and RAID 5 (data striping with parity).

Software RAID has advantages and disadvantages compared to hardware RAID. The software must run on a host server attached to storage, and the server's processor must dedicate processing time to run the RAID software. The additional processing capacity required for RAID 0 and RAID 1 is low, but parity-based arrays require more complex data processing during write or integrity-checking operations. As the rate of data processing increases with the number of disks in the array, so does the processing requirement. Furthermore, all the buses between the processor and the disk controller must carry the extra data required by RAID, which may cause congestion.

Over the history of hard disk drives, the increase in speed of commodity CPUs has been consistently greater than the increase in speed of hard disk drive throughput.[18] Thus, over time, for a given number of hard disk drives, the percentage of host CPU time required to saturate them has been dropping. For example, the Linux software md RAID subsystem is capable of calculating parity information at 6 GB/s (100% usage of a single core on a 2.1 GHz Intel "Core2" CPU as of Linux v2.6.26). A three-drive RAID 5 array using hard disks capable of sustaining a write of 100 MB/s will require parity to be calculated at a rate of 200 MB/s, which requires the resources of just over 3% of a single CPU core during write operations (parity does not need to be calculated for read operations on a RAID 5 array, unless a drive has failed).

Software RAID implementations may employ more sophisticated algorithms than hardware RAID implementations (for instance with respect to disk scheduling and command queueing), and thus may be capable of increased performance.

Another concern with operating system-based RAID is the boot process. It can be difficult or impossible to set up the boot process such that it can fail over to another drive if the usual boot drive fails. Such systems can require manual intervention to make the machine bootable again after a failure. There are exceptions to this, such as the LILO bootloader for Linux, the loader for FreeBSD,[19] and some configurations of the GRUB bootloader, which natively understand RAID-1 and can load a kernel. If the BIOS recognizes a broken first disk and refers bootstrapping to the next disk, such a system will come up without intervention, but the BIOS might or might not do that as intended. A hardware RAID controller typically has explicit programming to decide that a disk is broken and fall through to the next disk.

Hardware RAID controllers can also carry battery-backed cache memory. For data safety in modern systems, the user of software RAID might need to turn off the write-back cache on each disk (although some drives have their own battery or capacitors for the write-back cache, a UPS may be present, or atomicity may be implemented in various ways). Turning off the write cache has a performance penalty that can be significant, depending on the workload and how well command queuing in the disk system is supported. The battery-backed cache on a RAID controller is one solution for providing a safe write-back cache.

Finally operating system-based RAID usually uses formats specific to the operating system in question so it cannot generally be used for partitions that are shared between operating systems as part of a multi-boot setup. However, this allows RAID disks to be moved from one computer to a computer with an operating system or file system of the same type, which can be more difficult when using hardware RAID (e.g. #1: When one computer uses a hardware RAID controller from one manufacturer and another computer uses a controller from a different manufacturer, drives typically cannot be interchanged. e.g. #2: If the hardware controller 'dies' before the disks do, data may become unrecoverable unless a hardware controller of the same type is obtained, unlike with firmware-based or software-based RAID).

Most operating system-based implementations allow RAIDs to be created from partitions rather than entire physical drives. For instance, an administrator could divide an odd number of disks into two partitions per disk, mirror partitions across disks and stripe a volume across the mirrored partitions to emulate IBM's RAID 1E configuration. Using partitions in this way also allows mixing reliability levels on the same set of disks. For example, one could have a very robust RAID 1 partition for important files, and a less robust RAID 5 or RAID 0 partition for less important data. (Some BIOS-based controllers offer similar features, e.g. Intel Matrix RAID.) Using two partitions on the same drive in the same RAID is, however, dangerous. (e.g. #1: Having all partitions of a RAID-1 on the same drive will, obviously, make all the data inaccessible if the single drive fails. e.g. #2: In a RAID 5 array composed of four drives 250 + 250 + 250 + 500 GB, with the 500-GB drive split into two 250 GB partitions, a failure of this drive will remove two partitions from the array, causing all of the data held on it to be lost).

[*] Hardware-based

Hardware RAID controllers use different, proprietary disk layouts, so it is not usually possible to span controllers from different manufacturers. They do not require processor resources, the BIOS can boot from them, and tighter integration with the device driver may offer better error handling.

A hardware implementation of RAID requires at least a special-purpose RAID controller. On a desktop system this may be a PCI expansion card, PCI-e expansion card or built into the motherboard. Controllers supporting most types of drive may be used – IDE/ATA, SATA, SCSI, SSA, Fibre Channel, sometimes even a combination. The controller and disks may be in a stand-alone disk enclosure, rather than inside a computer. The enclosure may be directly attached to a computer, or connected via SAN. The controller hardware handles the management of the drives, and performs any parity calculations required by the chosen RAID level.

Most hardware implementations provide a read/write cache, which, depending on the I/O workload, will improve performance. In most systems the write cache is non-volatile (i.e. battery-protected), so pending writes are not lost on a power failure.

Hardware implementations provide guaranteed performance, add no overhead to the local CPU complex and can support many operating systems, as the controller simply presents a logical disk to the operating system.

Hardware implementations also typically support hot swapping, allowing failed drives to be replaced while the system is running.

However, hardware RAID controllers are often slower than software RAID because the dedicated processor on the controller card is not as fast as the host computer's CPU. More expensive RAID controllers have faster processors. If you buy a hardware RAID controller, check the specifications and look at throughput speed.

[*] Firmware/driver-based RAID ("FakeRAID")

Operating system-based RAID doesn't always protect the boot process and is generally impractical on desktop versions of Windows (as described above). Hardware RAID controllers are expensive and proprietary. To fill this gap, cheap "RAID controllers" were introduced that do not contain a RAID controller chip, but simply a standard disk controller chip with special firmware and drivers. During early stage bootup the RAID is implemented by the firmware; when a protected-mode operating system kernel such as Linux or a modern version of Microsoft Windows is loaded the drivers take over.

These controllers are described by their manufacturers as RAID controllers, and it is rarely made clear to purchasers that the burden of RAID processing is borne by the host computer's central processing unit, not the RAID controller itself, thus introducing the aforementioned CPU overhead from which hardware controllers don't suffer. Firmware controllers often can only use certain types of hard drives in their RAID arrays (e.g. SATA for Intel Matrix RAID), as there is neither SCSI nor PATA support in modern Intel ICH southbridges; however, motherboard makers implement RAID controllers outside of the southbridge on some motherboards. Before their introduction, a "RAID controller" implied that the controller did the processing, and the new type has become known by some as "fake RAID" even though the RAID itself is implemented correctly. Adaptec calls them "HostRAID".

[*] Network-attached storage

Main article: Network-attached storage

While not directly associated with RAID, Network-attached storage (NAS) is an enclosure containing disk drives and the equipment necessary to make them available over a computer network, usually Ethernet. The enclosure is basically a dedicated computer in its own right, designed to operate over the network without screen or keyboard. It contains one or more disk drives; multiple drives may be configured as a RAID.

[*] Hot spares

Both hardware and software RAIDs with redundancy may support the use of hot spare drives, a drive physically installed in the array which is inactive until an active drive fails, when the system automatically replaces the failed drive with the spare, rebuilding the array with the spare drive included. This reduces the mean time to recovery (MTTR), though it doesn't eliminate it completely. Subsequent additional failure(s) in the same RAID redundancy group before the array is fully rebuilt can result in loss of the data; rebuilding can take several hours, especially on busy systems.

Rapid replacement of failed drives is important as the drives of an array will all have had the same amount of use, and may tend to fail at about the same time rather than randomly.[citation needed] RAID 6 without a spare uses the same number of drives as RAID 5 with a hot spare and protects data against simultaneous failure of up to two drives, but requires a more advanced RAID controller. Further, a hot spare can be shared by multiple RAID sets.

[*] Reliability terms

Failure rate

Two different kinds of failure rates are applicable to RAID systems. Logical failure is defined as the loss of a single drive and its rate is equal to the sum of individual drives' failure rates. System failure is defined as loss of data and its rate will depend on the type of RAID. For RAID 0 this is equal to the logical failure rate, as there is no redundancy. For other types of RAID, it will be less than the logical failure rate, potentially approaching zero, and its exact value will depend on the type of RAID, the number of drives employed, and the vigilance and alacrity of its human administrators.

Mean time to data loss (MTTDL)

In this context, the average time before a loss of data in a given array.[20] The mean time to data loss of a given RAID may be higher or lower than that of its constituent hard drives, depending upon what type of RAID is employed. The referenced report assumes that times to data loss are exponentially distributed, meaning that 63.2% of all data loss will occur between time 0 and the MTTDL.
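
Under that exponential assumption, the probability of having lost data by time t is 1 - exp(-t / MTTDL). A minimal Python sketch follows; the MTTDL figure below is an arbitrary placeholder, not a value taken from the referenced report:

import math

def prob_data_loss_by(t_hours, mttdl_hours):
    # P(data loss occurs before time t) when the time to data loss is
    # exponentially distributed with the given MTTDL.
    return 1.0 - math.exp(-t_hours / mttdl_hours)

mttdl = 500000.0                        # hypothetical MTTDL, in hours
print(prob_data_loss_by(mttdl, mttdl))  # ~0.632, the 63.2% figure quoted above
print(prob_data_loss_by(8760, mttdl))   # chance of data loss within one year (~1.7%)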

Mean time to recovery (MTTR)

In arrays that include redundancy for reliability, this is the time following a failure to restore an array to its normal failure-tolerant mode of operation. This includes time to replace a failed disk mechanism as well as time to re-build the array (i.e. to replicate data for redundancy).

Unrecoverable bit error rate (UBE)

This is the rate at which a disk drive will be unable to recover data after application of cyclic redundancy check (CRC) codes and multiple retries.

Write cache reliability

Some RAID systems use RAM write cache to increase performance. A power failure can result in data loss unless this sort of disk buffer is supplemented with a battery to ensure that the buffer has enough time to write from RAM back to disk.

Atomic write failure

Also known by various terms such as torn writes, torn pages, incomplete writes, interrupted writes, non-transactional, etc.

[*] Problems with RAID

[*] Correlated failures

The theory behind the error correction in RAID assumes that failures of drives are independent. Given this assumption, it is possible to calculate how often they can fail and to arrange the array to make data loss arbitrarily improbable.

In practice, the drives are often the same age, with similar wear. Since many drive failures are due to mechanical issues which are more likely on older drives, this violates those assumptions, and failures are in fact statistically correlated. In practice, then, the chance of a second failure before the first has been recovered is not nearly as small as might be supposed, and data loss can, in practice, occur at significant rates.[21]

A common misconception is that "server-grade" drives fail less frequently than consumer-grade drives. Two independent studies, one by Carnegie Mellon University and the other by Google, have shown that the “grade” of the drive does not relate to failure rates.[22][23]

[*] Atomicity

This is a little understood and rarely mentioned failure mode for redundant storage systems that do not utilize transactional features. Database researcher Jim Gray wrote "Update in Place is a Poison Apple"[24] during the early days of relational database commercialization. However, this warning largely went unheeded and fell by the wayside upon the advent of RAID, which many software engineers mistook as solving all data storage integrity and reliability problems. Many software programs update a storage object "in-place"; that is, they write a new version of the object on to the same disk addresses as the old version of the object. While the software may also log some delta information elsewhere, it expects the storage to present "atomic write semantics," meaning that the write of the data either occurred in its entirety or did not occur at all.

However, very few storage systems provide support for atomic writes, and even fewer specify their rate of failure in providing this semantic. Note that during the act of writing an object, a RAID storage device will usually be writing all redundant copies of the object in parallel, although overlapped or staggered writes are more common when a single RAID processor is responsible for multiple drives. Hence an error that occurs during the process of writing may leave the redundant copies in different states, and furthermore may leave the copies in neither the old nor the new state. The little known failure mode is that delta logging relies on the original data being either in the old or the new state so as to enable backing out the logical change, yet few storage systems provide an atomic write semantic on a RAID disk.

While the battery-backed write cache may partially solve the problem, it is applicable only to a power failure scenario.

Since transactional support is not universally present in hardware RAID, many operating systems include transactional support to protect against data loss during an interrupted write. Novell Netware, starting with version 3.x, included a transaction tracking system. Microsoft introduced transaction tracking via the journaling feature in NTFS. Ext4 has journaling with checksums; ext3 has journaling without checksums but an "append-only" option, or ext3COW (Copy on Write). If the journal itself in a filesystem is corrupted though, this can be problematic. The journaling in NetApp WAFL file system gives atomicity by never updating the data in place, as does ZFS. An alternative method to journaling is soft updates, which are used in some BSD-derived system's implementation of UFS.

[*] Unrecoverable data

This can present as a sector read failure. Some RAID implementations protect against this failure mode by remapping the bad sector, using the redundant data to retrieve a good copy of the data, and rewriting that good data to the newly mapped replacement sector. The UBE (unrecoverable bit error) rate is typically specified at 1 bit in 10^15 for enterprise-class disk drives (SCSI, FC, SAS), and 1 bit in 10^14 for desktop-class disk drives (IDE/ATA/PATA, SATA). Increasing disk capacities and large RAID 5 redundancy groups have led to an increasing inability to successfully rebuild a RAID group after a disk failure because an unrecoverable sector is found on the remaining drives. Double protection schemes such as RAID 6 attempt to address this issue, but suffer from a very high write penalty.

[*] Write cache reliability

The disk system can acknowledge the write operation as soon as the data is in the cache, without waiting for the data to be physically written. This typically occurs in old, non-journaled systems such as FAT32, or when the Linux/Unix "writeback" option is chosen without protections such as soft updates (trading data reliability for I/O speed). A power outage or a system hang such as a BSOD can mean the loss of any data queued in such a cache.

A battery often protects the write cache, which mostly solves the problem: if a write is interrupted by a power failure, the controller can complete the pending writes as soon as power is restored. This solution still has potential failure cases: the battery may have worn out, the power may be off for too long, the disks could be moved to another controller, or the controller itself could fail. Some disk systems can test the battery periodically, but this leaves the system without a fully charged battery for several hours.

A further write-cache concern applies to devices equipped with a write-back cache, a caching scheme that reports data as written as soon as it reaches the cache rather than the non-volatile medium.[25] The safer technique is write-through, which reports a write as complete only once the data reaches the non-volatile medium.
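The difference between the two policies can be shown with a toy model (a conceptual Python sketch, not a real controller or driver): write-back acknowledges a write while the data is only in volatile memory, while write-through acknowledges it only after the backing store has it.

class ToyCache:
    # 'cache' stands in for volatile controller RAM, 'backing' for the
    # non-volatile medium; this is a conceptual model only.
    def __init__(self, write_back=True):
        self.write_back = write_back
        self.cache = {}
        self.backing = {}

    def write(self, block, data):
        self.cache[block] = data
        if not self.write_back:
            self.backing[block] = data   # write-through: persist before acking
        return "acknowledged"            # write-back acks with data only in RAM

    def power_failure(self):
        self.cache.clear()               # volatile contents are lost

wb = ToyCache(write_back=True)
wb.write(7, b"payload")
wb.power_failure()
print(7 in wb.backing)                   # False: the acknowledged write is gone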

[*] Equipment compatibility

The disk formats used by different RAID controllers are not necessarily compatible, so it may not be possible to read a RAID array on different hardware. Consequently, a non-disk hardware failure may require identical hardware to recover the data. Software RAID, however, such as that implemented in the Linux kernel, alleviates this concern, because the setup is not hardware dependent and runs on ordinary disk controllers. Additionally, software RAID 1 disks (and some hardware RAID 1 disks, for example the Silicon Image 5744) can be read like normal disks, so no RAID system is required to retrieve the data. Data recovery firms typically have a very hard time recovering data from RAID drives, with the exception of RAID 1 drives that use a conventional data structure.

[*] Data recovery in the event of a failed array

With larger disk capacities, the odds of a disk failure during a rebuild are not negligible, so the difficulty of extracting data from a failed array must be considered. Only RAID 1 stores all data on each disk. Although it may depend on the controller, some RAID 1 disks can be read as a single conventional disk; this means a dropped RAID 1 disk, although damaged, can often be recovered reasonably easily using a software recovery program or CHKDSK. If the damage is more severe, data can often be recovered by professional drive specialists. RAID 5 and other striped or distributed arrays present far more formidable obstacles to data recovery if the array goes down.

[*] Drive error recovery algorithms

Many modern drives have internal error recovery algorithms that can take upwards of a minute to recover and re-map data that the drive fails to read easily. Many RAID controllers, however, will drop a non-responsive drive after roughly 8 seconds. The array may therefore drop a good drive simply because it has not been given enough time to complete its internal error recovery procedure, leaving the rest of the array vulnerable. So-called enterprise-class drives limit this error recovery time and prevent the problem, but desktop drives can be quite risky for this reason. For Western Digital desktop drives, a utility called WDTLER.exe could limit the error recovery time: it enables TLER (time-limited error recovery), which caps the recovery time at 7 seconds so the drive is not dropped from the array. As of October 2009, however, Western Digital has locked out this feature in its desktop drives, such as the Caviar Black, and running WDTLER.exe on them is reported to risk damaging the drive's firmware. Western Digital enterprise-class drives ship from the factory with TLER enabled to prevent them from being dropped from RAID arrays. Similar technologies are used by Seagate, Samsung, and Hitachi.
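The race between the drive's internal recovery and the controller's drop timeout is simple arithmetic. The sketch below uses the figures quoted above (an 8-second controller timeout and a 7-second TLER cap); the 120-second figure for a desktop drive's deep recovery attempt is an assumption for illustration.

def drive_stays_in_array(controller_timeout_s, drive_recovery_s):
    # A drive survives only if it responds before the controller's
    # non-responsive timeout expires.
    return drive_recovery_s <= controller_timeout_s

CONTROLLER_TIMEOUT = 8
print(drive_stays_in_array(CONTROLLER_TIMEOUT, 120))  # False: desktop drive gets dropped
print(drive_stays_in_array(CONTROLLER_TIMEOUT, 7))    # True: TLER-limited drive is kept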

[*] Increasing recovery time

Drive capacity has grown at a much faster rate than transfer speed, and error rates have fallen only a little in comparison. Therefore, larger-capacity drives may take hours, if not days, to rebuild, and the rebuild takes even longer if the array remains in service at reduced capacity.[26] Given a RAID array with only one disk of redundancy (RAID 3, 4, or 5), a second failure during this window causes complete failure of the array; even though the mean time between failures (MTBF) of an individual drive is high, the longer the rebuild takes, the greater the chance that another drive fails before it completes.[27]
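The risk of a second failure during the rebuild window can be estimated with a simple exponential failure model (a back-of-the-envelope Python sketch; the rebuild time and MTBF below are assumptions for illustration, and real failures are often correlated, which makes the true risk higher).

import math

def p_second_failure(surviving_drives, rebuild_hours, mtbf_hours):
    # Probability that any surviving drive fails during the rebuild,
    # assuming independent exponential failure times.
    return 1.0 - math.exp(-surviving_drives * rebuild_hours / mtbf_hours)

# Example: RAID 5 of six drives after one failure, so five survivors;
# an assumed 24-hour rebuild and a 1,000,000-hour per-drive MTBF.
print(p_second_failure(5, 24, 1_000_000))   # about 0.00012, i.e. 0.012% per rebuild

The per-rebuild risk looks small, but it scales linearly with rebuild time, which is why multi-day rebuilds of very large drives are a concern.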

[*] Other problems and viruses

While RAID may protect against physical drive failure, the data is still exposed to operator, software, hardware and virus destruction. Many studies[28] cite operator fault as the most common source of malfunction, such as a server operator replacing the incorrect disk in a faulty RAID array, and disabling the system (even temporarily) in the process.[29] Most well-designed systems include separate backup systems that hold copies of the data, but don't allow much interaction with it. Most copy the data and remove the copy from the computer for safe storage.

[*] History

Norman Ken Ouchi at IBM was awarded U.S. patent 4,092,732,[30] "System for recovering data stored in failed memory unit," in 1978. The claims of this patent describe what would later be termed RAID 5 with full-stripe writes. The 1978 patent also mentions that disk mirroring or duplexing (what would later be termed RAID 1) and protection with dedicated parity (what would later be termed RAID 4) were prior art at the time.

The term RAID was first defined by David A. Patterson, Garth A. Gibson and Randy Katz at the University of California, Berkeley, in 1987. They studied the possibility of using two or more drives to appear as a single device to the host system and published a paper: "A Case for Redundant Arrays of Inexpensive Disks (RAID)" in June 1988 at the SIGMOD conference.[1]

This specification suggested a number of prototype RAID levels, or combinations of drives. Each had theoretical advantages and disadvantages. Over the years, different implementations of the RAID concept have appeared. Most differ substantially from the original idealized RAID levels, but the numbered names have remained. This can be confusing, since one implementation of RAID 5, for example, can differ substantially from another. RAID 3 and RAID 4 are often confused and even used interchangeably.

One of the early uses of RAID 0 and 1 was the Crosfield Electronics Studio 9500 page layout system based on the Python workstation. The Python workstation was a Crosfield-managed international development using PERQ 3B electronics, benchMark Technology's Viper display system, and Crosfield's own RAID and fibre-optic network controllers. RAID 0 was particularly important to these workstations because it dramatically sped up image manipulation for the pre-press market. Volume production started in Peterborough, England in early 1987.

[*] Non-RAID drive architectures

Main article: Non-RAID drive architectures

Non-RAID drive architectures also exist, and are often referred to, like RAID, by standard acronyms, several of them tongue-in-cheek. A single drive is referred to as a SLED (Single Large Expensive Drive), by contrast with RAID, while an array of drives without any additional control (accessed simply as independent drives) is referred to as a JBOD (Just a Bunch Of Disks). Simple concatenation is referred to as a SPAN, or sometimes as JBOD, though careful usage avoids the latter term because of the alternative meaning just cited.
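Concatenation is easiest to picture as an address mapping: a logical block lives on whichever member drive's range it falls into. The sketch below (Python, with made-up drive sizes) shows that mapping; it is a conceptual illustration, not any particular volume manager's code.

def span_lookup(logical_block, drive_sizes):
    # Map a logical block number to (drive index, block within that drive)
    # for a simple concatenation (SPAN) of drives of the given sizes.
    for drive, size in enumerate(drive_sizes):
        if logical_block < size:
            return drive, logical_block
        logical_block -= size
    raise ValueError("logical block beyond end of spanned volume")

# Three drives of 100, 200 and 50 blocks concatenated into a 350-block volume.
print(span_lookup(0,   [100, 200, 50]))   # (0, 0)  -> first drive
print(span_lookup(150, [100, 200, 50]))   # (1, 50) -> second drive
print(span_lookup(320, [100, 200, 50]))   # (2, 20) -> third drive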