The distribution of data across multiple drives can be managed either by dedicated computer hardware or by software. A software solution may be part of the operating system, part of the firmware and drivers supplied with a standard drive controller (so-called "hardware-assisted software RAID"), or it may reside entirely within the hardware RAID controller.
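Whether the logic runs in a dedicated controller, in firmware, or in the operating system, the underlying block arithmetic is the same. As a purely illustrative sketch (in-memory byte arrays stand in for drives; a real implementation operates on block devices), here is how a software layer might distribute data round-robin across drives, RAID 0-style:

```python
# Illustrative sketch: RAID 0 striping across N "drives" (byte arrays here;
# a real software RAID layer addresses block devices, but the chunk-mapping
# arithmetic is the same).

CHUNK = 4  # stripe unit in bytes (real systems use e.g. 64 KiB)

def stripe_write(data: bytes, drives: list[bytearray]) -> None:
    """Distribute data across the drives in CHUNK-sized units, round-robin."""
    for i in range(0, len(data), CHUNK):
        chunk_index = i // CHUNK
        drives[chunk_index % len(drives)].extend(data[i:i + CHUNK])

def stripe_read(drives: list[bytearray], length: int) -> bytes:
    """Reassemble the original byte stream from the striped drives."""
    out = bytearray()
    offsets = [0] * len(drives)
    chunk_index = 0
    while len(out) < length:
        d = chunk_index % len(drives)
        out.extend(drives[d][offsets[d]:offsets[d] + CHUNK])
        offsets[d] += CHUNK
        chunk_index += 1
    return bytes(out[:length])

drives = [bytearray(), bytearray()]
payload = b"ABCDEFGHIJKL"
stripe_write(payload, drives)
assert stripe_read(drives, len(payload)) == payload
```

With two drives, consecutive chunks land on alternating drives, which is what lets RAID 0 serve large sequential reads from both spindles at once; note there is no redundancy anywhere in this mapping.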
=== Hardware-based ===
Hardware RAID controllers can be configured through card BIOS or Option ROM before an operating system is booted; after the operating system has booted, proprietary configuration utilities are available from each controller's manufacturer. Unlike network interface controllers for Ethernet, which can usually be configured and serviced entirely through common operating system paradigms such as ifconfig in Unix, without a need for any third-party tools, each RAID controller manufacturer usually provides its own proprietary software tooling for each operating system it deems to support, ensuring vendor lock-in and contributing to reliability issues. For example, in FreeBSD, in order to access the configuration of Adaptec RAID controllers, users are required to enable the Linux compatibility layer and use Linux tooling from Adaptec, potentially compromising the stability, reliability and security of their setup, especially when taking the long-term view.

Some other operating systems have implemented their own generic frameworks for interfacing with any RAID controller, and provide tools for monitoring RAID volume status, as well as facilitating drive identification through LED blinking, alarm management and hot-spare disk designation, from within the operating system without having to reboot into the card BIOS. For example, this was the approach taken by OpenBSD in 2005 with its bio(4) pseudo-device and the bioctl utility, which provide volume status and allow LED/alarm/hot-spare control, as well as sensors (including the drive sensor) for health monitoring; this approach was subsequently adopted and extended by NetBSD in 2007.
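The difference between per-vendor tooling and a generic framework is ultimately one of interface design: the operating system defines a single management interface, and each controller driver implements it, so one monitoring tool serves every controller. The sketch below illustrates that idea in Python; all class and method names are invented for illustration (bio(4) and bioctl are kernel C interfaces, not a Python API):

```python
# Hypothetical sketch of a generic RAID-management interface, in the spirit
# of OpenBSD's bio(4)/bioctl. Names are invented for illustration. The point:
# management tools depend only on the generic interface, never on a vendor's
# proprietary utility.
from abc import ABC, abstractmethod

class RaidController(ABC):
    """Interface every controller driver must implement."""

    @abstractmethod
    def volume_status(self) -> dict[str, str]:
        """Map volume name -> state ('online', 'degraded', 'failed')."""

    @abstractmethod
    def blink_led(self, drive: str) -> None:
        """Blink a drive's locate LED so a technician can identify it."""

class ExampleVendorDriver(RaidController):
    """One vendor's driver; any number of such drivers can coexist."""

    def __init__(self) -> None:
        self._volumes = {"sd0": "online", "sd1": "degraded"}
        self.blinking: list[str] = []

    def volume_status(self) -> dict[str, str]:
        return dict(self._volumes)

    def blink_led(self, drive: str) -> None:
        self.blinking.append(drive)

def degraded_volumes(ctl: RaidController) -> list[str]:
    """A bioctl-like tool: works against ANY conforming driver."""
    return [v for v, s in ctl.volume_status().items() if s != "online"]

ctl = ExampleVendorDriver()
assert degraded_volumes(ctl) == ["sd1"]
ctl.blink_led("sd1")  # locate the degraded member for replacement
```

Adding support for a new controller then means writing one driver against the interface, rather than porting a vendor's tool suite to every operating system.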
=== Software-based ===
Software RAID implementations are provided by many modern operating systems. Software RAID can be implemented as:
• A layer that abstracts multiple devices, thereby providing a single virtual device (such as the Linux kernel's md and OpenBSD's softraid)
• A more generic logical volume manager (provided with most server-class operating systems such as Veritas or LVM)
• A component of the file system (such as ZFS, Spectrum Scale or Btrfs)
• A layer that sits above any file system and provides parity protection to user data (such as RAID-F)

Some advanced file systems are designed to organize data across multiple storage devices directly, without needing the help of a third-party logical volume manager:
• ZFS supports the equivalents of RAID 0, RAID 1, single-parity RAID 5 (RAID-Z1), double-parity RAID 6 (RAID-Z2), and a triple-parity version (RAID-Z3), also referred to as RAID 7. As it always stripes over top-level vdevs, it supports equivalents of the 1+0, 5+0, and 6+0 nested RAID levels (as well as striped triple-parity sets) but not other nested combinations. ZFS is the native file system on Solaris and illumos, and is also available on FreeBSD and Linux. Open-source ZFS implementations are actively developed under the OpenZFS umbrella project.
• Spectrum Scale, initially developed by IBM for media streaming and scalable analytics, supports declustered RAID protection schemes up to n+3. A particularity is the dynamic rebuilding priority, which runs with low impact in the background until a data chunk hits n+0 redundancy, in which case the chunk is quickly rebuilt to at least n+1. In addition, Spectrum Scale supports metro-distance RAID 1.
• Btrfs supports RAID 0, RAID 1 and RAID 10 (RAID 5 and 6 are under development).
• XFS was originally designed to provide an integrated volume manager that supports concatenating, mirroring and striping of multiple physical storage devices. However, the implementation of XFS in the Linux kernel lacks the integrated volume manager.

Many operating systems provide RAID implementations, including the following:
• Hewlett-Packard's OpenVMS operating system supports RAID 1. The mirrored disks, called a "shadow set", can be in different locations to assist in disaster recovery.
• Apple's macOS and macOS Server natively support RAID 0, RAID 1, and RAID 1+0, which can be created with Disk Utility or its command-line interface, while RAID 4 and RAID 5 can only be created using the third-party software SoftRAID by OWC, with the driver for SoftRAID access natively included since macOS 13.3.
• FreeBSD supports RAID 0, RAID 1, RAID 3, and RAID 5, and all nestings, via GEOM modules and ccd.
• Linux's md supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6, and all nestings. Certain reshaping/resizing/expanding operations are also supported.
• Microsoft Windows supports RAID 0, RAID 1, and RAID 5 using various software implementations. Logical Disk Manager, introduced with Windows 2000, allows the creation of RAID 0, RAID 1, and RAID 5 volumes using dynamic disks, but this was limited to the professional and server editions of Windows until the release of Windows 8. Windows XP can be modified to unlock support for RAID 0, 1, and 5. Windows 8 and Windows Server 2012 introduced a RAID-like feature known as Storage Spaces, which also allows users to specify mirroring, parity, or no redundancy on a folder-by-folder basis. These options are similar to RAID 1 and RAID 5, but are implemented at a higher abstraction level.
• NetBSD supports RAID 0, 1, 4, and 5 via its software implementation, named RAIDframe.
• OpenBSD supports RAID 0, 1 and 5 via its software implementation, named softraid.

If a boot drive fails, the system has to be sophisticated enough to be able to boot from the remaining drive or drives. For instance, consider a computer whose disk is configured as RAID 1 (mirrored drives); if the first drive in the array fails, then a first-stage boot loader might not be sophisticated enough to attempt loading the second-stage boot loader from the second drive as a fallback. The second-stage boot loader for FreeBSD is capable of loading a kernel from such an array.
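The fallback logic a loader needs is simple in principle: try each mirror member in turn until one responds. A toy sketch of that behavior, with in-memory objects standing in for drives (a real boot loader does this against block devices, long before any high-level runtime is available):

```python
# Toy sketch of RAID 1 read fallback, analogous to what a boot loader must
# do when the first mirror member has failed. "Drives" are in-memory dicts;
# a failed drive raises IOError on access.

class Drive:
    def __init__(self, blocks: dict[int, bytes], failed: bool = False):
        self.blocks = blocks
        self.failed = failed

    def read(self, lba: int) -> bytes:
        if self.failed:
            raise IOError("drive not responding")
        return self.blocks[lba]

def mirror_read(mirrors: list[Drive], lba: int) -> bytes:
    """Return block `lba` from the first mirror member that responds."""
    last_error = None
    for drive in mirrors:
        try:
            return drive.read(lba)
        except IOError as exc:
            last_error = exc  # remember the failure, try the next member
    raise IOError("all mirror members failed") from last_error

boot_block = b"second-stage loader"
healthy = Drive({0: boot_block})
dead = Drive({0: boot_block}, failed=True)

# A naive loader that only ever tries mirrors[0] would fail here; falling
# back to the second member still recovers the data.
assert mirror_read([dead, healthy], 0) == boot_block
```

Because both members hold identical data, reading from whichever member answers is always correct; the sophistication the text refers to is exactly this willingness to try the next member rather than give up.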
=== Firmware- and driver-based ===
Software-implemented RAID is not always compatible with the system's boot process, and it is generally impractical for desktop versions of Windows. However, hardware RAID controllers are expensive and proprietary. To fill this gap, inexpensive "RAID controllers" were introduced that do not contain a dedicated RAID controller chip, but simply a standard drive controller chip, or the chipset's built-in RAID function, with proprietary firmware and drivers. During early bootup, the RAID is implemented by the firmware and, once the operating system has been more completely loaded, the drivers take over control. Consequently, such controllers may not work when driver support is not available for the host operating system. An example is Intel Rapid Storage Technology, implemented on many consumer-level motherboards. Because some minimal hardware support is involved, this implementation is also called "hardware-assisted software RAID". If RAID 5 is supported, the hardware may provide a hardware XOR accelerator. An advantage of this model over pure software RAID is that, if using a redundancy mode, the boot drive is protected from failure (due to the firmware) during the boot process even before the operating system's drivers take over.

== Integrity ==