- Create array: mdadm --create [/path/to/raid_device_file] --level [raid_level] --raid-devices [number_of_disks] [/path/to/disk_device_file]
- Stop array: mdadm -S [/path/to/raid_device_file]
- Mark disk as failed: mdadm [/path/to/raid_device_file] -f [/path/to/disk_device_file]
- Remove disk: mdadm [/path/to/raid_device_file] -r [/path/to/disk_device_file]
- Add disk to array: mdadm [/path/to/raid_device_file] -a [/path/to/disk_device_file]
- Show RAID info: mdadm -D [/path/to/raid_device_file]
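For instance, a typical failed-disk replacement strings these together. This is a minimal sketch; /dev/md0, /dev/sdb1 and /dev/sdc1 are placeholder names:

  mdadm /dev/md0 -f /dev/sdb1    # mark the failing disk as faulty
  mdadm /dev/md0 -r /dev/sdb1    # remove it from the array
  mdadm /dev/md0 -a /dev/sdc1    # add the replacement; a rebuild starts if the array is degraded
  mdadm -D /dev/md0              # confirm the array state and rebuild progress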
mdadm [mode] <raiddevice> [options] <component-devices>
RAID devices are virtual devices created from two or more real block devices. This allows multiple devices (typically disk drives or partitions thereof) to be combined into a single device to hold (for example) a single filesystem. Some RAID levels include redundancy and so can survive some degree of device failure.

Linux Software RAID devices are implemented through the md (Multiple Devices) device driver.

Currently, Linux supports LINEAR md devices, RAID0 (striping), RAID1 (mirroring), RAID4, RAID5, RAID6, RAID10, MULTIPATH, FAULTY, and CONTAINER.

MULTIPATH is not a Software RAID mechanism, but does involve multiple devices: each device is a path to one common physical storage device. New installations should not use md/multipath as it is not well supported and has no ongoing development. Use the Device Mapper based multipath-tools instead.

FAULTY is also not true RAID, and it only involves one device. It provides a layer over a true device that can be used to inject faults.

CONTAINER is different again. A CONTAINER is a collection of devices that are managed as a set. This is similar to the set of devices connected to a hardware RAID controller. The set of devices may contain a number of different RAID arrays each utilising some (or all) of the blocks from a number of the devices in the set. For example, two devices in a 5-device set might form a RAID1 using the whole devices. The remaining three might have a RAID5 over the first half of each device, and a RAID0 over the second half.

With a CONTAINER, there is one set of metadata that describes all of the arrays in the container. So when mdadm creates a CONTAINER device, the device just represents the metadata. Other normal arrays (RAID1 etc.) can be created inside the container.
mdadm has several major modes of operation:

Assemble
  Assemble the components of a previously created array into an active array. Components can be explicitly given or can be searched for. mdadm checks that the components do form a bona fide array, and can, on request, fiddle superblock information so as to assemble a faulty array.

Build
  Build an array that doesn't have per-device metadata (superblocks). For these sorts of arrays, mdadm cannot differentiate between initial creation and subsequent assembly of an array. It also cannot perform any checks that appropriate components have been requested. Because of this, the Build mode should only be used together with a complete understanding of what you are doing.

Create
  Create a new array with per-device metadata (superblocks). Appropriate metadata is written to each device, and then the array comprising those devices is activated. A resync process is started to make sure that the array is consistent (e.g. both sides of a mirror contain the same data), but the content of the device is left otherwise untouched. The array can be used as soon as it has been created. There is no need to wait for the initial resync to finish.

Follow or Monitor
  Monitor one or more md devices and act on any state changes. This is only meaningful for RAID1, 4, 5, 6, 10 or multipath arrays, as only these have interesting state. RAID0 or Linear never have missing, spare, or failed drives, so there is nothing to monitor.

Grow
  Grow (or shrink) an array, or otherwise reshape it in some way. Currently supported growth options include changing the active size of component devices and changing the number of active devices in Linear and RAID levels 0/1/4/5/6, changing the RAID level between 0, 1, 5, and 6, and between 0 and 10, changing the chunk size and layout for RAID 0/4/5/6, as well as adding or removing a write-intent bitmap.

Incremental Assembly
  Add a single device to an appropriate array. If the addition of the device makes the array runnable, the array will be started. This provides a convenient interface to a hot-plug system. As each device is detected, mdadm has a chance to include it in some array as appropriate. Optionally, when the --fail flag is passed in, we will remove the device from any active array instead of adding it.
If a CONTAINER is passed to mdadm in this mode, then any arrays within that container will be assembled and started.
-A, --assemble
  Assemble a pre-existing array.

-B, --build
  Build a legacy array without superblocks.

-C, --create
  Create a new array.

-F, --follow, --monitor
  Select Monitor mode.

-G, --grow
  Change the size or shape of an active array.

-I, --incremental
  Add/remove a single device to/from an appropriate array, and possibly start the array.

--auto-detect
  Request that the kernel starts any auto-detected arrays. This can only work if md is compiled into the kernel - not if it is a module. Arrays can be auto-detected by the kernel if all the components are in primary MS-DOS partitions with partition type FD, and all use v0.90 metadata. In-kernel autodetect is not recommended for new installations. Using mdadm to detect and assemble arrays - possibly in an initrd - is substantially more flexible and should be preferred.

If a device is given before any options, or if the first option is --add, --fail, or --remove, then the MANAGE mode is assumed. Anything other than these will cause the Misc mode to be assumed.
-h, --help
  Display general help message or, after one of the above options, a mode-specific help message.

--help-options
  Display more detailed help about command line parsing and some commonly used options.

-V, --version
  Print version information for mdadm.

-v, --verbose
  Be more verbose about what is happening. This can be used twice to be extra-verbose. The extra verbosity currently only affects --detail --scan and --examine --scan.

-q, --quiet
  Avoid printing purely informative messages. With this, mdadm will be silent unless there is something really important to report.

--offroot
  Set the first character of argv to @ to indicate that mdadm was launched from initrd/initramfs and should not be shut down by systemd as part of the regular shutdown process. This option is normally only used by the system's initscripts. Please see here for more details on how systemd handles argv: http://www.freedesktop.org/wiki/Software/systemd/RootStorageDaemons

-f, --force
  Be more forceful about certain operations. See the various modes for the exact meaning of this option in different contexts.

-c, --config=
  Specify the config file. Default is to use /etc/mdadm.conf, or if that is missing then /etc/mdadm/mdadm.conf. If the config file given is partitions then nothing will be read, but mdadm will act as though the config file contained exactly DEVICE partitions containers and will read /proc/partitions to find a list of devices to scan, and /proc/mdstat to find a list of containers to examine. If the word none is given for the config file, then mdadm will act as though the config file were empty.

-s, --scan
  Scan config file or /proc/mdstat for missing information. In general, this option gives mdadm permission to get any missing information (like component devices, array devices, array identities, and alert destination) from the configuration file (see previous option); one exception is MISC mode when using --detail or --stop, in which case --scan says to get a list of array devices from /proc/mdstat.

-e, --metadata=
  Declare the style of RAID metadata (superblock) to be used. The default is 1.2 for --create, and to guess for other operations. The default can be overridden by setting the metadata value for the CREATE keyword in mdadm.conf.
Use the Industry Standard DDF (Disk Data Format) format defined by SNIA. When creating a DDF array a CONTAINER will be created, and normal arrays can be created in that container.
Use the Intel(R) Matrix Storage Manager metadata format. This creates a CONTAINER which is managed in a similar manner to DDF, and is supported by an option-rom on some platforms.
When creating an array, the homehost will be recorded in the metadata. For version-1 superblocks, it will be prefixed to the array name. For version-0.90 superblocks, part of the SHA1 hash of the hostname will be stored in the latter half of the UUID.
When reporting information about an array, any array which is tagged for the given homehost will be reported as such.
When using Auto-Assemble, only arrays tagged for the given homehost will be allowed to use local names (i.e. not ending in _ followed by a digit string). See below under Auto Assembly.
This functionality is currently only provided by --detail and --monitor.
-n, --raid-devices=
  Specify the number of active devices in the array. This, plus the number of spare devices (see below), must equal the number of component-devices (including missing devices) that are listed on the command line for --create. Setting a value of 1 is probably a mistake and so requires that --force be specified first. A value of 1 will then be allowed for linear, multipath, RAID0 and RAID1. It is never allowed for RAID4, RAID5 or RAID6. This number can only be changed using --grow for RAID1, RAID4, RAID5 and RAID6 arrays, and only on kernels which provide the necessary support.

-x, --spare-devices=
  Specify the number of spare (eXtra) devices in the initial array. Spares can also be added and removed later. The number of component devices listed on the command line must equal the number of RAID devices plus the number of spare devices.

-z, --size=
  Amount (in Kibibytes) of space to use from each drive in RAID levels 1/4/5/6. This must be a multiple of the chunk size, and must leave about 128Kb of space at the end of the drive for the RAID superblock. If this is not specified (as it normally is not) the smallest drive (or partition) sets the size, though if there is a variance among the drives of greater than 1%, a warning is issued.
A suffix of M or G can be given to indicate Megabytes or Gigabytes respectively.
Sometimes a replacement drive can be a little smaller than the original drives, though this should be minimised by IDEMA standards. Such a replacement drive will be rejected by md. To guard against this, it can be useful to set the initial size slightly smaller than the smallest device, with the aim that it will still be larger than any replacement.
This value can be set with --grow for RAID level 1/4/5/6 though CONTAINER based arrays such as those with IMSM metadata may not be able to support this. If the array was created with a size smaller than the currently active drives, the extra space can be accessed using --grow. The size can be given as max which means to choose the largest size that fits on all current drives.
Before reducing the size of the array (with --grow --size=) you should make sure that space isn't needed. If the device holds a filesystem, you would need to resize the filesystem to use less space.
After reducing the array size you should check that the data stored in the device is still available. If the device holds a filesystem, then an fsck of the filesystem is a minimum requirement. If there are problems the array can be made bigger again with no loss with another --grow --size= command.
This value cannot be used when creating a CONTAINER such as with DDF and IMSM metadata, though it is perfectly valid when creating an array inside a container.
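As an illustration, shrinking and later growing the per-device size of an array holding an ext4 filesystem might look like the following sketch. The device name, sizes, and the use of resize2fs are assumptions, not prescriptions:

  # Shrink: first reduce the filesystem, then the per-device size (values illustrative)
  resize2fs /dev/md0 400G
  mdadm --grow /dev/md0 --size=450000000   # KiB used from each member drive
  fsck -f /dev/md0                         # check the data is still available

  # Grow: after replacing members with larger drives, reclaim all available space
  mdadm --grow /dev/md0 --size=max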
Setting the array-size causes the array to appear smaller to programs that access the data. This is particularly needed before reshaping an array so that it will be smaller. As the reshape is not reversible, but setting the size with --array-size is, it is required that the array size is reduced as appropriate before the number of devices in the array is reduced.
Before reducing the size of the array you should make sure that space isn't needed. If the device holds a filesystem, you would need to resize the filesystem to use less space.
After reducing the array size you should check that the data stored in the device is still available. If the device holds a filesystem, then an fsck of the filesystem is a minimum requirement. If there are problems the array can be made bigger again with no loss with another --grow --array-size= command.
A suffix of M or G can be given to indicate Megabytes or Gigabytes respectively. A value of max restores the apparent size of the array to be whatever the real amount of available space is.
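For example, before reshaping to fewer devices one might first apply the reversible size reduction, check the data, and only then proceed. Names and sizes here are illustrative:

  mdadm --grow /dev/md0 --array-size=500000000   # make the end of the array inaccessible (reversible)
  fsck -f /dev/md0                               # confirm the data still fits and is intact
  mdadm --grow /dev/md0 --array-size=max         # undo the reduction if something went wrong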
RAID4, RAID5, RAID6, and RAID10 require the chunk size to be a power of 2. In any case it must be a multiple of 4KB.
When a CONTAINER metadata type is requested, only the container level is permitted, and it does not need to be explicitly given.
When used with --build, only linear, stripe, raid0, 0, raid1, multipath, mp, and faulty are valid.
Can be used with --grow to change the RAID level in some cases. See LEVEL CHANGES below.
The layout of the RAID5 parity block can be one of left-asymmetric, left-symmetric, right-asymmetric, right-symmetric, la, ra, ls, rs. The default is left-symmetric.
It is also possible to cause RAID5 to use a RAID4-like layout by choosing parity-first, or parity-last.
Finally for RAID5 there are DDF-compatible layouts, ddf-zero-restart, ddf-N-restart, and ddf-N-continue.
These same layouts are available for RAID6. There are also 4 layouts that will provide an intermediate stage for converting between RAID5 and RAID6. These provide a layout which is identical to the corresponding RAID5 layout on the first N-1 devices, and has the Q syndrome (the second parity block used by RAID6) on the last device. These layouts are: left-symmetric-6, right-symmetric-6, left-asymmetric-6, right-asymmetric-6, and parity-first-6.
When setting the failure mode for level faulty, the options are: write-transient, wt, read-transient, rt, write-persistent, wp, read-persistent, rp, write-all, read-fixable, rf, clear, flush, none.
Each failure mode can be followed by a number, which is used as a period between fault generation. Without a number, the fault is generated once on the first relevant request. With a number, the fault will be generated after that many requests, and will continue to be generated every time the period elapses.
Multiple failure modes can be in effect simultaneously by using the --grow option to set subsequent failure modes.
clear or none will remove any pending or periodic failure modes, and flush will clear any persistent faults.
Finally, the layout options for RAID10 are one of n, o or f followed by a small number. The default is n2. The supported options are:
n signals near copies. Multiple copies of one data block are at similar offsets in different devices.
o signals offset copies. Rather than the chunks being duplicated within a stripe, whole stripes are duplicated but are rotated by one device so duplicate blocks are on different devices. Thus subsequent copies of a block are in the next drive, and are one chunk further down.
f signals far copies (multiple copies have very different offsets). See md(4) for more detail about near, offset, and far.
The number is the number of copies of each data block. 2 is normal, 3 can be useful. This number can be at most equal to the number of devices in the array. It does not need to divide evenly into that number (e.g. it is perfectly legal to have an n2 layout for an array with an odd number of devices).
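A sketch of creating RAID10 arrays with different layouts; the device names are placeholders:

  mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 /dev/sd[b-e]1
  mdadm --create /dev/md1 --level=10 --layout=f2 --raid-devices=3 /dev/sd[f-h]1   # an odd device count is legal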
When an array is converted between RAID5 and RAID6 an intermediate RAID6 layout is used in which the second parity block (Q) is always on the last device. To convert a RAID5 to RAID6 and leave it in this new layout (which does not require re-striping) use --layout=preserve. This will try to avoid any restriping.
The converse of this is --layout=normalise which will change a non-standard RAID6 layout into a more standard arrangement.
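For instance, a RAID5-to-RAID6 conversion that avoids restriping, followed later by normalisation to a standard layout, might look like this sketch (array name and backup file path are placeholders):

  mdadm --grow /dev/md0 --level=6 --layout=preserve                        # Q on the last device; no restripe
  mdadm --grow /dev/md0 --layout=normalise --backup-file=/root/md0.bak     # later, restripe to a standard layout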
To help catch typing errors, the filename must contain at least one slash (/) if it is a real file (not internal or none).
Note: external bitmaps are only known to work on ext2 and ext3. Storing bitmap files on other filesystems may result in serious problems.
When an array is resized to a larger size with --grow --size= the new space is normally resynced in the same way that the whole array is resynced at creation. From Linux version 3.0, --assume-clean can be used with that command to avoid the automatic resync.
The argument can also come immediately after -a. e.g. -ap.
If --auto is not given on the command line or in the config file, then the default will be --auto=yes.
If --scan is also given, then any auto= entries in the config file will override the --auto instruction given on the command line.
For partitionable arrays, mdadm will create the device file for the whole array and for the first 4 partitions. A different number of partitions can be specified at the end of this option (e.g. --auto=p7). If the device name ends with a digit, the partition names add a p, and a number, e.g. /dev/md/home1p3. If there is no trailing digit, then the partition names just have a number added, e.g. /dev/md/scratch3.
If the md device name is in a standard format as described in DEVICE NAMES, then it will be created, if necessary, with the appropriate device number based on that name. If the device name is not in one of these formats, then an unused device number will be allocated. The device number will be considered unused if there is no active array for that number, and there is no entry in /dev for that number and with a non-standard name. Names that are not in standard format are only allowed in /dev/md/.
This is meaningful with --create or --build.
If the target array is a Linear array, then --add can be used to add one or more devices to the array. They are simply catenated on to the end of the array. Once added, the devices cannot be removed.
If the --raid-disks option is being used to increase the number of devices in an array, then --add can be used to add some extra devices to be included in the array. In most cases this is not needed as the extra devices can be added as spares first, and then the number of raid-disks can be changed. However for RAID0, it is not possible to add spares. So to increase the number of devices in a RAID0, it is necessary to set the new number of devices, and to add the new devices, in the same command.
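So growing a two-disk RAID0 to three disks combines both operations in one command, for example (device names are placeholders):

  mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sdd1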
-u, --uuid=
  uuid of array to assemble. Devices which don't have this uuid are excluded.

-m, --super-minor=
  Minor number of device that array was created for. Devices which don't have this minor number are excluded. If you create an array as /dev/md1, then all superblocks will contain the minor number 1, even if the array is later assembled as /dev/md2.
Giving the literal word dev for --super-minor will cause mdadm to use the minor number of the md device that is being assembled. e.g. when assembling /dev/md0, --super-minor=dev will look for super blocks with a minor number of 0.
--super-minor is only relevant for v0.90 metadata, and should not normally be used. Using --uuid is much safer.
The sparc2.2 option will adjust the superblock of an array that was created on a Sparc machine running a patched 2.2 Linux kernel. This kernel got the alignment of part of the superblock wrong. You can use the --examine --sparc2.2 option to mdadm to see what effect this would have.
The super-minor option will update the preferred minor field on each superblock to match the minor number of the array being assembled. This can be useful if --examine reports a different Preferred Minor to --detail. In some cases this update will be performed automatically by the kernel driver. In particular the update happens automatically at the first write to an array with redundancy (RAID level 1 or greater) on a 2.6 (or later) kernel.
The uuid option will change the uuid of the array. If a UUID is given with the --uuid option that UUID will be used as a new UUID and will NOT be used to help identify the devices in the array. If no --uuid is given, a random UUID is chosen.
The name option will change the name of the array as stored in the superblock. This is only supported for version-1 superblocks.
The homehost option will change the homehost as recorded in the superblock. For version-0 superblocks, this is the same as updating the UUID. For version-1 superblocks, this involves updating the name.
The resync option will cause the array to be marked dirty meaning that any redundancy in the array (e.g. parity for RAID5, copies for RAID1) may be incorrect. This will cause the RAID system to perform a resync pass to make sure that all redundant information is correct.
The byteorder option allows arrays to be moved between machines with different byte-order. When assembling such an array for the first time after a move, giving --update=byteorder will cause mdadm to expect superblocks to have their byteorder reversed, and will correct that order before assembling the array. This is only valid with original (Version 0.90) superblocks.
The summaries option will correct the summaries in the superblock. That is the counts of total, working, active, failed, and spare devices.
The devicesize option will rarely be of use. It applies to version 1.1 and 1.2 metadata only (where the metadata is at the start of the device) and is only useful when the component device has changed size (typically become larger). The version 1 metadata records the amount of the device that can be used to store data, so if a device in a version 1.1 or 1.2 array becomes larger, the metadata will still be visible, but the extra space will not. In this case it might be useful to assemble the array with --update=devicesize. This will cause mdadm to determine the maximum usable amount of space on each device and update the relevant field in the metadata.
The no-bitmap option can be used when an array has an internal bitmap which is corrupt in some way so that assembling the array normally fails. It will cause any internal bitmap to be ignored.
Reshape can be continued later using the --continue option for the grow command.
-t, --test
  Unless a more serious error occurred, mdadm will exit with a status of 2 if no changes were made to the array and 0 if at least one change was made. This can be useful when an indirect specifier such as missing, detached or faulty is used in requesting an operation on the array. --test will report failure if these specifiers didn't find any match.

-a, --add
  hot-add listed devices. If a device appears to have recently been part of the array (possibly it failed or was removed), the device is re-added as described in the next point. If that fails or the device was never part of the array, the device is added as a hot-spare. If the array is degraded, it will immediately start to rebuild data onto that spare.
Note that this and the following options are only meaningful on arrays with redundancy. They don't apply to RAID0 or Linear.
When used on an array that has no metadata (i.e. it was built with --build) it will be assumed that bitmap-based recovery is enough to make the device fully consistent with the array.
--re-add can be accompanied by --update=devicesize. See the description of this option when used in Assemble mode for an explanation of its use.
If the device name given is missing then mdadm will try to find any device that looks like it should be part of the array but isn't, and will try to re-add all such devices.
-Q, --query
  Examine a device to see (1) if it is an md device and (2) if it is a component of an md array. Information about what is discovered is presented.

-D, --detail
  Print details of one or more md devices.

--detail-platform
  Print details of the platform's RAID capabilities (firmware / hardware topology) for a given metadata format.

-Y, --export
  When used with --detail or --examine, output will be formatted as key=value pairs for easy import into the environment.

-E, --examine
  Print contents of the metadata stored on the named device(s). Note the contrast between --examine and --detail. --examine applies to devices which are components of an array, while --detail applies to a whole array which is currently active.

--sparc2.2
  If an array was created on a SPARC machine with a 2.2 Linux kernel patched with RAID support, the superblock will have been created incorrectly, or at least incompatibly with 2.4 and later kernels. Using the --sparc2.2 flag with --examine will fix the superblock before displaying it. If this appears to do the right thing, then the array can be successfully assembled using --assemble --update=sparc2.2.

-X, --examine-bitmap
  Report information about a bitmap file. The argument is either an external bitmap file or an array component in case of an internal bitmap. Note that running this on an array device (e.g. /dev/md0) does not report the bitmap for that array.

-R, --run
  start a partially assembled array. If --assemble did not find enough devices to fully start the array, it might leave it partially assembled. If you wish, you can then use --run to start the array in degraded mode.

-S, --stop
  deactivate array, releasing all resources.

-o, --readonly
  mark array as readonly.

-w, --readwrite
  mark array as readwrite.

--zero-superblock
  If the device contains a valid md superblock, the block is overwritten with zeros. With --force the block where the superblock would be is overwritten even if it doesn't appear to be valid.

--kill-subarray=
  If the device is a container and the argument to --kill-subarray specifies an inactive subarray in the container, then the subarray is deleted. Deleting all subarrays will leave an empty-container or spare superblock on the drives. See --zero-superblock for completely removing a superblock. Note that some formats depend on the subarray index for generating a UUID; this command will fail if it would change the UUID of an active subarray.

--update-subarray=
  If the device is a container and the argument to --update-subarray specifies a subarray in the container, then attempt to update the given superblock field in the subarray. See below in MISC MODE for details.

-t, --test
  When used with --detail, the exit status of mdadm is set to reflect the status of the device. See below in MISC MODE for details.

-W, --wait
  For each md device given, wait for any resync, recovery, or reshape activity to finish before returning. mdadm will return with success if it actually waited for every device listed, otherwise it will return failure.

--wait-clean
  For each md device given, or each device in /proc/mdstat if --scan is given, arrange for the array to be marked clean as soon as possible. mdadm will return with success if the array uses external metadata and we successfully waited. For native arrays, this returns immediately as the kernel handles dirty-clean transitions at shutdown. No action is taken if safe-mode handling is disabled.
--rebuild-map, -r
  Rebuild the map file (/dev/md/md-device-map) that mdadm uses to help track which arrays are currently being assembled.

--run, -R
  Run any array assembled as soon as a minimal number of devices are available, rather than waiting until all expected devices are present.

--scan, -s
  Only meaningful with -R, this will scan the map file for arrays that are being incrementally assembled and will try to start any that are not already started. If any such array is listed in mdadm.conf as requiring an external bitmap, that bitmap will be attached first.

--fail, -f
  This allows the hot-plug system to remove devices that have fully disappeared from the kernel. It will first fail and then remove the device from any array it belongs to. The device name given should be a kernel device name such as sda, not a name in /dev.

--path=
  Only used with --fail. The path given will be recorded so that if a new device appears at the same location it can be automatically added to the same array. This allows the failed device to be automatically replaced by a new device without metadata if it appears at the specified path. This option is normally only set by a udev script.
-m, --mail
  Give a mail address to send alerts to.

-p, --program, --alert
  Give a program to be run whenever an event is detected.

-y, --syslog
  Cause all events to be reported through syslog. The messages have facility of daemon and varying priorities.

-d, --delay
  Give a delay in seconds. mdadm polls the md arrays and then waits this many seconds before polling again. The default is 60 seconds. Since 2.6.16, there is no need to reduce this as the kernel alerts mdadm immediately when there is any change.

-r, --increment
  Give a percentage increment. mdadm will generate RebuildNN events with the given percentage increment.

-f, --daemonise
  Tell mdadm to run as a background daemon if it decides to monitor anything. This causes it to fork and run in the child, and to disconnect from the terminal. The process id of the child is written to stdout. This is useful with --scan which will only continue monitoring if a mail address or alert program is found in the config file.

-i, --pid-file
  When mdadm is running in daemon mode, write the pid of the daemon process to the specified file, instead of printing it on standard output.

-1, --oneshot
  Check arrays only once. This will generate NewArray events and more significantly DegradedArray and SparesMissing events. Running
    mdadm --monitor --scan -1
  from a cron script will ensure regular notification of any degraded arrays.

-t, --test
  Generate a TestMessage alert for every array found at startup. This alert gets mailed and passed to the alert program. This can be used for testing that alert messages do get through successfully.

--no-sharing
  This inhibits the functionality for moving spares between arrays. Only one monitoring process started with --scan but without this flag is allowed, otherwise the two could interfere with each other.
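Putting these together, a monitoring daemon might be started like this sketch; the mail address, delay, and pid-file path are placeholders:

  mdadm --monitor --scan --daemonise --delay=300 --mail=root@localhost --pid-file=/run/mdadm/monitor.pid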
Usage: mdadm --assemble md-device options-and-component-devices...
Usage: mdadm --assemble --scan md-devices-and-options...
Usage: mdadm --assemble --scan options...

In the first usage example (without the --scan) the first device given is the md device. In the second usage example, all devices listed are treated as md devices and assembly is attempted. In the third (where no devices are listed) all md devices that are listed in the configuration file are assembled. If no arrays are described by the configuration file, then any arrays that can be found on unused devices will be assembled.

If precisely one device is listed, but --scan is not given, then mdadm acts as though --scan was given and identity information is extracted from the configuration file.

The identity can be given with the --uuid option, the --name option, or the --super-minor option, will be taken from the md-device record in the config file, or will be taken from the super block of the first component-device listed on the command line.

Devices can be given on the --assemble command line or in the config file. Only devices which have an md superblock which contains the right identity will be considered for any array.

The config file is only used if explicitly named with --config or requested with (a possibly implicit) --scan. In the latter case, /etc/mdadm.conf or /etc/mdadm/mdadm.conf is used.

If --scan is not given, then the config file will only be used to find the identity of md arrays.

Normally the array will be started after it is assembled. However if --scan is not given and not all expected drives were listed, then the array is not started (to guard against usage errors). To insist that the array be started in this case (as may work for RAID1, 4, 5, 6, or 10), give the --run flag.

If udev is active, mdadm does not create any entries in /dev but leaves that to udev. It does record information in /dev/md/md-device-map which will allow udev to choose the correct name.

If mdadm detects that udev is not configured, it will create the devices in /dev itself.

In Linux kernels prior to version 2.6.28 there were two distinctly different types of md devices that could be created: one that could be partitioned using standard partitioning tools and one that could not. Since 2.6.28 that distinction is no longer relevant as both types of devices can be partitioned. mdadm will normally create the type that originally could not be partitioned as it has a well-defined major number (9).

Prior to 2.6.28, it is important that mdadm chooses the correct type of array device to use. This can be controlled with the --auto option. In particular, a value of mdp or part or p tells mdadm to use a partitionable device rather than the default.

In the no-udev case, the value given to --auto can be suffixed by a number. This tells mdadm to create that number of partition devices rather than the default of 4.

The value given to --auto can also be given in the configuration file as a word starting auto= on the ARRAY line for the relevant array.

Auto Assembly

When --assemble is used with --scan and no devices are listed, and no arrays are listed in the config (other than those marked <ignore>), mdadm will look through the available devices for possible arrays and will try to assemble anything that it finds. Arrays which are tagged as belonging to the given homehost will be assembled and started normally. Arrays which do not obviously belong to this host are given names that are expected not to conflict with anything local, and are started read-auto so that nothing is written to any device until the array is written to; i.e. automatic resync etc. is delayed.

If mdadm finds a consistent set of devices that look like they should comprise an array, and if the superblock is tagged as belonging to the given home host, it will automatically choose a device name and try to assemble the array. If the array uses version-0.90 metadata, then the minor number as recorded in the superblock is used to create a name in /dev/md/, so for example /dev/md/3. If the array uses version-1 metadata, then the name from the superblock is used to similarly create a name in /dev/md/ (the name will have any host prefix stripped first).

This behaviour can be modified by the AUTO line in the mdadm.conf configuration file. This line can indicate that specific metadata types should, or should not, be automatically assembled. If an array is found which is not listed in mdadm.conf and has a metadata format that is denied by the AUTO line, then it will not be assembled. The AUTO line can also request that all arrays identified as being for this homehost should be assembled regardless of their metadata type. See mdadm.conf(5) for further details.

Note: Auto assembly cannot be used for assembling and activating some arrays which are undergoing reshape. In particular, as the backup-file cannot be given, any reshape which requires a backup-file to continue cannot be started by auto assembly. An array which is growing to more devices and has passed the critical section can be assembled using auto-assembly.
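To close this section, a sketch of explicit assembly by identity; the UUID and device names below are placeholders:

  mdadm --assemble /dev/md0 --uuid=00112233:44556677:8899aabb:ccddeeff /dev/sd[b-d]1
  mdadm --assemble --scan    # or assemble everything described by the config file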
Usage: mdadm --build md-device --chunk=X --level=Y --raid-devices=Z devices

This usage is similar to --create. The difference is that it creates an array without a superblock. With these arrays there is no difference between initially creating the array and subsequently assembling the array, except that hopefully there is useful data there in the second case.

The level may be raid0, linear, raid1, raid10, multipath, or faulty, or one of their synonyms. All devices must be listed and the array will be started once complete. It will often be appropriate to use --assume-clean with levels raid1 or raid10.
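A minimal sketch of Build mode; the device names are placeholders, and remember there is no superblock to protect you from mistakes:

  mdadm --build /dev/md0 --level=raid0 --chunk=64 --raid-devices=2 /dev/sdb1 /dev/sdc1
  mdadm --build /dev/md1 --level=raid1 --raid-devices=2 --assume-clean /dev/sdd1 /dev/sde1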
Usage: mdadm --create md-device --chunk=X --level=Y --raid-devices=Z devices

This usage will initialise a new md array, associate some devices with it, and activate the array. The named device will normally not exist when mdadm --create is run, but will be created by udev once the array becomes active.

As devices are added, they are checked to see if they contain RAID superblocks or filesystems. They are also checked to see if the variance in device size exceeds 1%.

If any discrepancy is found, the array will not automatically be run, though the presence of a --run can override this caution.

To create a degraded array in which some devices are missing, simply give the word missing in place of a device name. This will cause mdadm to leave the corresponding slot in the array empty. For a RAID4 or RAID5 array at most one slot can be missing; for a RAID6 array at most two slots. For a RAID1 array, only one real device needs to be given. All of the others can be missing.

When creating a RAID5 array, mdadm will automatically create a degraded array with an extra spare drive. This is because building the spare into a degraded array is in general faster than resyncing the parity on a non-degraded, but not clean, array. This feature can be overridden with the --force option.

When creating an array with version-1 metadata, a name for the array is required. If this is not given with the --name option, mdadm will choose a name based on the last component of the name of the device being created. So if /dev/md3 is being created, then the name 3 will be chosen. If /dev/md/home is being created, then the name home will be used.

When creating a partition-based array, using mdadm with version-1.x metadata, the partition type should be set to 0xDA (non fs-data). This type selection allows for greater precision, since using any other type, such as RAID auto-detect (0xFD) or a GNU/Linux partition (0x83), might create problems in the event of array recovery through a live cdrom.

A new array will normally get a randomly assigned 128bit UUID which is very likely to be unique. If you have a specific need, you can choose a UUID for the array by giving the --uuid= option. Be warned that creating two arrays with the same UUID is a recipe for disaster. Also, using --uuid= when creating a v0.90 array will silently override any --homehost= setting.

When creating an array within a CONTAINER, mdadm can be given either the list of devices to use, or simply the name of the container. The former case gives control over which devices in the container will be used for the array. The latter case allows mdadm to automatically choose which devices to use based on how much spare space is available.

The General Management options that are valid with --create are:

--run
  insist on running the array even if some devices look like they might be in use.

--readonly
  start the array readonly - not supported yet.
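For example, creating a degraded RAID1 with one slot left empty, to be filled later; the device names are placeholders:

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
  mdadm /dev/md0 --add /dev/sdc1    # later, complete the mirror with the second device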
Usage: mdadm device options... devices...

For example:

  mdadm /dev/md0 -f /dev/hda1 -r /dev/hda1 -a /dev/hda1

will firstly mark /dev/hda1 as faulty in /dev/md0, then remove it from the array, and finally add it back in as a spare. However, only one md array can be affected by a single command.

When a device is added to an active array, mdadm checks to see if it has metadata on it which suggests that it was recently a member of the array. If it does, it tries to re-add the device. If there have been no changes since the device was removed, or if the array has a write-intent bitmap which has recorded whatever changes there were, then the device will immediately become a full member of the array and those differences recorded in the bitmap will be resolved.
Usage: mdadm options ... devices ...

--query
  The device is examined to see if it is (1) an active md array, or (2) a component of an md array. The information discovered is reported.

--detail
  The device should be an active md device. mdadm will display a detailed description of the array. --brief or --scan will cause the output to be less detailed and the format to be suitable for inclusion in mdadm.conf. The exit status of mdadm will normally be 0 unless mdadm failed to get useful information about the device(s); however, if the --test option is given, then the exit status will be:

  0  The array is functioning normally.
  1  The array has at least one failed device.
  2  The array has multiple failed devices such that it is unusable.
  4  There was an error while trying to get information about the device.
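Since the exit status encodes the array state, a shell check might look like this sketch; the device name is a placeholder:

  mdadm --detail --test /dev/md0 > /dev/null
  case $? in
      0) echo "md0 OK" ;;
      1) echo "md0 degraded" ;;
      2) echo "md0 unusable: multiple failed devices" ;;
      *) echo "error querying md0" ;;
  esac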
For --detail-platform, the exit status will be:

  0  metadata successfully enumerated its platform components on this system
  1  metadata is platform independent
  2  metadata failed to find its platform components on this system
The name option updates the subarray name in the metadata; it may not affect the device node name or the device node symlink until the subarray is re-assembled. If updating name would change the UUID of an active subarray, this operation is blocked and the command will end in an error.
Having --scan without listing any devices will cause all devices listed in the config file to be examined.
Usage: mdadm --monitor options... devices...

As well as reporting events, mdadm may move a spare drive from one array to another if they are in the same spare-group or domain and if the destination array has a failed drive but no spares.

If any devices are listed on the command line, mdadm will only monitor those devices. Otherwise all arrays listed in the configuration file will be monitored. Further, if --scan is given, then any other md devices that appear in /proc/mdstat will also be monitored.

The result of monitoring the arrays is the generation of events. These events are passed to a separate program (if specified) and may be mailed to a given E-mail address.

When passing events to a program, the program is run once for each event, and is given 2 or 3 command-line arguments: the first is the name of the event (see below), the second is the name of the md device which is affected, and the third is the name of a related device if relevant (such as a component device that has failed).

If --scan is given, then a program or an E-mail address must be specified on the command line or in the config file. If neither are available, then mdadm will not monitor anything. Without --scan, mdadm will continue monitoring as long as something was found to monitor. If no program or email is given, then each event is reported to stdout.

The different events are:

DeviceDisappeared
  An md array which previously was configured appears to no longer be configured. (syslog priority: Critical)
If mdadm was told to monitor an array which is RAID0 or Linear, then it will report DeviceDisappeared with the extra information Wrong-Level. This is because RAID0 and Linear do not support the device-failed, hot-spare and resync operations which are monitored.
Fail
  An active component device of an array has been marked as faulty. (syslog priority: Critical)
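An alert program given with --program receives each event as arguments. A sketch of such a handler follows; the logger tag and the handling logic are illustrative, not part of mdadm:

  #!/bin/sh
  # Hypothetical mdadm alert handler:
  # $1 = event name, $2 = affected md device, $3 = related component device (may be absent)
  event="$1"; array="$2"; component="$3"
  logger -t mdadm-alert "event=$event array=$array component=${component:-none}"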
The GROW mode is used for changing the size or shape of an active array. For this to work, the kernel must support the necessary change. Various types of growth are being added during 2.6 development.

Currently the supported changes include:

• increase or decrease the raid-devices attribute of RAID0, RAID1, RAID4, RAID5, and RAID6.
• change the chunk-size and layout of RAID0, RAID4, RAID5 and RAID6.
• convert between RAID1 and RAID5, between RAID5 and RAID6, between RAID0, RAID4, and RAID5, and between RAID0 and RAID10 (in the near-2 mode).
• add a write-intent bitmap to any array which supports these bitmaps, or remove a write-intent bitmap from such an array.

Using GROW on containers is currently supported only for Intel's IMSM container format. The number of devices in a container can be increased - which affects all arrays in the container - or an array in a container can be converted between levels where those levels are supported by the container, and the conversion is one of those listed above. Resizing arrays in an IMSM container with --grow --size is not yet supported.

Grow functionality (e.g. expanding the number of raid devices) for Intel's IMSM container format has an experimental status. It is guarded by the MDADM_EXPERIMENTAL environment variable which must be set to 1 for a GROW command to succeed. This is for the following reasons:

1. Intel's native IMSM check-pointing is not fully tested yet. This can cause IMSM incompatibility during the grow process: an array which is growing cannot roam between Microsoft Windows(R) and Linux systems.

2. Interrupting a grow operation is not recommended, because it has not been fully tested for Intel's IMSM container format yet.

Note: Intel's native checkpointing doesn't use the --backup-file option and is transparent to the assembly feature.

SIZE CHANGES

Note that when an array changes size, any filesystem that may be stored in the array will not automatically grow or shrink to use or vacate the space. The filesystem will need to be explicitly told to use the extra space after growing, or to reduce its size prior to shrinking the array.

Also the size of an array cannot be changed while it has an active bitmap. If an array has a bitmap, it must be removed before the size can be changed. Once the change is complete a new bitmap can be created.

RAID-DEVICES CHANGES

When reducing the number of devices in a RAID1 array, the slots which are to be removed from the array must already be vacant. That is, the devices which were in those slots must be failed and removed.

When the number of devices is increased, any hot spares that are present will be activated immediately.

Changing the number of active devices in a RAID5 or RAID6 is much more effort. Every block in the array will need to be read and written back to a new location. From 2.6.17, the Linux Kernel is able to increase the number of devices in a RAID5 safely, including restarting an interrupted reshape. From 2.6.31, the Linux Kernel is able to increase or decrease the number of devices in a RAID5 or RAID6.

From 2.6.35, the Linux Kernel is able to convert a RAID0 into a RAID4 or RAID5. mdadm uses this functionality and the ability to add devices to a RAID4 to allow devices to be added to a RAID0. When requested to do this, mdadm will convert the RAID0 to a RAID4, add the necessary disks and make the reshape happen, and then convert the RAID4 back to RAID0.

When decreasing the number of devices, the size of the array will also decrease.
If there was data in the array, it could get destroyed and this is not reversible, so you should firstly shrink the filesystem on the array to fit within the new size. To help prevent accidents, mdadm requires that the size of the array be decreased first with mdadm --grow --array-size. This is a reversible change which simply makes the end of the array inaccessible. The integrity of any data can then be checked before the non-reversible reduction in the number of devices is requested.

When relocating the first few stripes on a RAID5 or RAID6, it is not possible to keep the data on disk completely consistent and crash-proof. To provide the required safety, mdadm disables writes to the array while this critical section is reshaped, and takes a backup of the data that is in that section. For grows, this backup may be stored in any spare devices that the array has, however it can also be stored in a separate file specified with the --backup-file option, and it is required to be specified for shrinks, RAID level changes and layout changes. If this option is used, and the system does crash during the critical period, the same file must be passed to --assemble to restore the backup and reassemble the array. When shrinking rather than growing the array, the reshape is done from the end towards the beginning, so the critical section is at the end of the reshape.

LEVEL CHANGES

When changing the RAID level, --backup-file is required. If the array is not simultaneously being grown or shrunk, so that the array size will remain the same - for example, reshaping a 3-drive RAID5 into a 4-drive RAID6 - the backup file will be used not just for a critical section but throughout the reshape operation, as described below under LAYOUT CHANGES.

CHUNK-SIZE AND LAYOUT CHANGES

A --backup-file must be provided for these changes. Small sections of the array will be copied to the backup file while they are being rearranged. This means that all the data is copied twice, once to the backup and once to the new layout on the array, so this type of reshape will go very slowly. If the reshape is interrupted for any reason, this backup file must be made available to mdadm --assemble so the array can be reassembled. Consequently, the file cannot be stored on the device being reshaped.

BITMAP CHANGES

A write-intent bitmap can be added to, or removed from, an active array using --grow with the --bitmap= option.
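A typical device-count reshape, with the backup file the critical section requires; paths and device names are placeholders:

  mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0-grow.bak
  # if the system crashes mid-reshape, pass the same file back at assembly time:
  mdadm --assemble /dev/md0 --backup-file=/root/md0-grow.bak /dev/sd[b-f]1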
Usage: mdadm --incremental [--run] [--quiet] component-device
Usage: mdadm --incremental --fail component-device
Usage: mdadm --incremental --rebuild-map
Usage: mdadm --incremental --run --scan

This mode is designed to be used in conjunction with a device discovery system. As devices are found, they can be passed to mdadm --incremental to be conditionally added to an appropriate array. Conversely, it can also be used with the --fail flag to do just the opposite: find whatever array a particular device is part of and remove the device from that array.

If the device passed is a CONTAINER device created by a previous call to mdadm, then rather than trying to add that device to an array, all the arrays described by the metadata of the container will be started.

mdadm performs a number of tests to determine if the device is part of an array, and which array it should be part of. If an appropriate array is found, or can be created, mdadm adds the device to the array and conditionally starts the array.

Note that mdadm will normally only add devices to an array which were previously working (active or spare) parts of that array. The support for automatic inclusion of a new drive as a spare in some array requires a configuration through POLICY in the config file.

The tests that mdadm makes are as follows:

+ Is the device permitted by mdadm.conf? That is, is it listed in a DEVICES line in that file? If DEVICES is absent then the default is to allow any device. Similarly, if DEVICES contains the special word partitions then any device is allowed. Otherwise the device name given to mdadm must match one of the names or patterns in a DEVICES line.

+ Does the device have a valid md superblock? If a specific metadata version is requested with --metadata or -e then only that style of metadata is accepted, otherwise mdadm finds any known version of metadata. If no md metadata is found, the device may still be added to an array as a spare if POLICY allows.

mdadm keeps a record of partially assembled arrays in /dev/md/md-device-map. If no array exists which matches the metadata on the new device, mdadm must choose a name for the array, based on mdadm.conf or any name information stored in the metadata. If this name suggests a unit number, that number will be used, otherwise a free unit number will be chosen. Normally, if the CREATE line in mdadm.conf suggests that a non-partitionable array is preferred, that will be honoured. If the array is not found in the config file and its metadata does not identify it as belonging to the homehost, then mdadm will choose a name for the array which is certain not to conflict with any array which does belong to this host. It does this by adding an underscore and a small number to the name preferred by the metadata.

Once an appropriate array is found or created and the device is added, mdadm must decide if the array is ready to be started. It will normally compare the number of available (non-spare) devices to the number of devices that the metadata suggests need to be active. If there are at least that many, the array will be started. This means that if any devices are missing the array will not be restarted.

As an alternative, --run may be passed to mdadm in which case the array will be run as soon as there are enough devices present for the data to be accessible. For a RAID1, that means one device will start the array. For a clean RAID5, the array will be started as soon as all but one drive is present.

Note that neither of these approaches is really ideal. If it can be known that all device discovery has completed, then mdadm -IRs can be run, which will try to start all arrays that are being incrementally assembled. They are started in read-auto mode, in which they are read-only until the first write request.
This means that no metadata updates are made and no attempt at resync or recovery happens. Further devices that are found before the first write can still be added safely.
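In a hot-plug script the calls might look like this sketch; the device names are placeholders:

  mdadm --incremental /dev/sdc1     # device appeared: add it to whatever array it belongs to
  mdadm --incremental --fail sdc    # device vanished: fail and remove it (kernel name, not a /dev path)
  mdadm -IRs                        # discovery finished: start anything still waiting for devices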
This section describes environment variables that affect how mdadm operates.

MDADM_NO_MDMON
  Setting this value to 1 will prevent mdadm from automatically launching mdmon. This variable is intended primarily for debugging mdadm/mdmon.

MDADM_NO_UDEV
  Normally, mdadm does not create any device nodes in /dev, but leaves that task to udev. If udev appears not to be configured, or if this environment variable is set to 1, then mdadm will create any devices that are needed.
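For instance, to force mdadm to create the /dev entries itself on a system without udev (a sketch):

  MDADM_NO_UDEV=1 mdadm --assemble --scan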
mdadm --query /dev/name-of-device
  This will find out if a given device is a RAID array, or is part of one, and will provide brief information about the device.

mdadm --assemble --scan
  This will assemble and start all arrays listed in the standard config file. This command will typically go in a system startup file.

mdadm --stop --scan
  This will shut down all arrays that can be shut down (i.e. are not currently in use). This will typically go in a system shutdown script.

mdadm --follow --scan --delay=120
  If (and only if) there is an Email address or program given in the standard config file, then monitor the status of all arrays listed in that file by polling them every 2 minutes.

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hd[ac]1
  Create /dev/md0 as a RAID1 array consisting of /dev/hda1 and /dev/hdc1.

echo DEVICE /dev/hd*[0-9] /dev/sd*[0-9] > mdadm.conf
mdadm --detail --scan >> mdadm.conf
  This will create a prototype config file that describes currently active arrays that are known to be made from partitions of IDE or SCSI drives. This file should be reviewed before being used as it may contain unwanted detail.

echo DEVICE /dev/hd[a-z] /dev/sd*[a-z] > mdadm.conf
mdadm --examine --scan --config=mdadm.conf >> mdadm.conf
  This will find arrays which could be assembled from existing IDE and SCSI whole drives (not partitions), and store the information in the format of a config file. This file is very likely to contain unwanted detail, particularly the devices= entries. It should be reviewed and edited before being used as an actual config file.

mdadm --examine --brief --scan --config=partitions
mdadm -Ebsc partitions
  Create a list of devices by reading /proc/partitions, scan these for RAID superblocks, and print out a brief listing of all that were found.

mdadm -Ac partitions -m 0 /dev/md0
  Scan all partitions and devices listed in /proc/partitions and assemble /dev/md0 out of all such devices with a RAID superblock with a minor number of 0.

mdadm --monitor --scan --daemonise > /run/mdadm/mon.pid
  If the config file contains a mail address or alert program, run mdadm in the background in monitor mode monitoring all md devices. Also write the pid of the mdadm daemon to /run/mdadm/mon.pid.

mdadm -Iq /dev/somedevice
  Try to incorporate a newly discovered device into some array as appropriate.

mdadm --incremental --rebuild-map --run --scan
  Rebuild the array map from any current arrays, and then start any that can be started.

mdadm /dev/md4 --fail detached --remove detached
  Any devices which are components of /dev/md4 will be marked as faulty and then removed from the array.

mdadm --grow /dev/md4 --level=6 --backup-file=/root/backup-md4
  The array /dev/md4 which is currently a RAID5 array will be converted to RAID6. There should normally already be a spare drive attached to the array as a RAID6 needs one more drive than a matching RAID5.

mdadm --create /dev/md/ddf --metadata=ddf --raid-disks 6 /dev/sd[a-f]
  Create a DDF array over 6 devices.

mdadm --create /dev/md/home -n3 -l5 -z 30000000 /dev/md/ddf
  Create a RAID5 array over any 3 devices in the given DDF set. Use only 30 gigabytes of each device.

mdadm -A /dev/md/ddf1 /dev/sd[a-f]
  Assemble a pre-existing DDF array.

mdadm -I /dev/md/ddf1
  Assemble all arrays contained in the ddf array, assigning names as appropriate.

mdadm --create --help
  Provide help about the Create mode.

mdadm --config --help
  Provide help about the format of the config file.

mdadm --help
  Provide general help.
/proc/mdstat
  If you're using the /proc filesystem, /proc/mdstat lists all active md devices with information about them. mdadm uses this to find arrays when --scan is given in Misc mode, and to monitor array reconstruction in Monitor mode.

/etc/mdadm.conf
  The config file. See mdadm.conf(5) for more details.

/dev/md/md-device-map
  When --incremental mode is used, this file gets a list of arrays currently being created.
mdadm understands two sorts of names for array devices.

The first is the so-called standard format name, which matches the names used by the kernel and which appear in /proc/mdstat.

The second sort can be freely chosen, but must reside in /dev/md/. When giving a device name to mdadm to create or assemble an array, either a full path name such as /dev/md0 or /dev/md/home can be given, or just the suffix of the second sort of name, such as home, can be given.

When mdadm chooses device names during auto-assembly or incremental assembly, it will sometimes add a small sequence number to the end of the name to avoid conflicts between multiple arrays that have the same name. If mdadm can reasonably determine that the array really is meant for this host, either by a hostname in the metadata, or by the presence of the array in mdadm.conf, then it will leave off the suffix if possible. Also, if the homehost is specified as <ignore>, mdadm will only use a suffix if a different array of the same name already exists or is listed in the config file.

The standard names for non-partitioned arrays (the only sort of md array available in 2.4 and earlier) are of the form /dev/mdNN, where NN is a number. The standard names for partitionable arrays (as available from 2.6 onwards) are of the form /dev/md_dNN. Partition numbers should be indicated by adding pMM to these, thus /dev/md/d1p2. From kernel version 2.6.28, the non-partitioned array can actually be partitioned, so the md_dNN names are no longer needed, and partitions such as /dev/mdNNpXX are possible.
mdadm was previously known as mdctl.

mdadm is completely separate from the raidtools package, and does not use the /etc/raidtab configuration file at all.
For further information on mdadm usage, MD and the various levels of RAID, see:

  http://raid.wiki.kernel.org/

(based upon Jakob Østergaard's Software-RAID.HOWTO)

The latest version of mdadm should always be available from:

  http://www.kernel.org/pub/linux/utils/raid/mdadm/

Related man pages: mdmon(8), mdadm.conf(5), md(4), raidtab(5), raid0run(8), raidstop(8), mkraid(8).