Using Software RAID on Linux

Linux currently implements software RAID through the MD (Multiple Devices) virtual block device driver: multiple underlying block devices are combined into one new virtual block device. Striping distributes data blocks evenly across the member disks to improve the virtual device's read/write performance, while the various redundancy algorithms protect user data from being lost completely when one member device fails; after a failed device is replaced, the lost data can be rebuilt onto the new device. For the definitions of the redundancy levels and diagrams of how data and parity blocks are laid out, see the SNIA reference "Common RAID Disk Data Format Specification". MD currently supports linear, multipath, raid0 (striping), raid1 (mirroring), raid4, raid5, raid6, and raid10, and arrays can also be stacked on top of each other to form raid1+0, raid5+1, and similar layouts. The "Software RAID HOWTO" describes the features and usage of early software RAID arrays, but the software RAID code has gained many features since then, so an updated introduction is needed.

This article explains how to manage software RAID with the user-space tool mdadm, along with problems commonly encountered in practice and how to solve them. Popular Linux distributions such as Fedora Core, Gentoo, Ubuntu, Debian, and SUSE Linux generally ship the MD driver either built into the kernel or as a loadable module. After boot, run cat /proc/mdstat to see whether the kernel has loaded the MD driver, cat /proc/devices to check for the md block device, and lsmod to see whether MD is loaded as a module.

```
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities :
unused devices: <none>
[root@fc5 mdadm-2.6.3]# cat /proc/devices | grep md
  1 ramdisk
  9 md
253 mdp
[root@fc5 mdadm-2.6.3]# lsmod | grep md
md_mod                 73364  0
```

If MD is neither built into the kernel nor auto-loaded as a module, /proc/mdstat does not exist, and you need to run modprobe md to load the driver module.

```
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
cat: /proc/mdstat: No such file or directory
[root@fc5 mdadm-2.6.3]# modprobe md
[root@fc5 mdadm-2.6.3]# lsmod | grep md
md_mod                 73364  0
```

If the system has no MD driver module at all, download a source package from the Linux kernel source site, enable the following options in the kernel configuration, and rebuild the kernel.

```
[*] Multiple devices driver support (RAID and LVM)
<*>   RAID support
<M>     Linear (append) mode
<M>     RAID-0 (striping) mode
<M>     RAID-1 (mirroring) mode
<M>     RAID-10 (mirrored striping) mode (EXPERIMENTAL)
<M>     RAID-4/RAID-5/RAID-6 mode
[*]       Support adding drives to a raid-5 array
<M>     Multipath I/O support
<M>     Faulty test module for MD
```

In user space, MD devices used to be managed with the raidtools package; today the mdadm program is widely used and is included in most Linux distributions. If it is not installed, you can download the source package from the personal site of Neil Brown, the maintainer of both the RAID driver and mdadm, and build it, or install an RPM package directly. At the time of writing, the latest mdadm release is 2.6.3; mdadm --version shows the version installed on your system. This article uses mdadm-2.6.3 on Linux kernel 2.6.22.1, and the commands below were run in a virtual machine environment.

```
[root@fc5 mdadm-2.6.3]# uname -r
2.6.22.1
[root@fc5 mdadm-2.6.3]# ./mdadm --version
mdadm - v2.6.3 - 20th August 2007
```

2. Managing software RAID arrays with mdadm

mdadm is a single standalone program that performs all software RAID management tasks. It has seven main modes of operation:

| Mode | Purpose |
|----------|----------------------------------------------|
| Create | Create a new array from unused devices; each device carries a metadata block |
| Assemble | Reassemble the block devices that previously belonged to an array into a running array |
| Build | Create or assemble an array that needs no metadata; the devices carry no metadata block |
| Manage | Manage devices in an existing array, e.g. add a hot-spare disk, or mark a disk as failed and then remove it from the array |
| Misc | Report or modify information about devices in an array, e.g. query array or device status |
| Grow | Change the amount of each device used by the array, or the number of devices in the array |
| Monitor | Monitor one or more arrays and report specified events |

2.1 Partitioning the disks

If the MD driver is built into the kernel, it automatically scans at boot for partitions whose type is fd (Linux raid autodetect). It is therefore common to partition the hd or sd disks with fdisk and set the partition type to fd.

```
[root@fc5 mdadm-2.6.3]# fdisk /dev/sdk
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-512, default 1): 1
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-512, default 512): 512
Using default value 512
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): FD
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@fc5 mdadm-2.6.3]# fdisk -l /dev/sdk
Disk /dev/sdk: 1073 MB, 1073741824 bytes
128 heads, 32 sectors/track, 512 cylinders
Units = cylinders of 4096 * 512 = 2097152 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdk1               1         512     1048560   fd  Linux raid autodetect
```

In fact, a software RAID array can use any standard block device as an underlying device: SCSI or IDE disks, RAM disks, NBD (Network Block Device) devices, and even other MD devices.

If the MD driver is loaded as a module, the RAID arrays have to be started at boot time by user-space scripts. On Fedora Core, for example, /etc/rc.d/rc.sysinit contains the commands that start software RAID arrays: if the configuration file mdadm.conf exists, mdadm reads the entries in it and then starts the arrays.

```
echo "raidautorun /dev/md0" | nash --quiet
if [ -f /etc/mdadm.conf ]; then
    /sbin/mdadm -A -s
fi
```

2.2 Creating a new array

mdadm creates a new array with --create (abbreviated -C), writing key array identification information as metadata into a reserved region of each underlying device. --level (-l) selects the RAID level. --chunk (-c) sets the size of each stripe unit in KB (the default is 64KB); the chunk size strongly affects the array's read/write performance under different workloads. --raid-devices (-n) sets the number of active devices in the array, and --spare-devices (-x) the number of hot-spare disks. As soon as a member disk fails, the MD kernel driver automatically pulls a hot spare into the array and rebuilds the failed disk's data onto it.

```
Create a RAID 0 device:
mdadm --create /dev/md0 --level=0 --chunk=32 --raid-devices=3 /dev/sd[i-k]1
Create a RAID 1 device:
mdadm -C /dev/md0 -l1 -c128 -n2 -x1 /dev/sd[i-k]1
Create a RAID 5 device:
mdadm -C /dev/md0 -l5 -n5 /dev/sd[c-g] -x1 /dev/sdb
Create a RAID 6 device:
mdadm -C /dev/md0 -l6 -n5 /dev/sd[c-g] -x2 /dev/sdb /dev/sdh
Create a RAID 10 device:
mdadm -C /dev/md0 -l10 -n6 /dev/sd[b-g] -x1 /dev/sdh
Create a RAID 1+0 device:
mdadm -C /dev/md0 -l1 -n2 /dev/sdb /dev/sdc
mdadm -C /dev/md1 -l1 -n2 /dev/sdd /dev/sde
mdadm -C /dev/md2 -l1 -n2 /dev/sdf /dev/sdg
mdadm -C /dev/md3 -l0 -n3 /dev/md0 /dev/md1 /dev/md2
```
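The usable capacity implied by each of these layouts can be estimated from the level, the number of active devices, and the per-device size. A minimal sketch (the helper name is mine; the raid10 case assumes the default 2 near-copies, and metadata overhead is ignored):

```shell
#!/bin/sh
# usable_kb LEVEL NDEVS DEVSIZE_KB
# Rough usable capacity of an MD array, ignoring superblock overhead.
usable_kb() {
    level=$1; n=$2; s=$3
    case $level in
        0)  echo $((n * s)) ;;        # striping: all capacity is usable
        1)  echo "$s" ;;              # mirroring: one copy's worth
        5)  echo $(((n - 1) * s)) ;;  # one disk's worth of parity
        6)  echo $(((n - 2) * s)) ;;  # two disks' worth of parity
        10) echo $((n * s / 2)) ;;    # assumes 2 near-copies
        *)  echo "unknown level" >&2; return 1 ;;
    esac
}

# 6 active devices of 1048512 KB each, as in the raid10 example:
usable_kb 10 6 1048512    # prints 3145536, matching /proc/mdstat
```

The same arithmetic explains why the raid5 examples later in this article report 5242560 blocks for six 1048512 KB members.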

Once a RAID 1/4/5/6/10 array has been created, the parity or mirror information for every stripe must be computed and written to the member disks, so the array goes through an initial synchronization of the redundancy data (resync). The MD device can be read and written by upper layers as soon as it is created, although I/O from above reduces the synchronization speed. How long initialization takes depends on the performance of the disks and on the application load; cat /proc/mdstat shows the array's current rebuild speed and the estimated time to completion.

```
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sdh[6](S) sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
      3145536 blocks 64K chunks 2 near-copies [6/6] [UUUUUU]
      [===>...........]  resync = 15.3% (483072/3145536) finish=0.3min speed=120768K/sec

unused devices: <none>
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sdh[6](S) sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
      3145536 blocks 64K chunks 2 near-copies [6/6] [UUUUUU]

unused devices: <none>
```
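The resync progress can also be read out of such output by a script, for example to wait for initialization to finish before running benchmarks. A sketch assuming the /proc/mdstat resync line format shown above (the helper name is mine):

```shell
#!/bin/sh
# resync_pct FILE: print the resync/recovery percentage found in a
# /proc/mdstat snapshot, or "done" if no such progress line exists.
resync_pct() {
    awk '/resync|recovery/ {
             for (i = 1; i <= NF; i++)
                 if ($i ~ /^[0-9.]+%$/) { sub(/%/, "", $i); print $i; found = 1 }
         }
         END { if (!found) print "done" }' "$1"
}
```

In practice one might poll it in a loop: `while [ "$(resync_pct /proc/mdstat)" != done ]; do sleep 10; done`.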

A block device that is already in use by another MD device or by a filesystem cannot be used to create a new MD device.

```
[root@fc5 mdadm-2.6.3]# ./mdadm -C /dev/md1 -l0 -n2 /dev/sdh /dev/sdi
mdadm: Cannot open /dev/sdh: Device or resource busy
mdadm: create aborted
```

Build mode can create RAID0/1 devices without metadata; it cannot create MD devices at redundant levels such as RAID4/5/6/10.

```
[root@fc5 mdadm-2.6.3]# ./mdadm -BR /dev/md0 -l0 -n6 /dev/sd[b-g]
mdadm: array /dev/md0 built and started.
[root@fc5 mdadm-2.6.3]# ./mdadm -BR /dev/md0 -l1 -n2 /dev/sd[b-c]
mdadm: array /dev/md0 built and started.
[root@fc5 mdadm-2.6.3]# ./mdadm -BR /dev/md0 -l5 -n6 /dev/sd[b-g]
mdadm: Raid level 5 not permitted with --build.
[root@fc5 mdadm-2.6.3]# ./mdadm -BR /dev/md0 -l6 -n6 /dev/sd[b-g]
mdadm: Raid level 6 not permitted with --build.
[root@fc5 mdadm-2.6.3]# ./mdadm -BR /dev/md0 -l10 -n6 /dev/sd[b-g]
mdadm: Raid level 10 not permitted with --build.
```

Using the array:

An MD device can be read and written directly like any ordinary block device, or formatted with a filesystem.

```
# mkfs.ext3 /dev/md0
# mkdir -p /mnt/md-test
# mount /dev/md0 /mnt/md-test
```

Stopping a running array:

When an array is not in use by a filesystem, another storage application, or a higher-level device, it can be stopped with --stop (abbreviated -S). If the command returns a "Device or resource busy" error, /dev/md0 is still being used by an upper layer and cannot be stopped yet; the upper-layer user must be stopped first, which also keeps the data on the array consistent.

```
[root@fc5 mdadm-2.6.3]# ./mdadm --stop /dev/md0
mdadm: fail to stop array /dev/md0: Device or resource busy
[root@fc5 mdadm-2.6.3]# umount /dev/md0
[root@fc5 mdadm-2.6.3]# ./mdadm --stop /dev/md0
mdadm: stopped /dev/md0
```

2.3 Assembling a previously created array

Assemble mode (--assemble, abbreviated -A) reads the metadata on the underlying devices and assembles them back into an active array. If we already know which devices the array consists of, we can name them explicitly to start it.

```
[root@fc5 mdadm-2.6.3]# ./mdadm -A /dev/md0 /dev/sd[b-h]
mdadm: /dev/md0 has been started with 6 drives and 1 spare.
```

With a configuration file (/etc/mdadm.conf), the command mdadm -As /dev/md0 can be used: mdadm first reads the DEVICE entries in mdadm.conf, then reads the metadata from each device and checks it against the ARRAY entries; if they match, the array is started. Without /etc/mdadm.conf, and without knowing which disks make up the array, you can use --examine (abbreviated -E) to check each block device for array metadata.

```
[root@fc5 mdadm-2.6.3]# ./mdadm -E /dev/sdi
mdadm: No md superblock detected on /dev/sdi.
[root@fc5 mdadm-2.6.3]# ./mdadm -E /dev/sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 0cabc5e5:842d4baa:e3f6261b:a17a477a
  Creation Time : Sun Aug 22 17:49:53 1999
     Raid Level : raid10
  Used Dev Size : 1048512 (1024.11 MiB 1073.68 MB)
     Array Size : 3145536 (3.00 GiB 3.22 GB)
   Raid Devices : 6
  Total Devices : 7
Preferred Minor : 0

    Update Time : Sun Aug 22 18:05:56 1999
          State : clean
 Active Devices : 6
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 1
       Checksum : 2f056516 - correct
         Events : 0.4

         Layout : near=2, far=1
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       16        0      active sync   /dev/sdb

   0     0       8       16        0      active sync   /dev/sdb
   1     1       8       32        1      active sync   /dev/sdc
   2     2       8       48        2      active sync   /dev/sdd
   3     3       8       64        3      active sync   /dev/sde
   4     4       8       80        4      active sync   /dev/sdf
   5     5       8       96        5      active sync   /dev/sdg
   6     6       8      112        6      spare   /dev/sdh
```

The output above gives the array's unique UUID and the names of its member devices; the array can then be assembled with the earlier command, or assembled by its UUID. mdadm automatically skips devices whose metadata does not match (for example /dev/sda and /dev/sda1).

```
[root@fc5 mdadm-2.6.3]# ./mdadm -Av --uuid=0cabc5e5:842d4baa:e3f6261b:a17a477a /dev/md0 /dev/sd*
mdadm: looking for devices for /dev/md0
mdadm: no recogniseable superblock on /dev/sda
mdadm: /dev/sda has wrong uuid.
mdadm: no recogniseable superblock on /dev/sda1
mdadm: /dev/sda1 has wrong uuid.
mdadm: no RAID superblock on /dev/sdi
mdadm: /dev/sdi has wrong uuid.
mdadm: /dev/sdi1 has wrong uuid.
mdadm: no RAID superblock on /dev/sdj
mdadm: /dev/sdj has wrong uuid.
mdadm: /dev/sdj1 has wrong uuid.
mdadm: no RAID superblock on /dev/sdk
mdadm: /dev/sdk has wrong uuid.
mdadm: /dev/sdk1 has wrong uuid.
mdadm: /dev/sdb is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdc is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdd is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sde is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdf is identified as a member of /dev/md0, slot 4.
mdadm: /dev/sdg is identified as a member of /dev/md0, slot 5.
mdadm: /dev/sdh is identified as a member of /dev/md0, slot 6.
mdadm: added /dev/sdc to /dev/md0 as 1
mdadm: added /dev/sdd to /dev/md0 as 2
mdadm: added /dev/sde to /dev/md0 as 3
mdadm: added /dev/sdf to /dev/md0 as 4
mdadm: added /dev/sdg to /dev/md0 as 5
mdadm: added /dev/sdh to /dev/md0 as 6
mdadm: added /dev/sdb to /dev/md0 as 0
mdadm: /dev/md0 has been started with 6 drives and 1 spare.
```

The configuration file:

/etc/mdadm.conf is the default configuration file. Its main purpose is to keep track of the software RAID configuration, in particular the monitoring and event-reporting options. Assemble mode also accepts --config (abbreviated -c) to name a configuration file. A configuration file is commonly created with the following commands.

```
[root@fc5 mdadm-2.6.3]# echo DEVICE /dev/sd[b-h] /dev/sd[i-k]1 > /etc/mdadm.conf
[root@fc5 mdadm-2.6.3]# ./mdadm -Ds >> /etc/mdadm.conf
[root@fc5 mdadm-2.6.3]# cat /etc/mdadm.conf
DEVICE /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
        /dev/sdi1 /dev/sdj1 /dev/sdk1
ARRAY /dev/md1 level=raid0 num-devices=3 UUID=dcff6ec9:53c4c668:58b81af9:ef71989d
ARRAY /dev/md0 level=raid10 num-devices=6 spares=1 UUID=0cabc5e5:842d4baa:e3f6261b:a17a477a
```
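Beyond DEVICE and ARRAY lines, mdadm.conf also accepts monitoring-related keywords, so that a monitor started with --scan picks up the mail address and event handler from the file instead of the command line. A sketch of such a file (the address and script path are placeholders reused from the examples in this article):

```
DEVICE /dev/sd[b-h] /dev/sd[i-k]1
ARRAY /dev/md0 level=raid10 num-devices=6 UUID=0cabc5e5:842d4baa:e3f6261b:a17a477a
MAILADDR root@localhost
PROGRAM /root/md.sh
```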

When starting arrays from the configuration file, mdadm reads the devices and arrays listed there and starts every array it is able to run; if a specific array device name is given, only that array is started.

```
[root@fc5 mdadm-2.6.3]# ./mdadm -As
mdadm: /dev/md1 has been started with 3 drives.
mdadm: /dev/md0 has been started with 6 drives and 1 spare.
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid0] [raid10]
md0 : active raid10 sdb[0] sdh[6](S) sdg[5] sdf[4] sde[3] sdd[2] sdc[1]
      3145536 blocks 64K chunks 2 near-copies [6/6] [UUUUUU]

md1 : active raid0 sdi1[0] sdk1[2] sdj1[1]
      7337664 blocks 32k chunks

unused devices: <none>
[root@fc5 mdadm-2.6.3]# ./mdadm -S /dev/md0 /dev/md1
mdadm: stopped /dev/md0
mdadm: stopped /dev/md1
[root@fc5 mdadm-2.6.3]# ./mdadm -As /dev/md0
mdadm: /dev/md0 has been started with 6 drives and 1 spare.
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid0] [raid10]
md0 : active raid10 sdb[0] sdh[6](S) sdg[5] sdf[4] sde[3] sdd[2] sdc[1]
      3145536 blocks 64K chunks 2 near-copies [6/6] [UUUUUU]

unused devices: <none>
```

2.4 Querying array status

cat /proc/mdstat shows the status of all running RAID arrays. The first line of each entry gives the MD device name; active or inactive, indicating whether the array can be read and written; the array's RAID level; and the member block devices, where the number in square brackets [] is each device's slot in the array, (S) marks a hot spare, and (F) marks a faulty disk. The second line gives the array size in KB, the chunk size, and the layout type, which differs between RAID levels. [6/6] with [UUUUUU] means the array has 6 disks and all 6 are running normally, while [6/5] with [_UUUUU] means only 5 of the 6 disks are running, and the disk at the position of the underscore is faulty.

```
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md0 : active raid5 sdh[6](S) sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
      5242560 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]

unused devices: <none>
[root@fc5 mdadm-2.6.3]# ./mdadm /dev/md0 -f /dev/sdh /dev/sdb
mdadm: set /dev/sdh faulty in /dev/md0
mdadm: set /dev/sdb faulty in /dev/md0
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md0 : active raid5 sdh[6](F) sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[7](F)
      5242560 blocks level 5, 64k chunk, algorithm 2 [6/5] [_UUUUU]

unused devices: <none>
```
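The [n/m] and [U_] markers are easy to check from a script, e.g. in a periodic health check. A sketch that prints the 0-based slot number of every failed position, assuming the /proc/mdstat line format shown above (the helper name is mine):

```shell
#!/bin/sh
# degraded_slots FILE: for each status line in an mdstat snapshot,
# print the 0-based slot numbers whose position shows '_' (failed).
degraded_slots() {
    awk 'match($0, /\[[U_]+\]/) {
             flags = substr($0, RSTART + 1, RLENGTH - 2)
             for (i = 1; i <= length(flags); i++)
                 if (substr(flags, i, 1) == "_") print i - 1
         }' "$1"
}
```

Run against /proc/mdstat, an empty result means every member of every array is up.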

If the system supports sysfs, array information can also be queried under /sys/block/md0.

```
[root@fc5 mdadm-2.6.3]# ls /sys/block/md0/
capability  dev  holders  md  range  removable  size  slaves  stat  subsystem  uevent
[root@fc5 mdadm-2.6.3]# ls /sys/block/md0/md/
array_state      dev-sdb  dev-sdc  dev-sdd  dev-sde  dev-sdf  dev-sdg  dev-sdh
bitmap_set_bits  chunk_size  component_size  layout  level  metadata_version
mismatch_cnt     new_dev  raid_disks  rd0  rd1  rd2  rd3  rd4  rd5
reshape_position resync_start  safe_mode_delay  suspend_hi  suspend_lo
sync_action      sync_completed  sync_speed  sync_speed_max  sync_speed_min
[root@fc5 mdadm-2.6.3]# ls /sys/block/md0/slaves/
sdb  sdc  sdd  sde  sdf  sdg  sdh
```

mdadm can also report a brief summary of a given array (--query, abbreviated -Q) or full details (--detail, abbreviated -D). The detailed report includes the metadata version, creation time, RAID level, array capacity, per-device size, device counts, superblock state, update time, UUID, the state of each device, the RAID algorithm and layout, and the chunk size. Device states include active, sync, spare, faulty, rebuilding, and removing.

```
[root@fc5 mdadm-2.6.3]# ./mdadm --query /dev/md0
/dev/md0: 2.100GiB raid10 6 devices, 1 spare. Use mdadm --detail for more detail.
[root@fc5 mdadm-2.6.3]# ./mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Sun Aug 22 17:49:53 1999
     Raid Level : raid10
     Array Size : 3145536 (3.00 GiB 3.22 GB)
  Used Dev Size : 1048512 (1024.11 MiB 1073.68 MB)
   Raid Devices : 6
  Total Devices : 7
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun Aug 22 21:55:02 1999
          State : clean
 Active Devices : 6
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 1

         Layout : near=2, far=1
     Chunk Size : 64K

           UUID : 0cabc5e5:842d4baa:e3f6261b:a17a477a
         Events : 0.122

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
       4       8       80        4      active sync   /dev/sdf
       5       8       96        5      active sync   /dev/sdg
       6       8      112        -      spare   /dev/sdh
```

2.5 Managing the array

In Manage mode, mdadm can add and remove disks in a running array. This is commonly used to mark a failed disk, add a hot-spare disk, or remove a disk that has already failed. --fail (abbreviated -f) marks a disk as failed.

```
[root@fc5 mdadm-2.6.3]# ./mdadm /dev/md0 --fail /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
```

Once a disk has been marked as failed, --remove (abbreviated -r) pulls it out of the array; a device that is still actively used by the array cannot be removed.

```
[root@fc5 mdadm-2.6.3]# ./mdadm /dev/md0 --remove /dev/sdb
mdadm: hot removed /dev/sdb
[root@fc5 mdadm-2.6.3]# ./mdadm /dev/md0 --remove /dev/sde
mdadm: hot remove failed for /dev/sde: Device or resource busy
```

If the array has a spare disk, the data of a failed disk is automatically rebuilt onto the spare:

```
[root@fc5 mdadm-2.6.3]# ./mdadm -f /dev/md0 /dev/sdb ; cat /proc/mdstat
mdadm: set /dev/sdb faulty in /dev/md0
Personalities : [raid0] [raid10]
md0 : active raid10 sdh[6] sdb[7](F) sdc[0] sdg[5] sdf[4] sde[3] sdd[2]
      3145536 blocks 64K chunks 2 near-copies [6/5] [U_UUUU]
      [=======>........]  recovery = 35.6% (373888/1048512) finish=0.1min speed=93472K/sec

unused devices: <none>
```

If the array has no hot-spare disk, use --add (abbreviated -a) to add one:

```
[root@fc5 mdadm-2.6.3]# ./mdadm /dev/md0 --add /dev/sdh
mdadm: added /dev/sdh
```

2.6 Monitoring the array

mdadm can also monitor RAID arrays: the monitoring process periodically polls for the specified events and handles them as configured. For example, it can send mail to the administrator when a disk in an array fails, or a callback program can replace the disk automatically; all monitored events can be recorded in the system log. The events mdadm currently supports are RebuildStarted, RebuildNN (NN is 20, 40, 60, or 80), RebuildFinished, Fail, FailSpare, SpareActive, NewArray, DegradedArray, MoveSpare, SparesMissing, and TestMessage.

In the configuration below, the mdadm monitor polls the MD device every 300 seconds; when the array reports an error, it sends mail to the specified user, runs the event-handler program, and records the reported event in the system log. The --daemonise parameter (abbreviated -f in Monitor mode) keeps the program running in the background. Sending mail requires a running sendmail; if the mail address is external, first verify that mail can actually be delivered.

```
[root@fc5 mdadm-2.6.3]# ./mdadm --monitor --mail=root@localhost --program=/root/md.sh \
      --syslog --delay=300 /dev/md0 --daemonise
```

The system log then shows which events occurred on which array or member device.

```
[root@fc5 mdadm-2.6.3]# mdadm -f /dev/md0 /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
[root@fc5 mdadm-2.6.3]# tail -f /var/log/messages
Aug 22 22:04:12 fc5 mdadm: RebuildStarted event detected on md device /dev/md0
Aug 22 22:04:12 fc5 kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
Aug 22 22:04:12 fc5 kernel: md: using 128k window, over a total of 1048512 blocks.
Aug 22 22:04:14 fc5 mdadm: Fail event detected on md device /dev/md0, component device /dev/sdb
Aug 22 22:04:14 fc5 mdadm: Rebuild80 event detected on md device /dev/md0
Aug 22 22:04:16 fc5 mdadm: RebuildFinished event detected on md device /dev/md0
Aug 22 22:04:16 fc5 mdadm: SpareActive event detected on md device /dev/md0, component device /dev/sdh
Aug 22 22:04:16 fc5 kernel: md: md0: recovery done.
```

The callback program receives two or three arguments from mdadm: the event name, the name of the monitored array, and, for events that involve one, the name of the underlying block device. For the events above, it would receive:

```
Eventname: RebuildStarted   Device: /dev/md0   next:
Eventname: Fail             Device: /dev/md0   next: /dev/sdb
Eventname: Rebuild80        Device: /dev/md0   next:
Eventname: RebuildFinished  Device: /dev/md0   next:
Eventname: SpareActive      Device: /dev/md0   next: /dev/sdh
```
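A handler suitable for --program can be sketched as follows. mdadm only supplies the two or three positional arguments just described; everything else in this script (the function name, the log text, which events to react to) is illustrative:

```shell
#!/bin/sh
# Sketch of an event handler for "mdadm --monitor --program=...".
# mdadm invokes it as: handler EVENT MD-DEVICE [COMPONENT-DEVICE]
handle_md_event() {
    event=$1; array=$2; component=$3
    echo "Eventname: $event Device: $array next: $component"
    case $event in
        Fail|FailSpare|DegradedArray)
            # a real handler could page an admin or hot-add a cold spare
            echo "ALERT: $array degraded ($component)" >&2 ;;
        RebuildFinished|SpareActive)
            echo "INFO: $array recovered" >&2 ;;
    esac
}

handle_md_event "$@"
```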

2.7 Growing the array

If you do not want the array to use each block device in its entirety, you can limit the amount of each device used when creating the array.

```
mdadm -CR /dev/md0 -l5 -n6 /dev/sd[b-g] -x1 /dev/sdh --size=102400
```

Later, when the array needs to grow, Grow mode (--grow, abbreviated -G) together with --size (abbreviated -z) and a suitable value enlarges the portion of each block device that the array uses.

```
[root@fc5 mdadm-2.6.3]# ./mdadm -Q /dev/md0
/dev/md0: 500.00MiB raid5 6 devices, 1 spare. Use mdadm --detail for more detail.
[root@fc5 mdadm-2.6.3]# ./mdadm --grow /dev/md0 --size=204800
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid0] [raid10] [raid6] [raid5] [raid4]
md0 : active raid5 sdh[6](S) sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
      1024000 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
      [============>......]  resync = 69.6% (144188/204800) finish=0.0min speed=10447K/sec

unused devices: <none>
[root@fc5 mdadm-2.6.3]# ./mdadm -Q /dev/md0
/dev/md0: 1000.00MiB raid5 6 devices, 1 spare. Use mdadm --detail for more detail.
```

If a filesystem (ext2, ext3, reiserfs) sits on top of the array, it must be grown as well after the device grows.

```
[root@fc5 mdadm-2.6.3]# df -h | grep md
/dev/md0              485M   11M  449M   3% /mnt/md-test
[root@fc5 mdadm-2.6.3]# ext2online /dev/md0
[root@fc5 mdadm-2.6.3]# df -h | grep md
/dev/md0              969M   11M  909M   2% /mnt/md-test
```

mdadm can also increase or decrease the number of devices in an array (a reshape): use --grow together with --raid-disks (abbreviated -n) and the new device count. After the reshape below, the former hot spare becomes an active disk, so the array has one more device and its capacity grows accordingly.

```
[root@fc5 mdadm-2.6.3]# ./mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Sun Aug 22 22:16:19 1999
     Raid Level : raid5
     Array Size : 1024000 (1000.17 MiB 1048.58 MB)
  Used Dev Size : 204800 (200.03 MiB 209.72 MB)
   Raid Devices : 6
  Total Devices : 7
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun Aug 22 22:23:46 1999
          State : clean
 Active Devices : 6
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 53e6395c:1af16258:087cb2a0:b66b087f
         Events : 0.12

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
       4       8       80        4      active sync   /dev/sdf
       5       8       96        5      active sync   /dev/sdg
       6       8      112        -      spare   /dev/sdh
[root@fc5 mdadm-2.6.3]# ./mdadm --grow /dev/md0 --raid-disks=7
mdadm: Need to backup 1920K of critical section..
mdadm: ... critical section passed.
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid0] [raid10] [raid6] [raid5] [raid4]
md0 : active raid5 sdh[6] sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
      1024000 blocks super 0.91 level 5, 64k chunk, algorithm 2 [7/7] [UUUUUUU]
      [===>.............]
```
reshape = 19.4% (40256/204800) finish=0.7min speed=3659K/sec unused devices: <none> [root@fc5 mdadm-2.6.3]# ./mdadm -D /dev/md0 /dev/md0: Version : 00.91.03 Creation Time : Sun Aug 22 22:16:19 1999 Raid Level : raid5 Array Size : 1024000 (1000.17 MiB 1048.58 MB) Used Dev Size : 204800 (200.03 MiB 209.72 MB) Raid Devices : 7 Total Devices : 7 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Sun Aug 22 22:26:46 1999 State : clean, recovering Active Devices : 7 Working Devices : 7 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 64K Reshape Status : 25% complete Delta Devices : 1, (6->7) UUID : 53e6395c:1af16258:087cb2a0:b66b087f Events : 0.76 Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 8 32 1 active sync /dev/sdc 2 8 48 2 active sync /dev/sdd 3 8 64 3 active sync /dev/sde 4 8 80 4 active sync /dev/sdf 5 8 96 5 active sync /dev/sdg 6 8 112 6 active sync /dev/sdh [root@fc5 mdadm-2.6.3]# cat /proc/mdstat Personalities : [raid0] [raid10] [raid6] [raid5] [raid4] md0 : active raid5 sdh[6] sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0] 1228800 blocks level 5, 64k chunk, algorithm 2 [7/7] [UUUUUUU] unused devices: <none> [root@fc5 mdadm-2.6.3]# ./mdadm -D /dev/md0 /dev/md0: Version : 00.90.03 Creation Time : Sun Aug 22 22:16:19 1999 Raid Level : raid5 Array Size : 1228800 (1200.20 MiB 1258.29 MB) Used Dev Size : 204800 (200.03 MiB 209.72 MB) Raid Devices : 7 Total Devices : 7 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Sun Aug 22 22:37:11 1999 State : clean Active Devices : 7 Working Devices : 7 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 64K UUID : 53e6395c:1af16258:087cb2a0:b66b087f Events : 0.204 Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 8 32 1 active sync /dev/sdc 2 8 48 2 active sync /dev/sdd 3 8 64 3 active sync /dev/sde 4 8 80 4 active sync /dev/sdf 5 8 96 5 active sync /dev/sdg 6 8 112 6 active sync /dev/sdh |

2.8 Bitmap records

Bitmap mode records which chunks of the RAID array have already been synchronized (resynced). The --bitmap option (abbreviated -b) names the file in which the bitmap is stored; if the argument is the keyword internal, the bitmap is kept in each member device's metadata area instead. --bitmap-chunk sets how much of the RAID device, in KB, each bit represents, and --delay (abbreviated -d) sets how often, in seconds, the bitmap is flushed to the file or device (the default is 5 seconds). --force (abbreviated -f) overwrites an existing bitmap file. Finally, --examine-bitmap (abbreviated -X) displays the bitmap information stored in a file or in a device's metadata.
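The relationship between --bitmap-chunk and the bitmap size reported by mdadm -X can be checked with simple arithmetic: one bit is kept per chunk of the per-device sync size. A minimal sketch, using the sizes that appear in the examples in this section (the one-bit-per-chunk division is the only assumption):

```shell
# One bitmap bit per --bitmap-chunk of the sync size (both in KB).
# 4 KB chunks over a 1048512 KB device -> 262128 bits, as mdadm -X reports.
sync_size_kb=1048512
chunk_kb=4
bits_4k=$(( sync_size_kb / chunk_kb ))
echo "$bits_4k bits"

# 8 KB chunks over a 2096384 KB raid1 member -> 262048 bits.
bits_8k=$(( 2096384 / 8 ))
echo "$bits_8k bits"
```

A larger --bitmap-chunk therefore means a smaller bitmap but coarser-grained resync after a failure.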

If bitmap mode is specified when the array is created and the array is stopped while it is still initializing, then on the next start the array can use the bitmap record to resume the resync from where it was interrupted.

[root@fc5 mdadm-2.6.3]# ./mdadm -CR /dev/md1 -l1 -n2 /dev/sdi1 /dev/sdj1 --bitmap=internal
mdadm: array /dev/md1 started.
[root@fc5 tests]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md1 : active raid1 sdj1[1] sdi1[0]
      2096384 blocks [2/2] [UU]
      [========>......]  resync = 51.2% (1075072/2096384) finish=0.1min speed=153581K/sec
      bitmap: 128/128 pages [512KB], 8KB chunk
unused devices: <none>
[root@fc5 tests]# ./mdadm -X /dev/sdi1
        Filename : /dev/sdi1
           Magic : 6d746962
         Version : 4
            UUID : bcccddb7:0f529abd:672e1f66:7e68bbc8
          Events : 1
  Events Cleared : 1
           State : OK
       Chunksize : 8 KB
          Daemon : 5s flush period
      Write Mode : Normal
       Sync Size : 2096384 (2047.59 MiB 2146.70 MB)
          Bitmap : 262048 bits (chunks), 262048 dirty (100.0%)
[root@fc5 tests]# ./mdadm --stop /dev/md1
mdadm: stopped /dev/md1
[root@fc5 tests]# ./mdadm -A /dev/md1 /dev/sd[i-k]1 --bitmap=internal ; cat /proc/mdstat
mdadm: there is no need to specify --bitmap when assembling arrays with internal bitmaps
mdadm: /dev/md1 has been started with 2 drives and 1 spare.
Personalities : [raid6] [raid5] [raid4] [raid1]
md1 : active raid1 sdi1[0] sdk1[2](S) sdj1[1]
      1048448 blocks [2/2] [UU]
      [==============>...]  resync = 87.6% (919616/1048448) finish=0.0min speed=89408K/sec
      bitmap: 27/128 pages [108KB], 4KB chunk
unused devices: <none>
[root@fc5 tests]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md1 : active raid1 sdj1[1] sdi1[0]
      2096384 blocks [2/2] [UU]
      bitmap: 0/128 pages [0KB], 8KB chunk
unused devices: <none>
[root@fc5 tests]# mdadm -X /dev/sdi1
        Filename : /dev/sdi1
           Magic : 6d746962
         Version : 4
            UUID : bcccddb7:0f529abd:672e1f66:7e68bbc8
          Events : 4
  Events Cleared : 4
           State : OK
       Chunksize : 8 KB
          Daemon : 5s flush period
      Write Mode : Normal
       Sync Size : 2096384 (2047.59 MiB 2146.70 MB)
          Bitmap : 262048 bits (chunks), 0 dirty (0.0%)
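The "pages" figure shown by /proc/mdstat can also be estimated. As a hedged sketch (it assumes the md bitmap code keeps roughly a 16-bit in-memory counter per chunk, packed into 4 KB pages, which is an implementation detail not stated in this article):

```shell
# Rough page-count estimate: ~2 bytes of in-memory state per chunk,
# packed into 4 KB pages (ceiling division).
chunks=262048
pages=$(( (chunks * 2 + 4095) / 4096 ))
echo "$pages pages"
```

This matches the "128/128 pages [512KB]" line in the raid1 example above.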

When a bitmap file is used, the file must not reside on the RAID array itself or on any of its member devices, and the bitmap file name must be given again when assembling the array.

[root@fc5 mdadm-2.6.3]# ./mdadm -CR /dev/md0 -l5 -n6 /dev/sd[b-g] -x1 /dev/sdh --bitmap=/tmp/md0-bm --bitmap-chunk=4 --delay=1 --force
mdadm: array /dev/md0 started.
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat ; ./mdadm -X /tmp/md0-bm
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdh[6](S) sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
      5242560 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
      [===========>.......]  resync = 64.3% (675748/1048512) finish=0.7min speed=7848K/sec
      bitmap: 128/128 pages [512KB], 4KB chunk, file: /tmp/md0-bm
unused devices: <none>
        Filename : /tmp/md0-bm
           Magic : 6d746962
         Version : 4
            UUID : d2f46320:40f1e154:08d7a21a:4cc9a9c1
          Events : 1
  Events Cleared : 1
           State : OK
       Chunksize : 4 KB
          Daemon : 1s flush period
      Write Mode : Normal
       Sync Size : 1048512 (1024.11 MiB 1073.68 MB)
          Bitmap : 262128 bits (chunks), 262128 dirty (100.0%)
[root@fc5 mdadm-2.6.3]# ./mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@fc5 mdadm-2.6.3]# ./mdadm -A /dev/md0 /dev/sd[b-h] --bitmap=/tmp/md0-bm ; cat /proc/mdstat ; ./mdadm -X /tmp/md0-bm
mdadm: /dev/md0 has been started with 6 drives and 1 spare.
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb[0] sdh[6](S) sdg[5] sdf[4] sde[3] sdd[2] sdc[1]
      5242560 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
      [=============>.....]  resync = 70.5% (739884/1048512) finish=0.7min speed=6539K/sec
      bitmap: 41/128 pages [164KB], 4KB chunk, file: /tmp/md0-bm
unused devices: <none>
        Filename : /tmp/md0-bm
           Magic : 6d746962
         Version : 4
            UUID : d2f46320:40f1e154:08d7a21a:4cc9a9c1
          Events : 3
  Events Cleared : 3
           State : OK
       Chunksize : 4 KB
          Daemon : 1s flush period
      Write Mode : Normal
       Sync Size : 1048512 (1024.11 MiB 1073.68 MB)
          Bitmap : 262128 bits (chunks), 83696 dirty (31.9%)
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat ; ./mdadm -X /tmp/md0-bm
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb[0] sdh[6](S) sdg[5] sdf[4] sde[3] sdd[2] sdc[1]
      5242560 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
      bitmap: 0/128 pages [0KB], 4KB chunk, file: /tmp/md0-bm
unused devices: <none>
        Filename : /tmp/md0-bm
           Magic : 6d746962
         Version : 4
            UUID : d2f46320:40f1e154:08d7a21a:4cc9a9c1
          Events : 6
  Events Cleared : 6
           State : OK
       Chunksize : 4 KB
          Daemon : 1s flush period
      Write Mode : Normal
       Sync Size : 1048512 (1024.11 MiB 1073.68 MB)
          Bitmap : 262128 bits (chunks), 0 dirty (0.0%)

While the array is in degraded (degrade) mode, the bitmap records which chunks have been written. When the temporarily failed disk is added back with the --re-add option, the array rebuilds only the chunks modified in the meantime, greatly reducing reconstruction time. The dirty count in the bitmap information shows how many chunks were modified while the array was degraded.
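The 20.0% dirty figure in the example below can be sanity-checked. The dd command writes 1 GiB to a 6-disk raid5, whose data is spread over 5 data disks, so each member sees roughly a fifth of the write; with 4 KB bitmap chunks that is about 52400 dirty chunks out of 262128. A back-of-the-envelope sketch, where the even 5-way split is the assumption:

```shell
# 1 GiB written to a 6-disk raid5 spreads over its 5 data disks.
written_kb=$(( 1024 * 1024 ))
per_member_kb=$(( written_kb / 5 ))
chunk_kb=4
dirty=$(( per_member_kb / chunk_kb ))
echo "~$dirty dirty chunks"   # mdadm -X below reports 52432 (20.0%)
```

The small difference from the reported 52432 comes from chunk-boundary rounding and superblock updates.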

[root@fc5 mdadm-2.6.3]# ./mdadm /dev/md0 -f /dev/sdb /dev/sdh
mdadm: set /dev/sdb faulty in /dev/md0
mdadm: set /dev/sdh faulty in /dev/md0
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdh[6](F) sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[7](F)
      5242560 blocks level 5, 64k chunk, algorithm 2 [6/5] [_UUUUU]
      bitmap: 0/128 pages [0KB], 4KB chunk, file: /tmp/md0-bm
unused devices: <none>
[root@fc5 mdadm-2.6.3]# ./mdadm -X /tmp/md0-bm
        Filename : /tmp/md0-bm
           Magic : 6d746962
         Version : 4
            UUID : 3ede3bc0:adb1a404:49a18eed:f1b5c89a
          Events : 8
  Events Cleared : 1
           State : OK
       Chunksize : 4 KB
          Daemon : 1s flush period
      Write Mode : Normal
       Sync Size : 1048512 (1024.11 MiB 1073.68 MB)
          Bitmap : 262128 bits (chunks), 0 dirty (0.0%)
[root@fc5 mdadm-2.6.3]# dd if=/dev/zero of=/dev/md0 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 11.9995 seconds, 89.5 MB/s
[root@fc5 mdadm-2.6.3]# ./mdadm -X /tmp/md0-bm
        Filename : /tmp/md0-bm
           Magic : 6d746962
         Version : 4
            UUID : 3ede3bc0:adb1a404:49a18eed:f1b5c89a
          Events : 10
  Events Cleared : 1
           State : OK
       Chunksize : 4 KB
          Daemon : 1s flush period
      Write Mode : Normal
       Sync Size : 1048512 (1024.11 MiB 1073.68 MB)
          Bitmap : 262128 bits (chunks), 52432 dirty (20.0%)
[root@fc5 mdadm-2.6.3]# ./mdadm /dev/md0 -r /dev/sdb --re-add /dev/sdb
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat ; ./mdadm -X /tmp/md0-bm
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb[0] sdh[6](F) sdg[5] sdf[4] sde[3] sdd[2] sdc[1]
      5242560 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
      bitmap: 0/128 pages [0KB], 4KB chunk, file: /tmp/md0-bm
unused devices: <none>
        Filename : /tmp/md0-bm
           Magic : 6d746962
         Version : 4
            UUID : 3ede3bc0:adb1a404:49a18eed:f1b5c89a
          Events : 24
  Events Cleared : 24
           State : OK
       Chunksize : 4 KB
          Daemon : 1s flush period
      Write Mode : Normal
       Sync Size : 1048512 (1024.11 MiB 1073.68 MB)
          Bitmap : 262128 bits (chunks), 0 dirty (0.0%)
