RAID 5 (software RAID): creating the array, simulating a failed disk, removing it, and adding a new disk
1. Check the disk partitions. RAID 5 requires at least three member devices.
Disk /dev/sdb: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x71dbe158

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048      206847      102400   83  Linux
/dev/sdb2          206848      411647      102400   83  Linux
/dev/sdb3          411648      616447      102400   83  Linux
2. Create the RAID 5 array with the mdadm command.
[root@bogon ~]# mdadm -C /dev/md5 -n3 -l5 /dev/sdb{1,2,3}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
3. Inspect the array. The output below shows that our RAID 5 array has been created.
[root@bogon ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Wed Jun 12 22:38:30 2019
        Raid Level : raid5
        Array Size : 200704 (196.00 MiB 205.52 MB)
     Used Dev Size : 100352 (98.00 MiB 102.76 MB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Wed Jun 12 22:38:32 2019
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : bogon:5  (local to host bogon)
              UUID : 4b0810bc:460a99c0:9d06b842:8ebfcad9
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2
       3       8       19        2      active sync   /dev/sdb3
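The Array Size reported above follows directly from RAID 5 geometry: usable capacity is (n − 1) × per-device size, because one device's worth of space in each stripe holds parity. A quick sanity check against the numbers in the `mdadm -D` output:

```shell
# RAID 5 usable capacity = (number of devices - 1) * per-device size.
# Both values below are taken from the mdadm -D output above (in KiB).
devices=3
dev_size_kib=100352                                # "Used Dev Size"
array_size_kib=$(( (devices - 1) * dev_size_kib ))
echo "$array_size_kib"                             # matches the reported Array Size: 200704
```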
4. Create a filesystem on the array (I chose xfs here) and mount it.
[root@bogon ~]# mkfs.xfs /dev/md5
meta-data=/dev/md5               isize=512    agcount=8, agsize=6272 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=50176, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=624, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Mount it:
[root@bogon ~]# mount /dev/md5 /mnt/disk1/
Verify that the mount succeeded:
[root@bogon ~]# df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        17G  4.5G   13G  27% /
devtmpfs                devtmpfs  470M     0  470M   0% /dev
tmpfs                   tmpfs     487M     0  487M   0% /dev/shm
tmpfs                   tmpfs     487M  8.6M  478M   2% /run
tmpfs                   tmpfs     487M     0  487M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  166M  849M  17% /boot
tmpfs                   tmpfs      98M  8.0K   98M   1% /run/user/42
tmpfs                   tmpfs      98M   28K   98M   1% /run/user/0
/dev/sr0                iso9660   4.3G  4.3G     0 100% /run/media/root/CentOS 7 x86_64
/dev/md5                xfs       194M   11M  184M   6% /mnt/disk1
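Neither the array assembly nor the mount above survives a reboot on its own. A sketch of the two config entries that make them persistent, assuming the standard /etc/mdadm.conf and /etc/fstab locations on CentOS 7 (the ARRAY line is what `mdadm --detail --scan` would emit, with the UUID from the output above):

```
# /etc/mdadm.conf -- assemble the array by UUID at boot
ARRAY /dev/md5 metadata=1.2 name=bogon:5 UUID=4b0810bc:460a99c0:9d06b842:8ebfcad9

# /etc/fstab -- remount the filesystem automatically
/dev/md5  /mnt/disk1  xfs  defaults  0 0
```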
5. Simulate a disk failure.
[root@bogon ~]# mdadm -f /dev/md5 /dev/sdb3
mdadm: set /dev/sdb3 faulty in /dev/md5
Check the array status again:
[root@bogon ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Wed Jun 12 22:38:30 2019
        Raid Level : raid5
        Array Size : 200704 (196.00 MiB 205.52 MB)
     Used Dev Size : 100352 (98.00 MiB 102.76 MB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Wed Jun 12 22:51:07 2019
             State : clean, degraded
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : bogon:5  (local to host bogon)
              UUID : 4b0810bc:460a99c0:9d06b842:8ebfcad9
            Events : 20

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2
       -       0        0        2      removed

       3       8       19        -      faulty   /dev/sdb3
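The filesystem stays usable in this degraded state because RAID 5 stores, for each stripe, the XOR of the data blocks as parity, so any one missing block can be recomputed from the surviving ones. A toy sketch with single bytes (the values are made up for illustration, not taken from the array):

```shell
# Toy illustration of RAID 5 parity reconstruction (hypothetical byte values):
# parity = d1 XOR d2; if the disk holding d2 dies, d2 = parity XOR d1.
d1=$(( 0xA5 )); d2=$(( 0x3C ))
parity=$(( d1 ^ d2 ))
lost=$d2                      # pretend this block sat on the failed disk
rebuilt=$(( parity ^ d1 ))    # recompute it from parity + the surviving block
printf 'lost=0x%02X rebuilt=0x%02X\n' "$lost" "$rebuilt"   # both print 0x3C
```

The same identity is why exactly one failed device is tolerated: with two blocks of a stripe missing, the single parity value is no longer enough to solve for both.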
Remove the failed disk. Afterwards we can see that sdb3 is gone from the RAID 5 array:
[root@bogon ~]# mdadm -r /dev/md5 /dev/sdb3
mdadm: hot removed /dev/sdb3 from /dev/md5
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2
       -       0        0        2      removed
6. Add a new disk and verify.
[root@bogon ~]# mdadm -a /dev/md5 /dev/sdb3
mdadm: added /dev/sdb3
The newly added sdb3 now appears in the RAID 5 array:
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2
       3       8       19        2      active sync   /dev/sdb3