Check the status of md1
[root@rhel1 ~]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Wed Dec  7 19:05:53 2016
     Raid Level : raid5
     Array Size : 2087936 (2039.34 MiB 2138.05 MB)
  Used Dev Size : 1043968 (1019.67 MiB 1069.02 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Wed Dec  7 19:07:26 2016
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 1
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : rhel1.lab.com:1  (local to host rhel1.lab.com)
           UUID : fcef1f48:223e87c7:53e9eecc:d5f55e79
         Events : 19

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       0        0        1      removed
       3       8       65        2      active sync   /dev/sde1

       1       8       49        -      faulty spare   /dev/sdd1
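For a quicker look at the array state, you can also check /proc/mdstat. The exact output depends on your kernel and array layout, but for this array it would look roughly like the sketch below, where (F) marks the failed member and [3/2] [U_U] shows that only two of the three devices are up:

[root@rhel1 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md1 : active raid5 sde1[3] sdd1[1](F) sdc1[0]
      2087936 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]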
You can see that /dev/sdd1 is now marked as faulty. Remove it from the array using the command below:
[root@rhel1 ~]# mdadm /dev/md1 -r /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md1
Once you have replaced the faulty Linux RAID disk, you need to make the new disk active in the array. First, partition the disk the same way you did originally when setting up the RAID array.
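As a minimal sketch, assuming the replacement disk also appears as /dev/sdd: one common way to reproduce the original partition layout is to copy it from a surviving member with sfdisk, and then hot-add the new partition to the array with mdadm (adjust the device names to match your system):

[root@rhel1 ~]# sfdisk -d /dev/sdc | sfdisk /dev/sdd
[root@rhel1 ~]# mdadm /dev/md1 -a /dev/sdd1

mdadm will start rebuilding onto /dev/sdd1 automatically. You can watch the recovery progress with cat /proc/mdstat, and mdadm -D /dev/md1 should report the state as clean again once the resync finishes.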