Well, let's limit the damage by ensuring that Volume0 and Volume1 use the same disks, so the others can sleep.
Warning: this must be redone at each boot. It may be automated in a future version of ADM; at the time of writing we are at version 3.5.2RAG2.
1) Find all the md volumes on your system, including those you did not create yourself.
ls -l /dev/md*
You'll find md0, but in my case also md126. The operation must therefore be done on each of those volumes. From here on I'll use my own setup as the example; it's up to you to adapt it to yours.
2) Get the disks that never sleep from volume1.
Code: Select all
# mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Sat Jul 18 09:07:12 2015
Raid Level : raid5
Array Size : 15618877440 (14895.32 GiB 15993.73 GB)
Used Dev Size : 7809438720 (7447.66 GiB 7996.87 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Sat Nov 7 01:55:28 2020
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : AS6508T-D0C0:1 (local to host AS6508T-D0C0)
UUID : c7b86c6c:0679c9a4:0562baab:4c420d30
Events : 17390
Number   Major   Minor   RaidDevice State
   3       8       36        0      active sync   /dev/sdc4
   2       8       52        1      active sync   /dev/sdd4
   4       8       84        2      active sync   /dev/sdf4
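If you want to pull just the disk names out of that output, a small pipeline can help. This is only a sketch: the sample text is hard-coded from the excerpt above, and your device names will differ; on the NAS you would pipe the real `mdadm --detail /dev/md1` output in instead.
Code: Select all

```shell
# Sample "mdadm --detail" device lines, copied from the output above.
detail='    3       8       36        0      active sync   /dev/sdc4
    2       8       52        1      active sync   /dev/sdd4
    4       8       84        2      active sync   /dev/sdf4'

# Keep the last field of each "active sync" line, then strip the
# trailing partition number to get the whole-disk names.
disks=$(printf '%s\n' "$detail" \
  | awk '/active sync/ {print $NF}' \
  | sed 's/[0-9]*$//')
printf '%s\n' "$disks"
```

In my case this yields /dev/sdc, /dev/sdd and /dev/sdf: the three disks that never sleep.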
3) Look at your md0 configuration
Code: Select all
# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Jul 18 09:06:58 2015
Raid Level : raid1
Array Size : 2095104 (2046.34 MiB 2145.39 MB)
Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
Raid Devices : 8
Total Devices : 7
Persistence : Superblock is persistent
Update Time : Thu Nov 5 08:41:19 2020
State : clean, degraded
Active Devices : 7
Working Devices : 7
Failed Devices : 0
Spare Devices : 0
Name : AS6508T-D0C0:0 (local to host AS6508T-D0C0)
UUID : f68bd32e:6c63499d:34fbd66a:b7abf589
Events : 69721
Number   Major   Minor   RaidDevice State
   8       8       34        0      active sync   /dev/sdc2
  11       8       66        1      active sync   /dev/sde2
   9       8       50        2      active sync   /dev/sdd2
  10       8        2        3      active sync   /dev/sda2
  12       8       18        4      active sync   /dev/sdb2
  13       8      114        5      active sync   /dev/sdh2
  14       8       98        6      active sync   /dev/sdg2
  14       0        0       14     removed
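To work out which md0 members to fail, take the set difference between md0's partitions and the disks found in step 2. Here is a sketch with my device names hard-coded; substitute your own lists:
Code: Select all

```shell
# Disks that back md1/volume1 (from step 2) - these stay awake anyway.
keep='sdc sdd sdf'

# Partitions currently active in md0 (from the listing above).
md0_members='sdc2 sde2 sdd2 sda2 sdb2 sdh2 sdg2'

# Every md0 member living on a disk outside the keep list should be
# failed and removed so that disk can spin down.
to_remove=''
for p in $md0_members; do
  disk=${p%[0-9]}                      # sde2 -> sde
  case " $keep " in
    *" $disk "*) ;;                    # also backs md1: keep it
    *) to_remove="$to_remove /dev/$p" ;;
  esac
done
echo "fail/remove:$to_remove"
```

For my setup this produces exactly the five partitions used in the next commands.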
Now fail and remove from md0 the partitions living on disks that volume1 does not use, and add the missing volume1 partition (sdf2):
Code: Select all
mdadm /dev/md0 --fail /dev/sde2 /dev/sda2 /dev/sdb2 /dev/sdh2 /dev/sdg2
mdadm /dev/md0 --remove /dev/sde2 /dev/sda2 /dev/sdb2 /dev/sdh2 /dev/sdg2
mdadm /dev/md0 --add /dev/sdf2
For this to work you need to reduce the raid-devices count of the array; otherwise any spare would immediately be promoted back to an active disk, since the array would be considered incomplete:
Code: Select all
mdadm --grow /dev/md0 --raid-devices=3
Then re-add the removed partitions as spares:
Code: Select all
mdadm /dev/md0 --add-spare /dev/sde2 /dev/sda2 /dev/sdb2 /dev/sdh2 /dev/sdg2
Finally, check the result:
Code: Select all
# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Jul 18 09:06:58 2015
Raid Level : raid1
Array Size : 2095104 (2046.34 MiB 2145.39 MB)
Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
Raid Devices : 3
Total Devices : 8
Persistence : Superblock is persistent
Update Time : Thu Nov 5 08:55:51 2020
State : clean
Active Devices : 3
Working Devices : 8
Failed Devices : 0
Spare Devices : 5
Name : AS6508T-D0C0:0 (local to host AS6508T-D0C0)
UUID : f68bd32e:6c63499d:34fbd66a:b7abf589
Events : 69778
Number   Major   Minor   RaidDevice State
   8       8       34        0      active sync   /dev/sdc2
  10       8       82        1      active sync   /dev/sdf2
   9       8       50        2      active sync   /dev/sdd2
   3       8       66        -      spare         /dev/sde2
   4       8        2        -      spare         /dev/sda2
   5       8       18        -      spare         /dev/sdb2
   6       8      114        -      spare         /dev/sdh2
   7       8       98        -      spare         /dev/sdg2
Now the "uncontrolled" activity of volume0 and volume1 will only prevent the same three disks (in my case) from going to sleep. The others will sleep whenever they can.
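Since all of this has to be redone at each boot, the whole sequence can be wrapped in a small script. This is only a sketch: the device names are from my AS6508T, and the DRY_RUN guard is my own addition so you can preview the commands before letting them touch your arrays.
Code: Select all

```shell
#!/bin/sh
# Replay the md0 reshuffle at boot. DRY_RUN=1 (the default here)
# only prints the commands; set DRY_RUN=0 to actually run them.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run mdadm /dev/md0 --fail   /dev/sde2 /dev/sda2 /dev/sdb2 /dev/sdh2 /dev/sdg2
run mdadm /dev/md0 --remove /dev/sde2 /dev/sda2 /dev/sdb2 /dev/sdh2 /dev/sdg2
run mdadm /dev/md0 --add    /dev/sdf2
run mdadm --grow /dev/md0 --raid-devices=3
run mdadm /dev/md0 --add-spare /dev/sde2 /dev/sda2 /dev/sdb2 /dev/sdh2 /dev/sdg2
```

How you get the NAS to launch it at boot depends on your ADM version; I make no claim about the right hook for that here.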