When I bought my AS6604T I skipped buying any NVMes, figuring I could always add them later and migrate the content. Due to what appear to be well-known issues with constant disk activity, I have decided to do this now rather than wait. As I currently have no external backup of my data, I would like to do this without any data loss.

I do not mind doing things manually via the CLI, but I am a little in the dark when it comes to determining a decent approach. On any other Linux-based system I would simply clone and manipulate partitions and update fstab as needed, but I have never manually handled RAID arrays before. I have searched the forums but have not yet found a detailed description of how to go about this. I have four disks configured as a RAID 6 array; details are below.
I am hoping it is as simple as the following (still a little fuzzy on the details), but I am expecting it is not:
- boot an alternative OS from USB (Ubuntu 21.04 seems to support all the hardware on the AS6604T, including the NICs/ethernet)
- manually create a RAID 1 of the added NVMes
- create a partition on each NVMe matching the partition under /dev/md126 (swap)
- create a partition on each NVMe matching the partition under /dev/md0 (/volume0)
- enable and mount the new /volume0 replacement and copy the needed data from /dev/md0 to it
- update configuration files as needed
- manually delete the now-unneeded partitions under the old swap and old /volume0
- for each disk:
  - manually grow the data partition to fill the entire disk
  - let mdadm rebuild/resync the RAID array
- reboot into ADM
- temporarily remove the existing disks
- add the NVMes
- boot the NAS and run through the initial setup
- boot the live USB
- add the old /volume1 as /volume2 to the config
- update configuration files as needed regarding apps, shares, etc.
- reboot
I would like to hear your thoughts on this and any suggestions would be appreciated.
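To make the fuzzy middle steps concrete, here is roughly what I picture for partitioning the NVMes, building the mirrors, and copying the data. Everything below is untested; the device names, md numbers, partition sizes, and mount points are placeholders I made up, and I have no idea yet whether ADM expects a specific layout or metadata version:

```shell
# Assumed device names -- verify with `lsblk` first, they may differ.
NVME1=/dev/nvme0n1
NVME2=/dev/nvme1n1

# Partition each NVMe: p1 sized like the existing swap members (~2 GiB),
# p2 taking the rest for the /volume0 replacement. fd00 = Linux RAID.
sgdisk --new=1:0:+2G --typecode=1:fd00 "$NVME1"
sgdisk --new=2:0:0   --typecode=2:fd00 "$NVME1"
sgdisk --new=1:0:+2G --typecode=1:fd00 "$NVME2"
sgdisk --new=2:0:0   --typecode=2:fd00 "$NVME2"

# New RAID 1 arrays on the NVMes (md numbers chosen arbitrarily here).
mdadm --create /dev/md2 --level=1 --raid-devices=2 "${NVME1}p1" "${NVME2}p1"
mdadm --create /dev/md3 --level=1 --raid-devices=2 "${NVME1}p2" "${NVME2}p2"

# Format, then copy the system volume preserving ownership, ACLs and xattrs.
mkswap /dev/md2
mkfs.ext4 /dev/md3
mkdir -p /mnt/newvol0 /mnt/oldvol0
mount /dev/md3 /mnt/newvol0
mount /dev/md0 /mnt/oldvol0
rsync -aHAX --numeric-ids /mnt/oldvol0/ /mnt/newvol0/
```

For the later "grow" step, I assume it ends with `mdadm --grow /dev/md1 --size=max` followed by `resize2fs /dev/md1`, but only after each member partition on the spinners has actually been enlarged, which is the part I am least sure how to do safely.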
# Related information
```
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid6 sdb4[0] sdd4[3] sdc4[2] sda4[1]
      7804860416 blocks super 1.2 level 6, 64k chunk, algorithm 2 [4/4] [UUUU]

md126 : active raid1 sdb3[0] sdd3[3] sdc3[2] sda3[1]
      2095104 blocks super 1.2 [4/4] [UUUU]

md0 : active raid1 sdb2[0] sdd2[3] sdc2[2] sda2[1]
      2095104 blocks super 1.2 [4/4] [UUUU]

unused devices: <none>
```
```
# mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 name=AS6604T-9BE5:0 UUID=ba74ff1a:9e85a8a3:928042d3:818f91b1
ARRAY /dev/md126 metadata=1.2 name=asnas:126 UUID=e8dda194:e95b4870:7864b217:3cc62540
ARRAY /dev/md1 metadata=1.2 name=AS6604T-9BE5:1 UUID=252e792c:fcf8064d:5c2537c7:52719d18
```
```
# cat /proc/mounts | grep /dev/md
/dev/md0 /volume0 ext4 rw,relatime,data=ordered 0 0
/dev/md1 /volume1 ext4 rw,relatime,stripe=32,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/md1 /share/home ext4 rw,relatime,stripe=32,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/md1 /share/Web ext4 rw,relatime,stripe=32,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/md1 /share/Video ext4 rw,relatime,stripe=32,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/md1 /share/Docker ext4 rw,relatime,stripe=32,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/md1 /share/Download ext4 rw,relatime,stripe=32,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/md1 /share/Misc ext4 rw,relatime,stripe=32,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/md1 /share/Isos ext4 rw,relatime,stripe=32,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/md1 /volume1/.@plugins/AppCentral/docker-ce/docker_lib ext4 rw,relatime,stripe=32,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
```
```
# blkid | grep swap
/dev/md126: UUID="f6091418-39e4-437f-8478-d4d7eae7bfe7" TYPE="swap"
```
```
# mdadm --detail /dev/md126
/dev/md126:
           Version : 1.2
     Creation Time : Tue Jul  6 19:08:41 2021
        Raid Level : raid1
        Array Size : 2095104 (2046.34 MiB 2145.39 MB)
     Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Tue Jul 20 16:44:35 2021
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

              Name : asnas:126  (local to host asnas)
              UUID : e8dda194:e95b4870:7864b217:3cc62540
            Events : 20

    Number   Major   Minor   RaidDevice State
       0       8       19        0      active sync   /dev/sdb3
       1       8        3        1      active sync   /dev/sda3
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3
```
```
# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Jul  6 19:08:36 2021
        Raid Level : raid1
        Array Size : 2095104 (2046.34 MiB 2145.39 MB)
     Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Wed Jul 21 11:30:29 2021
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

              Name : AS6604T-9BE5:0
              UUID : ba74ff1a:9e85a8a3:928042d3:818f91b1
            Events : 54

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8        2        1      active sync   /dev/sda2
       2       8       34        2      active sync   /dev/sdc2
       3       8       50        3      active sync   /dev/sdd2
```
```
# mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Tue Jul  6 19:29:43 2021
        Raid Level : raid6
        Array Size : 7804860416 (7443.30 GiB 7992.18 GB)
     Used Dev Size : 3902430208 (3721.65 GiB 3996.09 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Wed Jul 21 11:39:49 2021
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 64K

              Name : AS6604T-9BE5:1
              UUID : 252e792c:fcf8064d:5c2537c7:52719d18
            Events : 15663

    Number   Major   Minor   RaidDevice State
       0       8       20        0      active sync   /dev/sdb4
       1       8        4        1      active sync   /dev/sda4
       2       8       36        2      active sync   /dev/sdc4
       3       8       52        3      active sync   /dev/sdd4
```