[Solved] Adding NVMe and migrating ADM

ndl101
Posts: 57
Joined: Sun Jul 11, 2021 4:32 pm

[Solved] Adding NVMe and migrating ADM

Post by ndl101 »

Hi.

When I bought my AS6604T I skipped buying any NVMes, as I figured I could always add them later on and migrate the content. Due to what appear to be well-known issues with constant disk activity, I decided to do this now rather than wait. As I currently have no external backup of my data, I would like to do this without any data loss. I do not mind doing things manually via the CLI, but I am a little in the dark when it comes to determining a decent approach. On any other Linux-based system I would normally just clone and manipulate partitions and update fstab as needed, but I have not manually handled RAID arrays before. I have searched the forums but have not yet found a detailed description of how to go about this. I have 4 disks configured as a RAID 6 array; details are below.

I am hoping it is as simple as the following (which is a little fuzzy on the details), but I am expecting it is not. A rough command sketch of what I mean follows the list:
  • boot an alternative OS from USB (Ubuntu 21.04 seems to support all hardware including the NICs/ethernet on the AS6604T)
  • manually create a RAID 1 of the added NVMes
  • create a partition on each NVMe matching the partition under /dev/md126 (swap)
  • create a partition on each NVMe matching the partition under /dev/md0 (/volume0)
  • enable and mount the new volume0 replacement and copy the needed data from /dev/md0 to it
  • update configuration files as needed
  • manually delete the unneeded partitions under the old swap and the old /volume0
  • for each disk
    • manually grow the data partition to fill the entire disk
    • let mdadm rebuild/resync the RAID array
  • reboot into ADM
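To make the first plan a bit more concrete, this is roughly what I imagine the commands would look like when booted from a live USB. It is only an untested sketch: the NVMe device names, partition numbers, the 2 GiB sizes and the md10/md11 array names are my assumptions, and I have no idea whether ADM will accept arrays that were not created by its own tools:

Code: Select all

# UNTESTED sketch - device names, partition numbers, sizes and array names are guesses.
# (The small 255 MiB partition 1 that ADM also creates is not handled here.)

# 1) create system and swap partitions on each NVMe, mirroring the existing layout
sgdisk -n 2:0:+2G -t 2:FD00 -n 3:0:+2G -t 3:FD00 /dev/nvme0n1
sgdisk -n 2:0:+2G -t 2:FD00 -n 3:0:+2G -t 3:FD00 /dev/nvme1n1

# 2) build RAID 1 pairs across the NVMes for the new system volume and swap
mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p2
mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/nvme0n1p3 /dev/nvme1n1p3
mkfs.ext4 /dev/md10
mkswap /dev/md11

# 3) assemble the existing arrays and copy the old system volume onto the new mirror
mdadm --assemble --scan
mkdir -p /mnt/oldvol0 /mnt/newvol0
mount /dev/md0 /mnt/oldvol0
mount /dev/md10 /mnt/newvol0
rsync -aHAX /mnt/oldvol0/ /mnt/newvol0/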
Looking at this, I tend to think it might just be easier to:
  • temporarily remove the existing disks
  • add the NVMes
  • boot the NAS and run through the initial setup
  • boot a live USB
  • add the old /volume1 as /volume2 to the config
  • update configuration files as needed regarding apps, shares etc.
  • reboot
Worth mentioning: I have not been able to locate an mdadm.conf (a sketch of how I would bring up and inspect the existing arrays without one follows below).
I would like to hear your thoughts on this, and any suggestions would be appreciated.
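And for the second plan, this is how I would expect to find and verify the old arrays without an mdadm.conf, since mdadm can work from the on-disk superblocks alone (again just an untested sketch; the md device numbers may differ once ADM has been re-initialised on the NVMes):

Code: Select all

# UNTESTED sketch - list the arrays described by the superblocks on the attached disks
mdadm --examine --scan          # prints ARRAY lines with name= and UUID=
mdadm --assemble --scan         # assemble whatever was found
cat /proc/mdstat                # confirm the old RAID 6 is up and which mdX it became
mdadm --detail /dev/md1         # details and UUID of the old data volume
mkdir -p /mnt/oldvol1
mount /dev/md1 /mnt/oldvol1     # check the data is intact before touching any config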

# Related information

Code: Select all

# cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md1 : active raid6 sdb4[0] sdd4[3] sdc4[2] sda4[1]
      7804860416 blocks super 1.2 level 6, 64k chunk, algorithm 2 [4/4] [UUUU]
      
md126 : active raid1 sdb3[0] sdd3[3] sdc3[2] sda3[1]
      2095104 blocks super 1.2 [4/4] [UUUU]
      
md0 : active raid1 sdb2[0] sdd2[3] sdc2[2] sda2[1]
      2095104 blocks super 1.2 [4/4] [UUUU]
      
unused devices: <none>

Code: Select all

# mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 name=AS6604T-9BE5:0 UUID=ba74ff1a:9e85a8a3:928042d3:818f91b1
ARRAY /dev/md126 metadata=1.2 name=asnas:126 UUID=e8dda194:e95b4870:7864b217:3cc62540
ARRAY /dev/md1 metadata=1.2 name=AS6604T-9BE5:1 UUID=252e792c:fcf8064d:5c2537c7:52719d18

Code: Select all

# cat /proc/mounts | grep /dev/md
/dev/md0 /volume0 ext4 rw,relatime,data=ordered 0 0
/dev/md1 /volume1 ext4 rw,relatime,stripe=32,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/md1 /share/home ext4 rw,relatime,stripe=32,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/md1 /share/Web ext4 rw,relatime,stripe=32,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/md1 /share/Video ext4 rw,relatime,stripe=32,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/md1 /share/Docker ext4 rw,relatime,stripe=32,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/md1 /share/Download ext4 rw,relatime,stripe=32,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/md1 /share/Misc ext4 rw,relatime,stripe=32,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/md1 /share/Isos ext4 rw,relatime,stripe=32,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
/dev/md1 /volume1/.@plugins/AppCentral/docker-ce/docker_lib ext4 rw,relatime,stripe=32,data=ordered,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0

Code: Select all

# blkid | grep swap
/dev/md126: UUID="f6091418-39e4-437f-8478-d4d7eae7bfe7" TYPE="swap"

Code: Select all

# mdadm --detail /dev/md126 
/dev/md126:
        Version : 1.2
  Creation Time : Tue Jul  6 19:08:41 2021
     Raid Level : raid1
     Array Size : 2095104 (2046.34 MiB 2145.39 MB)
  Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jul 20 16:44:35 2021
          State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

           Name : asnas:126  (local to host asnas)
           UUID : e8dda194:e95b4870:7864b217:3cc62540
         Events : 20

    Number   Major   Minor   RaidDevice State
       0       8       19        0      active sync   /dev/sdb3
       1       8        3        1      active sync   /dev/sda3
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3

Code: Select all

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jul  6 19:08:36 2021
     Raid Level : raid1
     Array Size : 2095104 (2046.34 MiB 2145.39 MB)
  Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Jul 21 11:30:29 2021
          State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

           Name : AS6604T-9BE5:0
           UUID : ba74ff1a:9e85a8a3:928042d3:818f91b1
         Events : 54

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8        2        1      active sync   /dev/sda2
       2       8       34        2      active sync   /dev/sdc2
       3       8       50        3      active sync   /dev/sdd2

Code: Select all

# mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Tue Jul  6 19:29:43 2021
     Raid Level : raid6
     Array Size : 7804860416 (7443.30 GiB 7992.18 GB)
  Used Dev Size : 3902430208 (3721.65 GiB 3996.09 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Jul 21 11:39:49 2021
          State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : AS6604T-9BE5:1
           UUID : 252e792c:fcf8064d:5c2537c7:52719d18
         Events : 15663

    Number   Major   Minor   RaidDevice State
       0       8       20        0      active sync   /dev/sdb4
       1       8        4        1      active sync   /dev/sda4
       2       8       36        2      active sync   /dev/sdc4
       3       8       52        3      active sync   /dev/sdd4
Last edited by ndl101 on Sun Aug 01, 2021 4:22 am, edited 1 time in total.

I made it long as I lacked the time to make it short.

---
Help to self-help:
How to ask (good) questions in a forum
---
General information
Location: Denmark
OS: Ubuntu 20.04
NAS: Lockerstor 4 (AS6604T)
orion
Posts: 3482
Joined: Wed May 29, 2013 11:09 am

Re: Adding NVMe and migrating ADM

Post by orion »

I would recommend that you back up your data externally, then re-initialize the NAS and create volume 1 using the NVMe devices first.

The reason is that volume 1 is special inside ADM. If you try to migrate your RAID 6 volume to volume 2 and create an NVMe volume 1, you'll have to deal with ADM's internal files.
Nazar78
Posts: 2002
Joined: Wed Jul 17, 2019 10:21 pm
Location: Singapore
Contact:

Re: Adding NVMe and migrating ADM

Post by Nazar78 »

The best approach would be the one orion mentioned. Asustor does not use an mdadm.conf; instead, I believe it tries to load the arrays by checking the IDs/names against the disk and partition order. The volumes are then recorded in /volume0/usr/etc/volume.conf. All of this is done by a custom daemon written in C before boot switches to the RAID 1 array rootfs.

But as I'm not using the NVMe version of the NAS, I can't tell whether NVMe devices are given priority when Asustor decides how to assemble the arrays. If they are, the steps you mentioned would work. You can try initializing the NAS with just the NVMes and without the disks. Shut it down, put the RAID 6 disks back, then start it up again and see which devices are loaded as md0, which is the system RAID 1. If the RAID 6 volume is not mounted, you can try adding it to volume.conf and then restart (see my example below). Please back up your important RAID 6 data if possible before you proceed.

A little of my history: I have had my bays in RAID 10 from day 1, so there is no chance of changing to SSDs, and my NAS doesn't offer NVMe slots. To solve my disks not sleeping, I attached a 5-bay enclosure; one slot holds an SSD and the remaining 4 HDDs are in RAID 5. I moved and mounted all the essential apps to the external SSD. I also configured bitmaps for the 3 arrays md0/1/2 on the SSD (a rough sketch of that follows the config below). Here's my volume.conf; volume1 is the internal RAID 10 and volume2 is the external RAID 5. You should be able to figure out some of the entries. I posted it here somewhere before; sorry, I'm on mobile:

Code: Select all

[volume1]
Level = 2
Raid = 4
Total = 4
Option = 0
Ftype = ext4
UUID = 09f8f718:23e2229e:f8d39308:3b333e58
Index = 0,1,2,3
Cachemode = 0
CLevel = 0
CState = -1
CDirty = 0
CUUID = 
Cnumber = 0
CIndex = 
Cseqcut = No
CsizeMB = 0

[volume2]
Level = 5
Raid = 4
Total = 4
Option = 0
Ftype = ext4
UUID = 6f68fa8c:0c565dd0:946f2472:137e1934
Index = 5,6,7,8
Cachemode = 0
CLevel = 0
CState = -1
CDirty = 0
CUUID = 
Cnumber = 0
CIndex = 
Cseqcut = No
CsizeMB = 0
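In case it's useful, the bitmap part was done with mdadm's file-based (external) write-intent bitmaps, roughly as below. Treat it as an example only: the path under /volume2 is made up, and the mdadm documentation has caveats about which filesystems file-based bitmaps are safe on, so check the man page before copying this:

Code: Select all

# Example only - the bitmap file location is an assumption, adjust to your own SSD volume
mkdir -p /volume2/.bitmaps
mdadm --grow /dev/md0 --bitmap=none                            # drop the existing bitmap first (skip if none)
mdadm --grow /dev/md0 --bitmap=/volume2/.bitmaps/md0.bitmap    # recreate it as a file on the SSD
mdadm --grow /dev/md1 --bitmap=none
mdadm --grow /dev/md1 --bitmap=/volume2/.bitmaps/md1.bitmap
mdadm --grow /dev/md2 --bitmap=none
mdadm --grow /dev/md2 --bitmap=/volume2/.bitmaps/md2.bitmap
# to go back to the default later: mdadm --grow /dev/mdX --bitmap=internal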
AS5304T - 16GB DDR4 - ADM-OS modded on 2GB RAM
Internal:
- 4x10TB Toshiba RAID10 Ext4-Journal=Off
External 5 Bay USB3:
- 4x2TB Seagate modded RAID0 Btrfs-Compression
- 480GB Intel SSD for modded dm-cache (initramfs auto update patch) and Apps

When posting, consider checking the box "Notify me when a reply is posted" to get faster response
ndl101
Posts: 57
Joined: Sun Jul 11, 2021 4:32 pm

Re: Adding NVMe and migrating ADM

Post by ndl101 »

So, here is what I did:
  • detached the existing 4 disks in RAID 6 configuration
  • added the 2 x nvmes
  • went through the initial NAS setup again with very little configuration apart from enabling SSH
  • via SSH I manually enabled and mounted both the old volume0 and the old volume1 from the RAID 6 setup
  • copied the old volume1 definition into /volume0/usr/etc/volume.conf and renamed it to volume2

    Code: Select all

    [volume2]
    Level = 6
    Raid = 4
    Total = 4
    Option = 0
    Ftype = ext4
    UUID = 252e792c:fcf8064d:5c2537c7:52719d18
    Index = 0,1,2,3
    Cachemode = 0
    CLevel = 0
    CState = -1
    CDirty = 0
    CUUID = 
    Cnumber = 0
    CIndex = 
    Cseqcut = No
    CsizeMB = 0
    
    Bonus info: the UUID in the volume definition is the same as the UUID of the RAID array, which can also be found via blkid (although there in the dashed UUID format); a small conversion sketch follows this list:

    Code: Select all

    $ mdadm --examine /dev/sda4
        /dev/sda4:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 252e792c:fcf8064d:5c2537c7:52719d18
               Name : AS6604T-9BE5:1
      Creation Time : Tue Jul  6 19:29:43 2021
         Raid Level : raid6
       Raid Devices : 4
    
     Avail Dev Size : 7804860416 (3721.65 GiB 3996.09 GB)
         Array Size : 7804860416 (7443.30 GiB 7992.18 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
       Unused Space : before=262056 sectors, after=0 sectors
              State : clean
        Device UUID : 8cbc812f:fc2c18fc:67f93b9f:bdd08e41
    
        Update Time : Mon Jul 26 19:34:11 2021
      Bad Block Log : 512 entries available at offset 72 sectors
           Checksum : c11ef15 - correct
             Events : 15671
    
             Layout : left-symmetric
         Chunk Size : 64K
    
       Device Role : Active device 1
       Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
    

    Code: Select all

    ...
    /dev/sda4: UUID="252e792c-fcf8-064d-5c25-37c752719d18" UUID_SUB="8cbc812f-fc2c-18fc-67f9-3b9fbdd08e41" LABEL="AS6604T-9BE5:1" TYPE="linux_raid_member"
    ...
    
  • At this point the storage manager did display both the devices and volume2, but it was complaining about volume2, so I rebooted, expecting that it would be mounted properly afterwards, which it was.
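For anyone repeating this, the conversion mentioned in the bonus info above is just a regrouping of the same 32 hex digits; something like the following should print the volume.conf form directly (untested on ADM's shell, and `mdadm --detail` already prints the colon form anyway):

Code: Select all

# colon-grouped array UUID straight from mdadm:
mdadm --detail /dev/md1 | grep UUID
# or derived from blkid's dashed form by regrouping the digits as 8:8:8:8:
blkid -o value -s UUID /dev/sda4 | tr -d '-' | sed 's/.\{8\}/&:/g; s/:$//'
# -> 252e792c:fcf8064d:5c2537c7:52719d18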
Now I just need to
  • fix some shares and symlinks
  • delete the old ADM related partitions
  • grow the data partitions to include the freed space
Honestly, considering the roughness, restrictiveness and confinement of this platform, I am partial to just installing Openmediavault or some other vanilla Linux distro on it....

Regards.

I made it long as I lacked the time to make it short.

---
Help to self-help:
How to ask (good) questions in a forum
---
General information
Location: Denmark
OS: Ubuntu 20.04
NAS: Lockerstor 4 (AS6604T)
Nazar78
Posts: 2002
Joined: Wed Jul 17, 2019 10:21 pm
Location: Singapore
Contact:

Re: Adding NVMe and migrating ADM

Post by Nazar78 »

Congrats! That was quick of you, since you have the background.

Code: Select all

Now I just need to
fix some shares and symlinks
delete the old ADM related partitions
grow the data partitions to include the freed space
Simple enough for you :lol:
Honestly, considering the roughness, restrictiveness and confinement of this platform, I am partial to just installing Openmediavault or some other vanilla Linux distro on it....
I had those thoughts too. I have tried Debian, Ubuntu and even Windows 10 running natively on the NAS, but while they are not restrictive as you've mentioned, I wasn't able to get a few proprietary modules to work, i.e. the fan and LED controls. So I just stick to ADM and use LXC for regular usage.
AS5304T - 16GB DDR4 - ADM-OS modded on 2GB RAM
Internal:
- 4x10TB Toshiba RAID10 Ext4-Journal=Off
External 5 Bay USB3:
- 4x2TB Seagate modded RAID0 Btrfs-Compression
- 480GB Intel SSD for modded dm-cache (initramfs auto update patch) and Apps

When posting, consider checking the box "Notify me when a reply is posted" to get faster response
ndl101
Posts: 57
Joined: Sun Jul 11, 2021 4:32 pm

Re: Adding NVMe and migrating ADM

Post by ndl101 »

Just a final update on this. I actually went ahead with removing the unused partitions and attempted to grow the RAID array, but ADM wouldn't have it (as in, it would not recognize the new layout). So I wiped the disks and created a single-partition layout, which ADM also would not recognize. At that point I let ADM create and initialize a new RAID array. Apparently ADM, no matter what, creates the same layout for all use cases:

Code: Select all

admin@asnas:/volume1/home/admin $ sudo gdisk -l /dev/sda
GPT fdisk (gdisk) version 0.8.4

Partition table scan:
  MBR: hybrid
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with hybrid MBR; using GPT.
Disk /dev/sda: 7814037168 sectors, 3.6 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 77E1F676-629F-4A47-B2AC-F0D8A8C0B413
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 7814037134
Partitions will be aligned on 2048-sector boundaries
Total free space is 3693 sectors (1.8 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          524287   255.0 MiB   8300  
   2          524288         4718591   2.0 GiB     FD00  
   3         4718592         8912895   2.0 GiB     FD00  
   4         8912896      7814035455   3.6 TiB     FD00  

Code: Select all

admin@asnas:/volume1/home/admin $ sudo gdisk -l /dev/nvme0n1
GPT fdisk (gdisk) version 0.8.4

Partition table scan:
  MBR: hybrid
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with hybrid MBR; using GPT.
Disk /dev/nvme0n1: 488397168 sectors, 232.9 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 46D2D35B-36B7-4400-80EA-1005E396FC14
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 488397134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2349 sectors (1.1 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          524287   255.0 MiB   8300  
   2          524288         4718591   2.0 GiB     FD00  
   3         4718592         8912895   2.0 GiB     FD00  
   4         8912896       488396799   228.6 GiB   FD00  
(Hint: I am baffled by this choice of design/implementation)
So, I am back to where I was before I started manipulating the RAID array, and it turns out that wanting to remove the unused partitions and grow the array was an exercise in futility - I could just have left the disks as they were before adding the NVMes. If it were not for the RAID-related things I learned while doing it, it would have been a waste of time. Mistakes were made, lessons were learned.

I made it long as I lacked the time to make it short.

---
Help to self-help:
How to ask (good) questions in a forum
---
General information
Location: Denmark
OS: Ubuntu 20.04
NAS: Lockerstor 4 (AS6604T)
blueblood
Posts: 2
Joined: Tue Jan 17, 2023 3:43 pm

Re: Adding NVMe and migrating ADM

Post by blueblood »

Nazar78 wrote:The best approach would be the one orion mentioned. Asustor does not use an mdadm.conf; instead, I believe it tries to load the arrays by checking the IDs/names against the disk and partition order. The volumes are then recorded in /volume0/usr/etc/volume.conf. All of this is done by a custom daemon written in C before boot switches to the RAID 1 array rootfs.

A little of my history: I have had my bays in RAID 10 from day 1, so there is no chance of changing to SSDs, and my NAS doesn't offer NVMe slots. To solve my disks not sleeping, I attached a 5-bay enclosure; one slot holds an SSD and the remaining 4 HDDs are in RAID 5. I moved and mounted all the essential apps to the external SSD. I also configured bitmaps for the 3 arrays md0/1/2 on the SSD.
I am facing a similar issue with my AS5304T - the constant disk activity drives me crazy. I have RAID 1 on volume1, but I also have an SSD as volume2 and I would like to do what you did. Could you please tell me how you moved those apps to the SSD? For Docker apps it is easy - I moved whatever I could to volume2. But it is still not enough. What about the other apps?
Nazar78
Posts: 2002
Joined: Wed Jul 17, 2019 10:21 pm
Location: Singapore
Contact:

Re: Adding NVMe and migrating ADM

Post by Nazar78 »

blueblood wrote:I am facing a similar issue with my AS5304T - the constant disk activity drives me crazy. I have RAID 1 on volume1, but I also have an SSD as volume2 and I would like to do what you did. Could you please tell me how you moved those apps to the SSD? For Docker apps it is easy - I moved whatever I could to volume2. But it is still not enough. What about the other apps?
You should be aware that it is not only the apps keeping the disks awake. OS activity such as writing to logs, or mapped shares being accessed, will also wake the disks, because the OS is on the same RAID 1 disks, just on a different partition. And if I'm right, your single SSD also contains a mirror of the OS. Run `blkid` and `cat /proc/mdstat`, then share the output here to confirm, as I don't have spare bays to test.

And to briefly answer your question: I moved my apps to the external USB SSD, then wrote a script that bind-mounts them from the external USB SSD back to their original locations at boot (a rough sketch is below). But if your internal SSD also contains the OS, then with this method you can only stop the apps, not the OS, from waking the disks.

My setup is a little different and complicated because I don't have spare bays, so I use an external SSD in a USB caddy to hold both the OS and the apps.
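Roughly, that boot script is just a loop of bind mounts, something like the sketch below. The SSD path and app names are examples only (docker-ce is the only one I know you have; swap in whatever you actually moved), and how you hook it into boot depends on how you run startup scripts on your ADM:

Code: Select all

#!/bin/sh
# Example only - SSD path and app names are placeholders, adjust to your own setup
SSD=/volume2/apps                      # where the relocated app folders live on the SSD

for app in docker-ce; do               # list every app folder you moved to the SSD
    src="$SSD/$app"
    dst="/volume1/.@plugins/AppCentral/$app"
    # bind the SSD copy back over the original AppCentral location, once per boot
    if [ -d "$src" ] && ! grep -qs " $dst " /proc/mounts; then
        mount --bind "$src" "$dst"
    fi
done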
AS5304T - 16GB DDR4 - ADM-OS modded on 2GB RAM
Internal:
- 4x10TB Toshiba RAID10 Ext4-Journal=Off
External 5 Bay USB3:
- 4x2TB Seagate modded RAID0 Btrfs-Compression
- 480GB Intel SSD for modded dm-cache (initramfs auto update patch) and Apps

When posting, consider checking the box "Notify me when a reply is posted" to get faster response
blueblood
Posts: 2
Joined: Tue Jan 17, 2023 3:43 pm

Re: Adding NVMe and migrating ADM

Post by blueblood »

Nazar78 wrote:You should be aware that it is not only the apps keeping the disks awake. OS activity such as writing to logs, or mapped shares being accessed, will also wake the disks, because the OS is on the same RAID 1 disks, just on a different partition. And if I'm right, your single SSD also contains a mirror of the OS. Run `blkid` and `cat /proc/mdstat`, then share the output here to confirm, as I don't have spare bays to test.
Thanks for replying.
Here is blkid:

/dev/loop0: UUID="c9d864bd-c4a2-4b79-940d-86c91ce50705" BLOCK_SIZE="1024" TYPE="ext4"
/dev/sda1: PARTUUID="0ec88c97-99c7-4add-8c12-7444ef737b67"
/dev/sda2: UUID="98cff4cb-de58-260c-e3a5-84c651a86efe" UUID_SUB="d2822428-b7f5-3a4f-d033-001292fa70a8" LABEL="AS5304T-E4A9:0" TYPE="linux_raid_member" PARTUUID="a633abae-1085-4fbb-bf80-b059c433c609"
/dev/sda3: UUID="1b8ed9cb-479c-e983-efdb-ddc4823c70c1" UUID_SUB="059d96e6-3160-b47e-35b1-fc192f437166" LABEL="NAS:126" TYPE="linux_raid_member" PARTUUID="646dc7c4-fc7c-4776-aacc-2e6df99480bd"
/dev/sda4: UUID="d62e70af-6dab-bd6f-6d7f-ca987dfbfb0a" UUID_SUB="937d3034-3c9d-8543-8cc4-6645dc9581d2" LABEL="AS5304T-E4A9:1" TYPE="linux_raid_member" PARTUUID="2fd5b9fa-cb25-4049-90a2-4a90335ddd3c"
/dev/sdb1: PARTUUID="9186848e-0a33-4e64-bb55-5985d3abb349"
/dev/sdb2: UUID="98cff4cb-de58-260c-e3a5-84c651a86efe" UUID_SUB="2cc90611-240f-4955-2cca-fd4f78a6c759" LABEL="AS5304T-E4A9:0" TYPE="linux_raid_member" PARTUUID="bb1cda84-6b7c-459a-880c-65ec1f95b271"
/dev/sdb3: UUID="1b8ed9cb-479c-e983-efdb-ddc4823c70c1" UUID_SUB="f5b5d9a5-634c-62ae-8cb6-2fe0b4dda7e4" LABEL="NAS:126" TYPE="linux_raid_member" PARTUUID="92cc26a3-e289-4cb9-9715-2b222967599d"
/dev/sdb4: UUID="d62e70af-6dab-bd6f-6d7f-ca987dfbfb0a" UUID_SUB="2809b005-e499-e92b-b52e-2e2be9c0a85a" LABEL="AS5304T-E4A9:1" TYPE="linux_raid_member" PARTUUID="b2f422b9-005f-4598-b73a-3124dfb591d6"
/dev/sdc1: LABEL="SSD250" UUID="B297-B425" BLOCK_SIZE="512" TYPE="exfat" PARTUUID="1edd11a4-fa2a-44fe-ab12-f82554d54f51"
/dev/sdc2: UUID="98cff4cb-de58-260c-e3a5-84c651a86efe" UUID_SUB="2da026b7-b258-b77a-c4dc-d6eb4b46fbc2" LABEL="AS5304T-E4A9:0" TYPE="linux_raid_member" PARTUUID="34ccadfa-ed6b-4225-aa7b-f3eb8198656a"
/dev/sdc3: UUID="1b8ed9cb-479c-e983-efdb-ddc4823c70c1" UUID_SUB="1a52e563-35f7-297f-282d-57a51568f6cd" LABEL="NAS:126" TYPE="linux_raid_member" PARTUUID="e776e314-e0da-42c7-81af-54193e779b5f"
/dev/sdc4: UUID="ab740345-3749-b70a-2d74-43f6e489d820" UUID_SUB="91f3aa6a-f828-7687-8da4-3d96cf0aff69" LABEL="NAS:2" TYPE="linux_raid_member" PARTUUID="a35f98a1-591d-470d-bd12-1e580c8745b4"
/dev/md0: UUID="7c548bb8-d4b9-4380-ba20-a24e10f4e93f" BLOCK_SIZE="4096" TYPE="ext4"
/dev/md126: UUID="bca66c19-6928-4f72-b07f-326aef93a3b1" TYPE="swap"
/dev/md1: UUID="7d92833d-809e-46e2-9e16-3ee602f96e57" BLOCK_SIZE="4096" TYPE="ext4"
/dev/md2: UUID="b1c9b67b-bb9d-4626-96ae-706c5df67d7b" BLOCK_SIZE="4096" TYPE="ext4"


here is mdstat:

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid1 sdc4[0]
229841920 blocks super 1.2 [1/1]

md1 : active raid1 sda4[2] sdb4[1]
5855933440 blocks super 1.2 [2/2] [UU]

md126 : active raid1 sda3[5] sdc3[4] sdb3[8]
2094080 blocks super 1.2 [4/3] [UUU_]

md0 : active raid1 sda2[4] sdc2[5] sdb2[8]
2094080 blocks super 1.2 [4/3] [UUU_]

unused devices: <none>
Nazar78
Posts: 2002
Joined: Wed Jul 17, 2019 10:21 pm
Location: Singapore
Contact:

Re: [Solved] Adding NVMe and migrating ADM

Post by Nazar78 »

Yup, my assumption is correct; your single SSD does hold both the OS and swap:

Code: Select all

md0 : active raid1 sda2[4] sdc2[5] sdb2[8]
2094080 blocks super 1.2 [4/3] [UUU_]
This md0 is your OS volume0. The sdc2 is the OS partition on your SSD, part of the RAID1 mirror.

Code: Select all

md126 : active raid1 sda3[5] sdc3[4] sdb3[8]
2094080 blocks super 1.2 [4/3] [UUU_]
This md126 is your SWAP. The sdc3 is the SWAP partition on your SSD, also part of the RAID1 mirror.

So after you move the apps from volume1 to the USB SSD using the bind-mount method I mentioned previously, you can put in another script to auto-fail the SSD mirrors at boot:

Code: Select all

mdadm /dev/md0 --fail /dev/sdc2 && mdadm /dev/md0 --remove /dev/sdc2

Code: Select all

mdadm /dev/md126 --fail /dev/sdc3 && mdadm /dev/md126 --remove /dev/sdc3
These will rebuild upon reboot. I recommend only failing and removing the SSD members, not destroying them, as destroying them may lead to unexpected results in ADM.
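If it helps, the boot script I have in mind is just a thin wrapper around those two commands, for example (device names follow your mdstat output above; verify them first, since the SSD can enumerate differently between boots):

Code: Select all

#!/bin/sh
# Example only - fails/removes the SSD members from the OS and swap mirrors at boot
fail_member() {
    md="$1"; dev="$2"
    # only act if this member is currently listed in /proc/mdstat
    if grep -q "${dev##*/}\[" /proc/mdstat; then
        mdadm "$md" --fail "$dev" && mdadm "$md" --remove "$dev"
    fi
}

fail_member /dev/md0   /dev/sdc2
fail_member /dev/md126 /dev/sdc3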

As this is not official, ensure you have backups and know what you're doing; we are not responsible for any warranty issues.
AS5304T - 16GB DDR4 - ADM-OS modded on 2GB RAM
Internal:
- 4x10TB Toshiba RAID10 Ext4-Journal=Off
External 5 Bay USB3:
- 4x2TB Seagate modded RAID0 Btrfs-Compression
- 480GB Intel SSD for modded dm-cache (initramfs auto update patch) and Apps

When posting, consider checking the box "Notify me when a reply is posted" to get faster response