Replacing failing Volume 1, lost other Volumes

dandymanz
Posts: 13
Joined: Tue Jun 20, 2017 12:16 pm

Re: Replacing failing Volume 1, lost other Volumes

Post by dandymanz »

OK, thank you. I've tried, and this is what I get from the first command.

Code: Select all

root@AS6604T:/volume1/.@root # mdadm -Asf && vgchange -ay
mdadm: /dev/md/AS6404T:3 has been started with 3 drives.
mdadm: /dev/md/126 assembled from 0 drives and 3 spares - not enough to start the array.
mdadm: /dev/md/126 assembled from 0 drives and 3 spares - not enough to start the array.
-sh: vgchange: not found
And here's the second one.

Code: Select all

root@AS6604T:/volume1/.@root # mdadm -D /dev/md*
mdadm: /dev/md does not appear to be an md device
/dev/md0:
        Version : 1.2
  Creation Time : Wed Nov 16 14:27:57 2016
     Raid Level : raid1
     Array Size : 2095104 (2046.34 MiB 2145.39 MB)
  Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
   Raid Devices : 4
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sat Jan  9 10:44:31 2021
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : AS3202T-5E76:0
           UUID : 3e8104c0:54e063b4:f4757714:6c800e0f
         Events : 170114

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       2       0        0        2      removed
       4       0        0        4      removed
       5       8        2        3      active sync   /dev/sda2
/dev/md1:
        Version : 1.2
  Creation Time : Sat Apr 27 19:01:40 2019
     Raid Level : raid1
     Array Size : 3902430208 (3721.65 GiB 3996.09 GB)
  Used Dev Size : 3902430208 (3721.65 GiB 3996.09 GB)
   Raid Devices : 1
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sat Jan  9 10:44:43 2021
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : AS6404T:1
           UUID : 0ef55ac8:3307b894:075c4095:18d6f907
         Events : 517

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
/dev/md126:
        Version : 1.2
  Creation Time : Sun Dec 13 21:52:56 2020
     Raid Level : raid1
     Array Size : 2095104 (2046.34 MiB 2145.39 MB)
  Used Dev Size : 2095104 (2046.34 MiB 2145.39 MB)
   Raid Devices : 4
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sat Jan  9 10:39:40 2021
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : AS6604T:126  (local to host AS6604T)
           UUID : 2f86d579:9e9501f2:d4de2ee3:1c1661c5
         Events : 28

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       2       0        0        2      removed
       4       0        0        4      removed
       6       0        0        6      removed
/dev/md127:
        Version : 1.2
  Creation Time : Fri Jan  4 22:38:31 2019
     Raid Level : raid5
     Array Size : 27335587840 (26069.25 GiB 27991.64 GB)
  Used Dev Size : 13667793920 (13034.62 GiB 13995.82 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Fri Jan  8 21:39:33 2021
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : AS6404T:3
           UUID : 2a9cb37b:e1741084:3c8f21f6:14ad06cf
         Events : 19788

    Number   Major   Minor   RaidDevice State
       0       8       36        0      active sync   /dev/sdc4
       3       8       52        1      active sync   /dev/sdd4
       2       8       20        2      active sync   /dev/sdb4
root@AS6604T:/volume1/.@root #
It's quite interesting to see that the Asustor still keeps records of the NAS models I've upgraded through over the years.
Nazar78
Posts: 2085
Joined: Wed Jul 17, 2019 10:21 pm
Location: Singapore
Contact:

Re: Replacing failing Volume 1, lost other Volumes

Post by Nazar78 »

No need to include the vgchange command, by the way; it's not supported on Asustor. Also, those records are kept on the disks themselves. And yes, your RAID 5 is intact.

You have two options:

1. A clean start. Mount the RAID 5 temporarily and move the important data out to an external disk; you can then reinitialize the 3 disks as necessary. To do this, create a temporary share on volume1 from the ADM portal and name it e.g. raid5. Then mount the RAID 5 onto it: from SSH, run mount /dev/md127 /share/raid5. You should then be able to see your RAID 5 contents in /share/raid5, even from Windows. A rough sketch of the session follows.
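
The whole of option 1 from SSH would look roughly like this (a sketch; the raid5 share name is just an example and must be created in ADM first):

Code: Select all

# mount the assembled array onto the temp share created in ADM
mount /dev/md127 /share/raid5
# confirm the data is readable and see how much has to move out
ls /share/raid5
df -h /share/raid5
# after copying everything out, unmount cleanly
umount /share/raid5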

2. Try this unofficial method I discovered; I can't promise it'll work. Also, your OS won't be mirrored to the rest of the disks, which you can sort out later. Shut down the NAS, then shift the disks: bay 4 to bay 2, bay 2 to bay 3, and bay 3 to bay 4. From left to right, basically move the last disk to bay 2 and then shift the rest along. Back up the config file: from SSH, run cp -a /volume0/usr/etc/volume.conf /volume0/usr/etc/volume.conf.bak. Then edit its contents with vi /volume0/usr/etc/volume.conf and add volume2 to the list. If volume2 already exists, share the contents with us first, or you can increment the number, e.g. volume3. Navigate with the cursor keys, press i to insert, then paste the block below (which I derived from your outputs). To save and quit, press Esc followed by :wq, then Enter. Google how to use vim, or you can SFTP into the NAS using WinSCP plus an editor that supports Linux line endings, such as Notepad++.

Code: Select all

[volume2]
Level = 5
Raid = 3
Total = 3
Option = 0
Ftype = ext4
UUID = 2a9cb37b:e1741084:3c8f21f6:14ad06cf
Index = 1,2,3
Cachemode = 0
CLevel = 0
CState = -1
CDirty = 0
CUUID = 
Cnumber = 0
CIndex = 
Cseqcut = No
CsizeMB = 0
Run mdadm -Asf, then try logging back into the ADM storage manager and see whether volume2 is there and usable; otherwise, reboot. The whole sequence is recapped below. Pardon any mistakes in my post, as I'm replying from my mobile.
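
To recap option 2 as one SSH session (a sketch assembled from the commands above; adjust the volume number if volume2 is already taken):

Code: Select all

# back up the volume config before touching it
cp -a /volume0/usr/etc/volume.conf /volume0/usr/etc/volume.conf.bak
# add the [volume2] section shown above, then save with :wq
vi /volume0/usr/etc/volume.conf
# reassemble the arrays and verify before checking ADM
mdadm -Asf
cat /proc/mdstat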
AS5304T - 16GB DDR4 - ADM-OS modded on 2GB RAM
Internal:
- 4x10TB Toshiba RAID10 Ext4-Journal=Off
External 5 Bay USB3:
- 4x2TB Seagate modded RAID0 Btrfs-Compression
- 480GB Intel SSD for modded dm-cache (initramfs auto update patch) and Apps

When posting, consider checking the box "Notify me when a reply is posted" to get a faster response
dandymanz
Posts: 13
Joined: Tue Jun 20, 2017 12:16 pm

Re: Replacing failing Volume 1, lost other Volumes

Post by dandymanz »

Hi Nazar, thanks for your time and patience in helping me with my issue.

Currently, I have just viewed the volume.conf file to see what's in it. My RAID 5 is Volume 3, and I can see that this information is already there, so I'm not sure whether creating a new Volume 2 or 4 would help. I have not tried swapping the HDD bays yet.

Code: Select all

[volume3]
Level = 5
Raid = 3
Total = 3
Option = 0
Ftype = ext4
UUID = 2a9cb37b:e1741084:3c8f21f6:14ad06cf
Index = 2,3,1
Cachemode = 0
CLevel = 0
CState = -1
CDirty = 0
CUUID =
Cnumber = 0
CIndex =
Cseqcut = No
CsizeMB = 0
I've been trying the commands, and something looks weird to me.

Code: Select all

root@AS6604T:/ # mdadm --assemble --scan
mdadm: /dev/md/AS6404T:3 has been started with 3 drives.
mdadm: Found some drive for an array that is already active: /dev/md126
mdadm: giving up.
mdadm: Found some drive for an array that is already active: /dev/md126
mdadm: giving up.
root@AS6604T:/ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md127 : active raid5 sdb4[0] sdc4[2] sda4[3]
      27335587840 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md1 : active raid1 nvme0n1p4[0]
      972174336 blocks super 1.2 [1/1] [U]

md126 : active raid1 nvme0n1p3[6]
      2095104 blocks super 1.2 [2/1] [U_]

md0 : active raid1 nvme0n1p2[8]
      2095104 blocks super 1.2 [2/1] [U_]

unused devices: <none>
Should "/dev/md/AS6404T:3" be "/dev/md127"?
Anyway, I did a "mdadm --assemble --force /dev/md127 /dev/sdb4 /dev/sdc4 /dev/sda4", and then managed to "mount /dev/md127 /share/RAID5" as you said. But my RAID 5 contents can only be seen while I'm still in PuTTY; the Windows share is empty. Maybe a "refresh" needs to be triggered, because RAID5 still thinks it's a folder on Volume 1. And since I have manually assembled the disks correctly, I was expecting Volume 3 to work in ADM, but it isn't. I wonder how to trigger ADM to "refresh" itself without a reboot.
Nazar78
Posts: 2085
Joined: Wed Jul 17, 2019 10:21 pm
Location: Singapore
Contact:

Re: Replacing failing Volume 1, lost other Volumes

Post by Nazar78 »

Ignore the name; it's just the data read from the superblock. Use the name you see in mdstat: md127.

You can try toggling the CIFS service, but first check whether the account has permission to read; just temporarily create the share as public. A quick permission check from SSH is sketched below.
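
Something along these lines (standard commands; /share/raid5 is the temp share from option 1):

Code: Select all

# see who owns the mounted data and whether others can read it
ls -ld /share/raid5
ls -l /share/raid5 | head
# if needed, temporarily grant world read access while recovering data
chmod -R o+rX /share/raid5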

I'm not sure about ADM, as I can't test it right now, but I think the disk order needs to be exact; as mentioned, it doesn't auto-scan. You can test using the volume2 sample I gave, since it becomes valid after swapping. Throughout all of this the RAID 5 is still safe, but make sure you stop the array first, or just shut down, before swapping, to avoid a rebuild; see the sketch below.
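
Stopping the array cleanly would be, roughly (standard mdadm usage; assumes it is still mounted at /share/raid5):

Code: Select all

# unmount, then stop the array so the bay swap isn't treated as disk failures
umount /share/raid5
mdadm --stop /dev/md127
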
dandymanz
Posts: 13
Joined: Tue Jun 20, 2017 12:16 pm

Re: Replacing failing Volume 1, lost other Volumes

Post by dandymanz »

So I went through the ticket with Asustor support today and received confirmation again that my current RAID 5 data is still intact. Tech support joined me via TeamViewer to go through the setup.
They mainly assembled the array again and mounted it into a volume, the same steps Nazar advised (thank you once again).
However, shutting down and booting up again was a no-go.
Tech support's advice is to find other storage and back all my data up externally, which looks like the scenario I will have to follow.
But still, I use this mainly for data storage, so after I move everything out, I might as well re-initialize the whole NAS.
I'm still not sure whether it's possible to dig into the ADM partitions to figure out what was causing the issue.