
Problems accessing disks from a Synology NAS on a Linux PC

+2
−0

My Synology DS220+ died, and I decided that its successor will be a DIY system. Now I'm trying to access the data on the disks that were used in the NAS.

Every tutorial, blog post, and forum post I can find on the topic boils down to this KB article from Synology, which doesn't work for me.

I first tried this with Ubuntu 24.04; then, after finding a comment claiming that the mdadm tools on Ubuntu 20.04 and newer are "too new", I tried 18.04, and even 16.04 after someone proclaimed that 18.04 is too new as well. I get the same result in all cases.
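
For context, the KB procedure boils down to roughly the following (my sketch, not a verbatim copy of the article; the mount path at the end is only an example, since the actual volume-group name varies from box to box):

sudo -i                          # become root
apt-get update
apt-get install -y mdadm lvm2    # RAID and LVM userspace tools
mdadm -Asf && vgchange -ay       # assemble arrays, activate volume groups
mount -o ro /dev/vg1000/lv /mnt  # example LV path; read-only for safety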

Here is the result from my Ubuntu 18.04 VM:

root@ubuntu:~# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                        11:0    1 1024M  0 rom  
vda                       252:0    0   15G  0 disk 
├─vda1                    252:1    0    1M  0 part 
├─vda2                    252:2    0    1G  0 part /boot
└─vda3                    252:3    0   14G  0 part 
  └─ubuntu--vg-ubuntu--lv 253:0    0   14G  0 lvm  /
vdb                       252:16   0  1.7T  0 disk 
root@ubuntu:~# fdisk -l /dev/vdb
Disk /dev/vdb: 1.7 TiB, 1801763774464 bytes, 3519069872 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot Start        End    Sectors Size Id Type
/dev/vdb1           1 4294967295 4294967295   2T ee GPT

Note: vdb is the disk in question; Ubuntu has its own logical volume. This is a 4 TB WD disk, and I have no idea why it shows up as 1.7 TB with a 2 TB partition. This is consistent across all my attempts.
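
To narrow down where the size gets mangled, these standard checks show what the kernel itself thinks the disk looks like (a diagnostic sketch; gdisk may need to be installed first):

blockdev --getsize64 /dev/vdb    # disk size in bytes, as reported by the kernel
cat /sys/block/vdb/size          # the same size in 512-byte sectors
gdisk -l /dev/vdb                # read the GPT itself; fdisk only shows the protective MBR entry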

root@ubuntu:~# mdadm -AsfRv
mdadm: looking for devices for further assembly
mdadm: cannot open device /dev/sr0: No medium found
mdadm: no recogniseable superblock on /dev/dm-0
mdadm: Cannot assemble mbr metadata on /dev/vdb
mdadm: no recogniseable superblock on /dev/vda3
mdadm: no recogniseable superblock on /dev/vda2
mdadm: no recogniseable superblock on /dev/vda1
mdadm: Cannot assemble mbr metadata on /dev/vda
mdadm: No arrays found in config file or automatically
root@ubuntu:~# vgchange -ay
  1 logical volume(s) in volume group "ubuntu-vg" now active
root@ubuntu:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
unused devices: <none>
root@ubuntu:~# lvs
  LV        VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  ubuntu-lv ubuntu-vg -wi-ao---- <14.00g            

The next step would be to mount either the logical volume or the mdraid device, but I have neither, so I have to stop here.
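
For reference, if the assembly had worked, that step would look roughly like this (a sketch with placeholder device names, since the actual array and volume names depend on the Synology layout):

cat /proc/mdstat                    # an mdX device should be listed here
mount -o ro /dev/mdX /mnt           # for a plain md array, or:
mount -o ro /dev/vgY/volume_1 /mnt  # for an LVM volume sitting on top of the array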

I had two disks in the DS, both configured as single volumes, no RAID. One disk held important data, of which I have a backup, so nothing is lost there. The second disk held rather unimportant data whose loss is merely a nuisance, hence no backup. I'd still like to rescue some of it if possible.

I also tried to access the disk by installing Xpenology in a virtual machine and adding the disk to it, but it didn't recognize the disk at all. I don't know how much of this is caused by the virtualization layer; I may try some of this again when the last of the hardware for the new NAS build finally arrives. Until then, VMs are my only means of testing.

I don't know if it is relevant, but for the record: the last version running on my DiskStation was 7.3.1; it didn't come back after installing 7.3.1-86003 Update 1.

1 answer

As it turns out, my problem was twofold. The first problem was my USB SATA adapter: after I got my hands on a computer where I could attach the disks directly, they showed up with the proper sizes, and I got straight to the point where I could actually try to mount them.

root@ubuntu:~# lsblk
NAME                            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                               8:0    0   5.5T  0 disk
├─sda1                            8:1    0   2.4G  0 part
├─sda2                            8:2    0     2G  0 part
└─sda5                            8:5    0   5.5T  0 part
  └─md2                           9:2    0   5.5T  0 raid1
    ├─vg1-syno_vg_reserved_area 253:2    0    12M  0 lvm
    └─vg1-volume_1              253:3    0   5.5T  0 lvm
sdb                               8:16   0   2.7T  0 disk
├─sdb1                            8:17   0   2.4G  0 part
├─sdb2                            8:18   0     2G  0 part
└─sdb5                            8:21   0   2.7T  0 part
  └─md3                           9:3    0   2.7T  0 raid1
    ├─vg2-syno_vg_reserved_area 253:0    0    12M  0 lvm
    └─vg2-volume_2              253:1    0   2.7T  0 lvm
nvme0n1                         259:0    0 465.8G  0 disk
├─nvme0n1p1                     259:1    0     1M  0 part
└─nvme0n1p2                     259:2    0 465.8G  0 part  /

What also did not work at first was mounting the disks; I got errors from btrfs. After some searching I found this post on Reddit, which solved the issue for me. It's not only the old Ubuntu release that is necessary: you also need to install an older kernel than the latest one shipped with it, because later kernel builds include a btrfs patch that causes the mounting issue. I installed the package linux-image-4.15.0-108-generic, edited /etc/default/grub to set GRUB_TIMEOUT=5 and commented out GRUB_TIMEOUT_STYLE=hidden so I could select the correct kernel at boot, and after that I could mount the Synology disks without problems.
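
The GRUB edit amounts to these two changes (a sketch assuming the stock Ubuntu 18.04 /etc/default/grub; run update-grub afterwards to apply them):

GRUB_TIMEOUT=5              # give yourself time to pick a kernel at boot
#GRUB_TIMEOUT_STYLE=hidden  # commented out so the boot menu is actually shown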

So, to sum up the process to access the Synology disks from a PC:

  • Attach the disks to a computer directly (not via a USB adapter).
    According to other posts in the Reddit thread mentioned above, this also works in a VM when you pass the disks through directly; I haven't tested this.
  • Install Ubuntu 18.04
  • Follow the Synology KB article, with the added step of installing and booting into the correct kernel:
sudo -i                          # become root
nano /etc/default/grub           # set GRUB_TIMEOUT=5, comment out GRUB_TIMEOUT_STYLE=hidden
apt-get update
apt-get install -y mdadm lvm2 btrfs-progs linux-image-4.15.0-108-generic
update-grub                      # apply the GRUB edit (the kernel install also triggers this)
reboot                           # select the 4.15.0-108 kernel in the GRUB menu
mdadm -AsfR && vgchange -ay      # assemble the arrays, activate the volume groups
cat /proc/mdstat                 # verify the md arrays are up
lvs                              # verify the Synology logical volumes are visible

Mount the disks as needed.
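
That last step looks something like this (a sketch; the volume-group and logical-volume names are taken from the lsblk output above and will differ on other setups, and the mount points are arbitrary):

mkdir -p /mnt/volume1 /mnt/volume2
mount -o ro /dev/vg1/volume_1 /mnt/volume1  # read-only is a sensible precaution for recovery
mount -o ro /dev/vg2/volume_2 /mnt/volume2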
