mount: unknown filesystem type ‘zfs_member’

After mounting an NTFS partition in read/write, an NFS share, and just last week an LVM2_member partition, it's time for a new episode in the series: how to mount an unknown filesystem type in Linux. As they say, the saga continues.

Note that I use Proxmox (a Debian spin) on this machine, which means ZFS was already installed. Installing ZFS is easy, and I've done it before on Mint; the official ZFS on Linux documentation is also a huge help, so head over there if you want up-to-date install & configuration help.
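For a plain Debian or Ubuntu box without Proxmox, the install is typically a single package; the sketch below assumes a recent release where the package is called zfsutils-linux (on Debian it lives in the contrib section, so that repo must be enabled):

```shell
# Install the ZFS userland tools plus the DKMS kernel module (Debian/Ubuntu).
apt install zfsutils-linux
```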

Every good story starts when someone or something f*cks up. So here goes, right in the nuts:

root@svennd:~# mount /dev/sdd3 /mnt/datadisk/
mount: unknown filesystem type 'zfs_member'

That's fine, it's time for our hero to move in and gather information about our target: fdisk -l

root@svennd:~# fdisk -l /dev/sdd

Disk /dev/sdd: 74.5 GiB, 80026361856 bytes, 156301488 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D4DA51A9-6084-4E52-ACE7-AD0272064251

Device      Start       End   Sectors  Size Type
/dev/sdd1    2048      4095      2048    1M BIOS boot
/dev/sdd2    4096    266239    262144  128M EFI System
/dev/sdd3  266240 156299439 156033200 74.4G Linux filesystem

The “BIOS boot” and “EFI System” partitions are irrelevant for recovering data; “Linux filesystem” is where the data lives. Sadly this does not give us a lot to go on. I was hoping this was part of a ZFS mirror; if it were part of a RAIDZ[1-3], a single disk is not going to cough up its data too easily. ZFS comes with a lot of handy tools, one of which scans storage devices for ZFS partitions that aren't yet active on the machine. And contrary to other RAID systems, ZFS is always in a state where it can be transferred between machines. So I ran zpool import:
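As a side note, blkid is a quicker way to see the exact signature that mount tripped over, since it reads the on-disk magic directly (device path taken from the fdisk output above; your labels and UUIDs will differ):

```shell
# Show the on-disk signature that made mount bail out; for a ZFS member
# blkid typically reports TYPE="zfs_member", with the pool name as LABEL.
blkid /dev/sdd3
```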

root@svennd:~# zpool import
 pool: rpool
    id: 1395200144405345736
 state: DEGRADED
status: One or more devices contains corrupted data.
action: The pool can be imported despite missing or damaged devices.  The
       fault tolerance of the pool may be compromised if imported.

       rpool                    DEGRADED
         mirror-0               DEGRADED
           sdd3                 ONLINE
           4085699320674626661  UNAVAIL

Bam! It found the partition on the disk (/dev/sdd3)! This also tells me that it is in fact a ZFS mirror setup. Now that we have found it, we can simply import it with zpool import <pool_name>

root@svennd:~# zpool import rpool
cannot import 'rpool': a pool with that name already exists
use the form 'zpool import <pool | id> <newpool>' to give it a new name

Whoops. Good thing ZFS helps us out here:

root@svennd:~# zpool import rpool temp_rpool
cannot import 'rpool': pool may be in use from other system, it was last accessed by (none) (hostid: 0xa8c00802) on Tue Mar 31 15:58:04 2015
use '-f' to import anyway

Okay, ZFS wants -f, so let's add it: zpool import -f poolname new_poolname

root@svennd:~# zpool import -f rpool temp_rpool
cannot mount '/': directory is not empty

This error means the pool was mounted on / (root) before; that directory is obviously not empty here, so ZFS refuses to mount it. To change that, first look up the current mountpoint, then change it; both can be done using zfs get and zfs set.

root@svennd:~# zfs get all temp_rpool |grep mountpoint
temp_rpool  mountpoint            /                   default
root@svennd:~# zfs set mountpoint=/mnt/datadisk temp_rpool
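An alternative worth knowing, sketched below: instead of rewriting the mountpoint property on disk, zpool import accepts -R (altroot), which prefixes every mountpoint in the pool for this import only and leaves the stored property untouched; combined with readonly=on it is a safer first look at a foreign pool:

```shell
# Import under an altroot: "/" effectively becomes "/mnt/datadisk" for
# this import only; the pool's stored mountpoint property is not changed.
zpool import -f -R /mnt/datadisk rpool temp_rpool

# Even safer for a first inspection: a read-only import.
zpool import -f -o readonly=on -R /mnt/datadisk rpool temp_rpool
```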

Once that's done, you can mount the ZFS pool using zfs mount.

zfs mount temp_rpool
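Once you are done copying data off, it is worth exporting the pool so the disk can move on cleanly. Keep in mind that the mountpoint change made above is stored in the pool itself, so if the disk ever goes back into its original machine, that property would need to be set back there:

```shell
# Unmount all datasets and mark the pool as no longer in use by this host.
zpool export temp_rpool
```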

Another “un-mountable” partition bites the dust. Good luck!
