Michael Abrahamsen

May 01, 2017

ZFS Scrubbing

A simple way to check the data integrity of a ZFS pool is to scrub it. A scrub reads every block in the pool, verifies it against its checksum, and repairs any damage it finds from redundant copies.

To start a scrub you can run the command:

$ zpool scrub <pool>

If you would like to stop a scrub that is currently in progress (if you are doing some other heavy I/O work, for instance), run the following command:

$ zpool scrub -s <pool>
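A scrub runs in the background; you can check its progress, the time of the last completed scrub, and any repaired errors in the pool status. A minimal sketch, assuming a pool named vault:

```shell
# the "scan:" line reports scrub progress and results
zpool status vault
```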

Schedule a scrub once per week

Scrubbing should happen at least once a week. Here are a couple of ways to schedule one.

Set up a cronjob to run a scrub once a week:

$ crontab -e
------------
...
30 19 * * 5 zpool scrub <pool>
...

Alternatively, if you are on Arch Linux, use systemd-zpool-scrub from the AUR. Once installed it provides a service and timer that run a scrub weekly; all you need to do is enable the timer.

$ systemctl enable zpool-scrub@<pool>.timer  
$ systemctl start zpool-scrub@<pool>.timer 
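Once enabled, you can confirm the schedule by listing the timer; a sketch:

```shell
# show the last and next trigger times for the scrub timer
systemctl list-timers 'zpool-scrub@*'
```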
posted at 08:33  ·  zfs

Mar 10, 2017

Setting up ZFS with Arch root install

Configuration overview

The last setup of this machine was an UNRAID install with virtual machines using PCI passthrough. For this setup I am going to run Arch Linux with its root filesystem on ZFS. Having root on ZFS allows snapshots of the entire operating system.

Array hardware

  • Boot Drive    - 32 GB USB flash drive
  • Array Storage - 3x 3TB HDD
  • Cache         - 2x 128GB SSD

Final configuration

The completed setup will have the following zpool configuration and mount points:

$ zpool status

  pool: vault
 state: ONLINE
  scan: none requested
config:

    NAME                                             STATE     READ WRITE CKSUM
    vault                                            ONLINE       0     0     0
      mirror-0                                       ONLINE       0     0     0
        ata-ST3000DM001-1CH144_Z1F5W372              ONLINE       0     0     0
        ata-ST3000DM001-1CH144_Z1F5YJ5C              ONLINE       0     0     0
        ata-ST3000DM001-1C6144_Z1F5KYV4              ONLINE       0     0     0
    cache
      ata-Samsung_SSD_850_EVO_120GB_S21TNSAG205110A  ONLINE       0     0     0
      ata-Samsung_SSD_850_EVO_120GB_S21WNX0H404232B  ONLINE       0     0     0

errors: No known data errors
$ zfs mount

vault/ROOT/default              /
vault/home                      /home
vault                           /vault

Create EFI partition on boot drive

Boot into archlive in UEFI mode and create a 512MB EFI partition on the USB drive:

# enter fdisk for the boot drive
$ fdisk /dev/sdX

# then, at the fdisk prompt:
g       # create a new GPT partition table
n       # create a new partition
<enter> # accept partition number 1 as the default
<enter> # accept the default start location
+512M   # end the partition 512M after the start

t       # change the partition type
1       # in fdisk's GPT mode, type 1 is "EFI System"
w       # write changes to disk

Create and configure the pool

You do not need to partition the disks that will be used in the pool; ZFS partitions the drives automatically when the pool is created. Specify the disks in the pool by ID, which you can find with ls /dev/disk/by-id. This install uses a 3-way mirror with the SSDs acting as the cache.
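To see which stable by-id name corresponds to which kernel device, list the symlinks; a sketch:

```shell
# each /dev/disk/by-id entry is a symlink to its /dev/sdX node
ls -l /dev/disk/by-id/
```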

Load the kernel module

$ modprobe zfs

Note that -o ashift=12 forces 4096-byte sectors, which is what you want for modern disks with 4K physical sectors.

$ zpool create -f -o ashift=12 vault mirror \
ata-ST3000DM001-1CH144_Z1F5W372 \
ata-ST3000DM001-1CH144_Z1F5YJ5C \
ata-ST3000DM001-1C6144_Z1F5KYV4 \
cache \
ata-Samsung_SSD_850_EVO_120GB_S21TNSAG205110A \
ata-Samsung_SSD_850_EVO_120GB_S21WNX0H404232B
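ashift is the base-2 logarithm of the sector size, so ashift=12 corresponds to 2^12 = 4096-byte sectors:

```shell
# 2^12 = 4096, the physical sector size of most modern disks
echo $((1 << 12))
```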

Check the status of the pool to make sure it was created correctly.

$ zpool status

  pool: vault
 state: ONLINE
  scan: none requested
config:

    NAME                                             STATE     READ WRITE CKSUM
    vault                                            ONLINE       0     0     0
      mirror-0                                       ONLINE       0     0     0
        ata-ST3000DM001-1CH144_Z1F5W372              ONLINE       0     0     0
        ata-ST3000DM001-1CH144_Z1F5YJ5C              ONLINE       0     0     0
        ata-ST3000DM001-1C6144_Z1F5KYV4              ONLINE       0     0     0
    cache
      ata-Samsung_SSD_850_EVO_120GB_S21TNSAG205110A  ONLINE       0     0     0
      ata-Samsung_SSD_850_EVO_120GB_S21WNX0H404232B  ONLINE       0     0     0

errors: No known data errors

Turn on compression and reduce access-time writes with relatime

# turn on compression
$ zfs set compression=on vault
# only update atime when it is stale, cutting metadata writes
$ zfs set relatime=on vault
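You can verify that both properties took effect, and later watch how much space compression is saving; a sketch:

```shell
# compressratio reports the savings achieved so far
zfs get compression,relatime,compressratio vault
```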

Create datasets for / and home. The parent vault/ROOT container has to exist before vault/ROOT/default can be created:

$ zfs create -o mountpoint=none vault/ROOT
$ zfs create -o mountpoint=/ vault/ROOT/default
$ zfs create -o mountpoint=/home vault/home

Specify the dataset that will be used to boot from

$ zpool set bootfs=vault/ROOT/default vault

Unmount the ZFS volumes and export the pool before installing Arch

$ zfs umount -a
$ zpool export vault

Install the base Arch system

Import the pool and mount the boot partition

Importing with -R /mnt mounts the datasets relative to /mnt, so the root dataset ends up at /mnt and home at /mnt/home. Mount the EFI partition afterwards, then install the base packages:

$ zpool import -d /dev/disk/by-id -R /mnt vault
$ mkdir /mnt/boot
$ mount /dev/sdX1 /mnt/boot # EFI partition on the boot drive
$ pacstrap -i /mnt base base-devel

Copy the zpool.cache file to the arch install

$ mkdir -p /mnt/etc/zfs
$ cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache

Generate the fstab and make sure /boot is mounted. ZFS mounts its own datasets, so remove any entries genfstab adds for them, keeping only the /boot line:

$ genfstab -U -p /mnt >> /mnt/etc/fstab

Add hooks to mkinitcpio.conf and regenerate it

$ vim /mnt/etc/mkinitcpio.conf

# Add zfs after keyboard but before filesystems
HOOKS="base udev autodetect modconf block keyboard zfs filesystems"


Do not regenerate the initramfs yet: the zfs hook only becomes available once zfs-linux is installed, so run mkinitcpio -p linux inside the chroot after the install step below.

Chroot into the Arch install and configure the system

$ arch-chroot /mnt /bin/bash

Add the archzfs repo to /etc/pacman.conf

$ vim /etc/pacman.conf

# add the following in the repository section
[archzfs]
Server = http://archzfs.com/$repo/x86_64

Fetch the archzfs repository key and locally sign it

$ pacman-key -r 5E1ABF240EE7A126
$ pacman-key --lsign-key 5E1ABF240EE7A126

Update the system and install zfs

$ pacman -Syyu
$ pacman -S zfs-linux

# regenerate the initramfs now that the zfs hook is available
$ mkinitcpio -p linux

Enable zfs services

$ systemctl enable zfs.target
$ systemctl enable zfs-import-cache.service

Install the EFI bootloader

$ bootctl --path=/boot install

Create an entry for Arch in the bootloader

$ vim /boot/loader/entries/arch.conf

# add the following
title     Arch Linux
linux     /vmlinuz-linux
initrd    /initramfs-linux.img
options   zfs=vault/ROOT/default rw
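systemd-boot boots its default entry after a timeout; to make the new entry the default, a sketch (assuming the entry file above is named arch.conf):

```shell
$ vim /boot/loader/loader.conf

# add the following
default   arch
timeout   3
```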

Exit the chroot, unmount the drives, and restart

$ exit # leave the chroot before unmounting
$ umount /mnt/boot
$ zfs umount -a
$ zpool export vault
$ reboot

The host id is not available to the system while it is booting, which prevents the pool from being imported cleanly. To fix this, write a hostid file and regenerate the initramfs. After rebooting:

$ hostid > /etc/hostid
$ mkinitcpio -p linux
posted at 16:00  ·  linux  arch  zfs

Mar 08, 2017

Creating Arch Linux iso with ZFS installed with EFI system

Adding ZFS to the iso saves time while experimenting with the setup: you will not have to add the repository and install ZFS each time you restart the machine.

Download archiso

# switch to root (either works)
$ sudo -i
# or: su root

# Install archiso
$ pacman -S archiso

# Create directory to hold our build and copy necessary files
$ mkdir ~/archlive
$ cp -r /usr/share/archiso/configs/releng/* ~/archlive

Add archzfs server to pacman.conf

Edit ~/archlive/pacman.conf and add the following code:

[archzfs]
SigLevel = Optional TrustAll
Server = http://archzfs.com/$repo/x86_64

Add archzfs-linux to packages.x86_64

$ echo 'archzfs-linux' >> ~/archlive/packages.x86_64

Build the image

# create a temporary directory for the build
$ cp -r ~/archlive /tmp 
$ cd /tmp/archlive

# Create /tmp/archlive/out and run the build script
$ mkdir out
$ ./build.sh -v

Create a bootable usb device with the new image

Use lsblk to find the USB device (in my case /dev/sdc); substitute /dev/sdX to fit your needs. Then run the following command to create the bootable usb:

$ dd bs=4M if=/tmp/archlive/out/archlinux-2017.03.05-dual.iso of=/dev/sdX status=progress && sync
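Before booting from the drive, you can verify the write by comparing the device against the image; a sketch:

```shell
# compare only the first <image-size> bytes of the device with the iso
iso=/tmp/archlive/out/archlinux-2017.03.05-dual.iso
cmp -n "$(stat -c %s "$iso")" "$iso" /dev/sdX
```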
posted at 22:30  ·  linux  arch  zfs