Hello fellow Proxmox enjoyers!

I have questions regarding the ZFS disk IO stats and hope you all may be able to help me understand.

Setup (hardware, software)

I have Proxmox VE installed on a ZFS mirror (2x 500 GB M.2 PCIe SSD) called rpool. The data (VMs, disks) resides on a separate ZFS RAID-Z1 pool (3x 4 TB SATA SSD) called data_raid.

Of that, I use ~2 TB, 1.6 TB of which is data (movies, videos, music, old data + game setup files, …).

I have 6 VMs, all for my use alone, so there’s not much going on there.

Question 1 - constant disk write going on?

I have a monitoring setup (CheckMK) for my server and VMs. It reports constant write IO on the disks, ongoing without any interruption, at 20+ MB/s.

I think the monitoring gets the data from zpool iostat, so I watched it with watch -n 1 'sudo zpool iostat', but the numbers didn’t seem to change.

The read/write operations and bandwidth figures have stayed exactly the same for the last minute or so (after the time it took to write this post, it now lists 543 read ops instead of 545).

Every 1.0s: sudo zpool iostat

              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data_raid   2.29T  8.61T    545    350  17.2M  21.5M
rpool       4.16G   456G      0     54  8.69K  2.21M
----------  -----  -----  -----  -----  -----  -----

The same happens if I use the -lv or -w flags for zpool iostat.

So, are there really 350 write operations going on constantly? Or does zpool iostat just not update its stats very often?
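
From what I can tell from the zpool-iostat man page, running it without an interval argument prints averages since boot rather than current activity, which would explain why the numbers barely move. A per-interval run like the one below should show what the pool is doing right now (data_raid is my pool; the interval and count are just example values):

# sample live activity instead of the since-boot average:
# report stats for data_raid every second, 10 times
sudo zpool iostat data_raid 1 10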

Question 2 - what about disk longevity?

This isn’t my first homelab setup, but it is my first ZFS and RAID setup of my own. If somebody has any SSD RAID or SSD ZFS experiences to share, I’d like to hear them.
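
For context on why this worries me: if the reported 21.5 MB/s were real, sustained writes (a big if, see Question 1), the back-of-the-envelope math adds up quickly against typical consumer SSD TBW ratings:

# rough write volume, assuming 21.5 MB/s sustained around the clock
echo "21.5 * 86400 / 1000^2" | bc -l          # ~1.86 TB written per day
echo "21.5 * 86400 * 365 / 1000^2" | bc -l    # ~678 TB written per year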

The disks I’m using are:

Best regards from a fellow rabbit-hole-enjoyer.

  • hamsda@feddit.org (OP) · 10 days ago

    Thank you very much for your input, I’ll definitely have to go with enterprise drives whenever the current ones die.

    Thankfully, I do have monitoring for SMART data and drive health, so I’ll be warned before something bad happens.
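
    In case it helps anyone else: the wear data comes from smartctl (part of smartmontools). Attribute names vary by vendor, and the device paths below are just examples:

    # SATA SSDs: SMART attributes (look for wear / total-written counters)
    sudo smartctl -A /dev/sda
    # NVMe SSDs: health log with "Percentage Used" and "Data Units Written"
    sudo smartctl -a /dev/nvme0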