Hello fellow Proxmox enjoyers!

I have questions regarding the ZFS disk IO stats and hope you all may be able to help me understand.

Setup (hardware, software)

I have Proxmox VE installed on a ZFS mirror (2x 500 GB M.2 PCIe SSD) called rpool. The data (VMs, virtual disks) resides on a separate ZFS RAID-Z1 (3x 4 TB SATA SSD) called data_raid.

I use ~2 TB of all that, 1.6 TB being data (movies, videos, music, old data + game setup files, …).

I have 6 VMs, all for my use alone, so there’s not much going on there.

Question 1 - constant disk writes going on?

I have a monitoring setup (CheckMK) to monitor my server and VMs. It reports constant write IO on the disks, ongoing without any interruption, at 20+ MB/s.

I think the monitoring gets the data from zpool iostat, so I watched it with watch -n 1 'sudo zpool iostat', but the numbers didn’t seem to change.

The read and write operations and bandwidth numbers have stayed exactly the same for the last minute or so (after the time it took to write this, it now lists 543 read ops instead of 545).

Every 1.0s: sudo zpool iostat

              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data_raid   2.29T  8.61T    545    350  17.2M  21.5M
rpool       4.16G   456G      0     54  8.69K  2.21M
----------  -----  -----  -----  -----  -----  -----

The same happens if I use the -lv or -w flags with zpool iostat.

So, are there really 350 write operations going on constantly? Or does zpool iostat just not update its stats very often?
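If I’m reading the man page right, zpool iostat without an interval argument prints averages since the pool was imported, which would explain the frozen numbers. Running it with an interval should show actual per-window activity instead, something like this (using my pool name):

# Without an interval, the numbers are averages since pool import
sudo zpool iostat data_raid

# With an interval in seconds, every line after the first covers
# only that window, so it shows current activity
sudo zpool iostat data_raid 1

# Per-device breakdown in 5-second windows
sudo zpool iostat -v data_raid 5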

Question 2 - what about disk longevity?

This isn’t my first homelab setup, but it is my first ZFS and RAID setup of my own. If somebody has any SSD RAID or SSD ZFS experience to share, I’d like to hear it.
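For reference, this is how I plan to keep an eye on drive wear (a sketch assuming smartmontools is installed; the device names are just examples):

# SATA SSDs: wear shows up as a vendor-specific SMART attribute,
# e.g. Wear_Leveling_Count or Percent_Lifetime_Remain
sudo smartctl -A /dev/sda

# NVMe SSDs: "Percentage Used" and "Data Units Written" in the health log
sudo smartctl -a /dev/nvme0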

The disks I’m using are:

Best regards from a fellow rabbit-hole-enjoyer.

  • hamsda@feddit.org (OP) · 10 days ago

    Thank you very much for your input. I’ll definitely have to go for the business models whenever the current ones die.

    I knew I would make some mistake and learn something new, this being my first real server PC (instead of a mini PC or Raspberry Pi) and my first RAID. I just wish it weren’t such a pricey mistake :(

    • mlfh@lemmy.sdf.org · 9 days ago

      I wouldn’t say it’s a big mistake; you’ve likely still got a few years left on your current drives as-is. And you can replace them with same- or larger-capacity drives one at a time to spread the cost out.
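      Roughly like this, one disk at a time (a sketch assuming the pool is called data_raid as above; the by-id paths are placeholders for your actual disks):

      # Swap one raidz1 member for a new disk and let ZFS resilver
      sudo zpool replace data_raid /dev/disk/by-id/old-disk /dev/disk/by-id/new-disk

      # Wait for the resilver to finish before touching the next disk
      sudo zpool status data_raid

      # Once every member is larger, the extra space becomes usable
      # (with autoexpand enabled on the pool)
      sudo zpool set autoexpand=on data_raid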

      Keep an eye out for retired enterprise SSDs on eBay or the like - I got lucky and found mine there for $20 each, with 5 years of uptime but basically nothing written to them, so no wearout at all - they probably just sat in a server with static data for a full refresh cycle. They’ve been great.

      • hamsda@feddit.org (OP) · 8 days ago

        Sadly, it seems I cannot replace the disks one by one. At least not unless I upgrade to SSDs larger than 4 TB at the same time.

        The consumer 4 TB SSDs yield 3.64 TiB, whereas the datacenter 4 TB SSDs seem to yield only 3.49 TiB. As far as I know, one cannot replace a ZFS RAID-Z1 drive with a smaller one. I’ll have to watch the current consumer SSDs closely and be prepared for when I have to switch them.
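        Comparing exact sizes before buying should be possible with something like this (the device name is just an example):

        # Exact usable size of a disk in bytes
        sudo blockdev --getsize64 /dev/sda

        # Or all disks at once
        lsblk -b -o NAME,SIZE,MODEL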

        I’m not all too sure about buying used IT gear (or used stuff in general) from eBay, but I’ll have a look, thanks!

        • dbtng@eviltoast.org · 5 days ago

          If you want enterprise gear on the cheap, yes: eBay.
          There are regular vendors on eBay with thousands of verified sales. Go with those till you figure it all out.
          You can definitely make bad choices, but even when I’ve gotten bad drives, the vendor just immediately refunded the money, same day.