Reflections on Proxmox VE

I used Proxmox VE as the hypervisor in my home lab for a couple of years, and now that I’ve reverted to plain Ubuntu Server + KVM, I figured I’d try to summarize my thoughts on the product.

Proxmox VE can be described as a low-cost, open-source alternative to VMware vSphere with aspects of vSAN and NSX. The premise is excellent, and the system scales beautifully all the way from a single (home) lab server with a single traditionally formatted hard drive up to entire clusters with distributed object storage via Ceph, all in a pretty much turnkey solution. If I were involved in setting up an on-prem IT environment for a small- to medium-sized business today, Proxmox VE would definitely be on my shortlist.

So if it’s so good, what made me go back to a regular server distribution?

Proxmox VE, like all complete solutions, works best when you understand the developers’ design paradigm and follow it – at least roughly. It is technically based on a Debian core, but the additional layers of abstraction want to take over certain functionality, and it’s simply best to let them. Trying to apply a configuration that competes with Proxmox VE will introduce occasional papercuts into your life: containers that fail to come back up after a restart now and then, ZFS pools that occasionally don’t mount properly, and so on. I’m sure I caused these problems myself through various customizations, so I’m not throwing any shade on the product per se. But the fact remains that I wanted to manage my specific physical hosts in ways that differed from how Proxmox VE would like me to manage them, and that combination made the environment less than optimal.

As these servers are used and managed only by me, and I do perfectly fine in a command-line interface or with scripts and playbooks, I’ve come to the conclusion that I prefer a minimalist approach. So I’m back to running simple Ubuntu servers with ZFS storage pools for virtual machines and backups, and plain KVM for my hypervisor layer. After the initial setup – a weekend project I will write up in another post – I have the best kind of server environment at home: One I more or less never have to touch unless I want to.
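Pending that write-up, here’s a minimal sketch of what this kind of setup can look like. The pool layout, device paths and guest parameters below are made-up examples for illustration, not my actual configuration:

    # Create a mirrored ZFS pool and a dataset for virtual machine storage
    zpool create tank mirror /dev/disk/by-id/ata-disk1 /dev/disk/by-id/ata-disk2
    zfs create tank/vm

    # Carve out a zvol to use as a raw disk for a guest
    zfs create -V 32G tank/vm/guest1-disk0

    # Install KVM/libvirt and define a guest on top of the zvol
    apt install qemu-kvm libvirt-daemon-system virtinst
    virt-install --name guest1 --memory 4096 --vcpus 2 \
        --disk path=/dev/zvol/tank/vm/guest1-disk0 \
        --cdrom /var/lib/libvirt/images/ubuntu-server.iso \
        --os-variant ubuntu22.04

Using zvols as raw guest disks means plain ZFS snapshots cover the virtual machines directly, though qcow2 files sitting on a regular ZFS dataset would work just as well.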

ZFS backups in Proxmox – Part 2

A while ago I wrote about trying out pve-zsync for backing up some Proxmox VE entities. I kept using regular Proxmox backups for the other machines, though: It is a robust way to get recoverable machine backups, but it’s not very elegant. For example, all backups are full: There’s no logic for managing incremental or differential backups. The last straw was a bug in the Proxmox web interface where these native full backups kept landing on my SSD-backed disk pool, which is stupid for two reasons: a) it gave me no on-site protection from disk failures, which, after user error, are the most likely reason to need a backup, and b) it used up valuable space on my most expensive pool. Needless to say, I scrapped that backup solution (and pve-zsync) completely.

My new solution is based entirely on Jim Salter’s excellent tools sanoid and syncoid. Sanoid now gives me hourly ZFS snapshots of all of my virtual machines and containers and of my base system, with timely purging of old snapshots. On my production server, syncoid makes sure these snapshots are cloned to my backup pool, and on my off-site server, syncoid fetches snapshots from the backup pool on the production server to its own backup pool. This means I have a better, cleaner, faster and, most importantly, working backup solution with considerably less clutter than before: A config file for sanoid and a few cron jobs to trigger syncoid in the right way.
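To give an idea of what this looks like in practice, here’s a minimal sketch of the kind of sanoid config and cron jobs involved. The dataset names, retention numbers and hostname are placeholders, not my actual setup:

    # /etc/sanoid/sanoid.conf (excerpt)
    [rpool/data]
        use_template = production
        recursive = yes

    [template_production]
        frequently = 0
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes

    # Crontab on the production server: take and prune snapshots,
    # then replicate them to the local backup pool
    */15 * * * * /usr/sbin/sanoid --cron
    0 * * * * /usr/sbin/syncoid -r --no-sync-snap rpool/data backup/data

    # Crontab on the off-site server: pull from production's backup pool
    30 * * * * /usr/sbin/syncoid -r --no-sync-snap root@prod:backup/data backup/data

The --no-sync-snap flag tells syncoid to replicate using the snapshots sanoid already took rather than creating its own transient ones, which keeps the snapshot lists on all three pools identical.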