KVM/QEMU

Reflections on Proxmox VE

I’ve now been using Proxmox VE as the hypervisor in my home lab for a couple of years, and now that I’ve reverted to plain Ubuntu Server + KVM, I figured I would try to summarize my thoughts on the product.

Proxmox VE can be described as a low-cost and open-source alternative to VMware vSphere with aspects of vSAN and NSX. The premise is excellent, and the system scales beautifully all the way from a single (home) lab server with a single traditionally formatted hard drive up to entire clusters with distributed object storage via Ceph, all in a pretty much turnkey solution. If I were involved in setting up an on-prem IT environment for a small- to medium-sized business today, Proxmox VE would definitely be on my shortlist.

So if it’s so good, what made me go back to a regular server distribution?

Proxmox VE, like all complete solutions, works best when you understand the developers’ design paradigm and follow it – at least roughly. It is technically based on a Debian core, but the additional layers of abstraction want to take over certain functionality, and it’s simply best to let them. Trying to apply configuration that competes with Proxmox VE’s own management introduces occasional papercuts into your life: containers that now and then fail to come back up after a restart, ZFS pools that occasionally don’t mount properly, and so on. I’m sure I caused these problems myself through various customizations, so I’m not throwing any shade on the product per se, but the fact remains that I wanted to manage my specific physical hosts in ways that differed from how Proxmox VE would like me to manage them, and that combination made the environment less than optimal.

As these servers are used and managed only by me, and I do perfectly fine in a command-line interface or with scripts and playbooks, I’ve come to the conclusion that I prefer a minimalist approach. So I’m back to running simple Ubuntu servers with ZFS storage pools for virtual machines and backups, and plain KVM as the hypervisor layer. After the initial setup – a weekend project I will write up in another post – I have the best kind of server environment at home: one I more or less never have to touch unless I want to.
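
I’ll save the details for that write-up, but the rough shape of the setup – with placeholder pool, dataset and disk names – is no more than a handful of commands on a stock Ubuntu Server install:

# hypervisor bits and ZFS support
sudo apt install --yes qemu-kvm libvirt-daemon-system virtinst zfsutils-linux

# a mirrored pool for VM disks and backups (the disk IDs are placeholders)
sudo zpool create tank mirror /dev/disk/by-id/ata-disk1 /dev/disk/by-id/ata-disk2
sudo zfs create tank/vm
sudo zfs create tank/backup

# let libvirt treat the VM dataset as a simple directory-backed storage pool
sudo virsh pool-define-as vm dir --target /tank/vm
sudo virsh pool-autostart vm
sudo virsh pool-start vm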

IPv6 guests in KVM

I’ve been experimenting with IPv6 at home, and spent some time trying to get it working in my virtual machines.

The first symptom was that VMs got a “Network unreachable” error when trying to ping6 anything but their own address. The cause was a complete brainfart on my side: the loopback interface needs an IPv6 definition in /etc/network/interfaces:

auto lo
iface lo inet loopback
iface lo inet6 loopback
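
If the guest is already running, something like the following should activate the new loopback definition without a reboot – assuming ifupdown is what’s managing the interfaces, which the use of /etc/network/interfaces implies:

# re-read /etc/network/interfaces (or simply reboot the guest)
sudo systemctl restart networking
# the loopback should now answer over IPv6
ping6 -c 3 ::1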

The second problem took a bit more digging to understand: I would get an IPv6 address, and I could ping stuff both on my own network and on the Internet from the VM, but no other computers could reach the virtual machine over IPv6.

According to this discussion, QEMU/KVM has support for multicast (which IPv6 neighbor discovery relies on), but it’s effectively turned off by default. Remedy this by running virsh edit [vm-name] and adding trustGuestRxFilters='yes' to the appropriate network interface definition, so that it ends up looking something like this (the interface type, MAC address and source device will of course be specific to your setup):

<interface type='direct' trustGuestRxFilters='yes'>
  <mac address='52:54:00:xx:xx:xx'/>
  <source dev='eno1' mode='bridge'/>
  <model type='virtio'/>
</interface>

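The edit only takes effect the next time the domain starts, so give the VM a full shutdown and start, and then confirm that the attribute stuck – reusing the [vm-name] placeholder from above:

virsh shutdown [vm-name]
virsh start [vm-name]
virsh dumpxml [vm-name] | grep trustGuestRxFilters
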
As usual, when you understand the problem, the solution is simple.