Retiring pve005: Decommissioning a Proxmox Node the Hard Way

Why Now: pve005 was an i5-7500 with 16GB of RAM. It ran the original Jellyfin LXC with 1.1TB of media on a local ZFS pool. Once I rebuilt the lab around a 3-node Ryzen 9 cluster with Ceph storage, pve005 became dead weight. The media was migrated to jellyfin01 on the new cluster months ago. The old LXC was stopped. Yet pve005 was still drawing power, still in the Ceph quorum, and still showing up in every Ansible run. ...

April 8, 2026 · 4 min · Adam Behn

Rebuilding My Homelab: From 9 Nodes to 3 with Proxmox 9 and Ceph

The Problem: My homelab had grown organically into a 9-node Proxmox 8 cluster on a flat 10.150.10.0/24 network. Six i5/i7 machines (pve001-006) with 16GB RAM each, plus three Ryzen 9 5900X machines (pve007-009) with 64-128GB RAM and dedicated GPUs. The old nodes were underpowered, the network was a mess, and managing it all was getting painful. It was time to consolidate. The Plan: Rebuild the three Ryzen machines as a proper 3-node cluster with: ...
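
For flavor, a minimal sketch of how a 3-node Proxmox cluster gets formed with pvecm; the cluster name (homelab) and the join address are placeholders, not the post's actual values:

    # On the first Ryzen node: create the cluster (name is hypothetical)
    pvecm create homelab

    # On each of the other two nodes: join via the first node's address
    # (address is illustrative)
    pvecm add 192.0.2.10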

March 7, 2026 · 6 min · Adam Behn

Setting Up Tiered Ceph Storage with CephFS and RBD on Proxmox 9

The Setup: I have a 3-node Proxmox 9 cluster, each node with a 4TB SSD and a 2TB HDD dedicated to Ceph. The NVMe drives stay local for fast VM storage. The question was: how do I use the SSDs for performance-sensitive workloads and the HDDs for bulk storage?

    Per Node:
      2TB NVMe → nvme-local (LVM-thin, not Ceph)
      4TB SSD  → Ceph OSD (fast tier)
      2TB HDD  → Ceph OSD (bulk tier)

CRUSH Rules: Telling Ceph Where to Put Data
Ceph already knows which OSDs are SSDs and which are HDDs; it assigns device classes automatically. But by default, it'll spread data across all OSDs regardless of type. CRUSH rules let you pin pools to specific device classes. ...
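
For reference, a minimal sketch of the device-class pinning the excerpt describes, using the stock Ceph CLI; the rule and pool names (fast-rule, bulk-rule, vm-fast, bulk-data) are hypothetical:

    # Replicated CRUSH rules pinned to a device class
    # (root=default, failure domain=host)
    ceph osd crush rule create-replicated fast-rule default host ssd
    ceph osd crush rule create-replicated bulk-rule default host hdd

    # Point each pool at its tier (pool names are hypothetical)
    ceph osd pool set vm-fast crush_rule fast-rule
    ceph osd pool set bulk-data crush_rule bulk-rule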

March 7, 2026 · 5 min · Adam Behn