Setting Up Tiered Ceph Storage with CephFS and RBD on Proxmox 9
## The Setup

I have a 3-node Proxmox 9 cluster, each node with a 4TB SSD and a 2TB HDD dedicated to Ceph. The NVMe drives stay local for fast VM storage. The question was: how do I use the SSDs for performance-sensitive workloads and the HDDs for bulk storage?

Per node:

```
2TB NVMe → nvme-local (LVM-thin, not Ceph)
4TB SSD  → Ceph OSD (fast tier)
2TB HDD  → Ceph OSD (bulk tier)
```

## CRUSH Rules: Telling Ceph Where to Put Data

Ceph already knows which OSDs are SSDs and which are HDDs; it assigns device classes automatically. But by default, it'll spread data across all OSDs regardless of type. CRUSH rules let you pin pools to specific device classes. ...
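As a sketch of what that looks like in practice, the following commands create one replicated rule per device class using Ceph's built-in `create-replicated` subcommand (the rule names `ssd-rule` and `hdd-rule` are placeholders of my choosing; `default` is the CRUSH root and `host` the failure domain in a standard 3-node setup):

```shell
# List the device classes Ceph auto-detected for the OSDs
ceph osd crush class ls

# Create a replicated CRUSH rule restricted to SSD-class OSDs,
# replicating across hosts under the "default" root
ceph osd crush rule create-replicated ssd-rule default host ssd

# Same idea for the HDD bulk tier
ceph osd crush rule create-replicated hdd-rule default host hdd

# Confirm both rules exist
ceph osd crush rule ls
```

Any pool later assigned `ssd-rule` via its `crush_rule` property will keep all of its data on the SSD OSDs, and likewise for `hdd-rule` on the HDDs.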