The Problem

My two boys each had their own gaming PC. Two towers, two sets of peripherals, two machines to maintain, update, and troubleshoot. When I started consolidating everything into a Proxmox homelab, it seemed wasteful to keep two standalone gaming rigs around.

The plan: one host, two GPUs, two Windows VMs. Each boy gets a dedicated GPU, their own monitor, keyboard, and mouse. They can game simultaneously without knowing they’re on virtual machines.

  BEFORE (two standalone PCs)
  ───────────────────────────
  ┌──────────────┐         ┌──────────────┐
  │  Gaming PC #1│         │  Gaming PC #2│
  │              │         │              │
  │  RTX 3080    │         │  RTX 3080    │
  │  Win 11      │         │  Win 11      │
  │  Monitor     │         │  Monitor     │
  │  KB+Mouse    │         │  KB+Mouse    │
  └──────────────┘         └──────────────┘
   2 towers · 2x maintenance
   2x power · 2x updates
   kids control the power button


  AFTER (single Proxmox host)
  ───────────────────────────
  ┌──────────────────────────────────────────────┐
  │          pve008 (Proxmox Host)               │
  │     AMD Ryzen 9 5900X · 64GB RAM             │
  │          SSH-only (headless)                 │
  ├──────────────────────────────────────────────┤
  │  IOMMU Group 26       IOMMU Group 27         │
  │  ┌──────────────┐     ┌──────────────┐       │
  │  │  RTX 3080 #1 │     │  RTX 3080 #2 │       │
  │  │  vfio-pci    │     │  vfio-pci    │       │
  │  └──────┬───────┘     └──────┬───────┘       │
  │         │                    │               │
  │  ┌──────▼───────┐     ┌──────▼───────┐       │
  │  │  VM 701      │     │  VM 702      │       │
  │  │  Windows 11  │     │  Windows 11  │       │
  │  │  8 cores     │     │  8 cores     │       │
  │  │  32GB RAM    │     │  32GB RAM    │       │
  │  │  750GB disk  │     │  750GB disk  │       │
  │  └──────┬───────┘     └──────┬───────┘       │
  └─────────┼────────────────────┼───────────────┘
            │                    │
  ┌─────────▼────────┐  ┌───────▼──────────┐
  │  Monitor #1      │  │  Monitor #2      │
  │  Keyboard #1     │  │  Keyboard #2     │
  │  Mouse #1        │  │  Mouse #2        │
  └──────────────────┘  └──────────────────┘
        SEB's Desk            RTB's Desk

  1 host · SSH managed
  snapshots · backups
  dad controls the power button

The LXC Detour

I started where most container enthusiasts start: trying to do it in LXC. Lighter than VMs, better resource sharing, GPU passthrough “should work.”

Six hours later, I had an Ubuntu LXC container where nvidia-smi detected the RTX 3080 Ti perfectly, but the X server refused to initialize. Containers share the host's kernel, so the NVIDIA driver takes a different code path than on bare metal: compute workloads (CUDA, encoding) work fine in LXC, but graphics output doesn't.

The takeaway was clear: VMs are the proven path for GPU gaming on Proxmox.

Configuring the Host

The first VM attempts failed silently. Lock files, timeouts, no error messages. After chasing ghosts for two hours, I found the problem: the Proxmox host wasn’t actually configured for GPU passthrough.

IOMMU was enabled and VFIO modules were loaded, but the host still had claim to the GPU through NVIDIA drivers. Three things need to be in place before a GPU can pass through to a VM:

1. Kernel parameters for IOMMU and framebuffer disabling:

GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt video=vesafb:off video=efifb:off"

2. Driver blacklist so the host doesn’t grab the GPU:

blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist nvidia_drm

3. VFIO device binding to reserve the GPU for VM passthrough (this was the initial config for the single RTX 3080 Ti on the loaner machine):

options vfio-pci ids=10de:2208,10de:1aef disable_vga=1

After update-initramfs -u -k all and a reboot, lspci -nnk showed Kernel driver in use: vfio-pci. The GPU was ready.
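Put together, the apply-and-verify sequence is short. This is a sketch assuming a GRUB-booted host; systemd-boot installs would use proxmox-boot-tool refresh instead of update-grub:

```shell
# Regenerate bootloader config and initramfs so the kernel parameters,
# driver blacklist, and vfio-pci options all take effect, then reboot.
update-grub                  # assumes GRUB; systemd-boot: proxmox-boot-tool refresh
update-initramfs -u -k all
reboot

# After reboot, confirm vfio-pci owns the GPU and its HDMI audio function:
lspci -nnk -d 10de:2208      # RTX 3080 Ti
lspci -nnk -d 10de:1aef      # its audio device
# Both should report: Kernel driver in use: vfio-pci
```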

The First Gaming VM

With the host configured, building the VM went smoothly using the Proxmox GUI: Q35 machine, OVMF UEFI, TPM 2.0, VirtIO SCSI storage, and the RTX 3080 Ti as a PCI passthrough device.

One key decision: don’t set the GPU as “Primary GPU” during Windows installation. Keep the virtual display active so you can watch the installer through the Proxmox console. Switch to GPU-only after drivers are installed.

Windows 11 needed a VirtIO driver to see the disk (load vioscsi during install) and another for networking (NetKVM); both come from the VirtIO drivers ISO. USB passthrough for the keyboard and mouse worked immediately using the vendor/device ID method.
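The same build can be scripted with qm instead of the GUI. A sketch for VM 701, where the storage name, PCI address, and USB ID are illustrative placeholders for your own hardware:

```shell
# Core VM shape: Q35 + OVMF + TPM 2.0 + VirtIO SCSI (matches the GUI choices).
qm set 701 --machine q35 --bios ovmf \
  --efidisk0 local-lvm:1 --tpmstate0 local-lvm:1,version=v2.0
qm set 701 --scsihw virtio-scsi-pci --scsi0 local-lvm:750   # 750GB disk
qm set 701 --net0 virtio,bridge=vmbr0                       # NetKVM in the guest

# GPU passthrough WITHOUT making it primary yet: the virtual display stays
# active so the Windows installer is visible in the Proxmox console.
qm set 701 --hostpci0 0000:0b:00,pcie=1

# Keyboard/mouse by vendor:device ID (example ID; find yours with `lsusb`).
qm set 701 --usb0 host=046d:c52b
```

Switching to GPU-only afterward is then `qm set 701 --vga none` plus adding `x-vga=1` to the hostpci entry.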

Going Dual-GPU

The second GPU (RTX 3080) went into the same host. Both cards landed in separate IOMMU groups, which is the key requirement for independent passthrough.

The only real hiccup was display-routing confusion. After binding the first GPU to VFIO, the host appeared to “not boot” when a monitor was connected to it. The system was running fine, just outputting video through the wrong card; SSH confirmed it had been up the whole time.

Updated VFIO config to include both GPU PCI IDs:

options vfio-pci ids=10de:2208,10de:2206,10de:1aef disable_vga=1

Regenerated initramfs, rebooted. Two GPUs bound to vfio-pci, two VMs each with a dedicated card.
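A quick way to confirm both cards landed on vfio-pci after the reboot (the PCI addresses below are examples; list yours with `lspci -nn | grep -i nvidia`):

```shell
# Print the bound driver for each GPU function; every line should say vfio-pci.
for dev in 0000:0b:00.0 0000:0b:00.1 0000:0c:00.0 0000:0c:00.1; do
  printf '%s -> %s\n' "$dev" "$(basename "$(readlink -f /sys/bus/pci/devices/$dev/driver)")"
done
</imports>
```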

Migration to the Permanent Host

The initial setup was on a loaner machine. Moving to the permanent host (AMD Ryzen 9 5900X, 64GB RAM, dual RTX 3080s) meant offline-migrating 750GB VMs across the cluster.

The migration process for VMs with passthrough devices:

  1. Remove GPU and USB passthrough from the VM config
  2. Set VGA back to std
  3. Offline migrate with target storage specified
  4. Re-add passthrough devices using the new host’s PCI addresses
  5. Set VGA back to none
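With qm, the five steps look roughly like this for VM 701 (target node, storage name, PCI address, and USB ID are placeholders for your environment):

```shell
# 1-3: strip passthrough, restore a virtual display, migrate offline.
qm set 701 --delete hostpci0 --delete usb0
qm set 701 --vga std
qm migrate 701 pve008 --targetstorage local-lvm   # VM must be stopped

# 4-5: on the new host, re-attach using its PCI addresses, then go GPU-only.
qm set 701 --hostpci0 0000:0b:00,pcie=1,x-vga=1
qm set 701 --usb0 host=046d:c52b
qm set 701 --vga none
```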

The 5900X has no integrated GPU, so with both discrete cards passed to VMs, the host runs headless. SSH-only management from here on.

TPM state didn’t survive the migration cleanly. Both VMs prompted for BitLocker recovery keys (retrieved from Microsoft accounts), and one needed its TPM logical volume recreated. Worth knowing before you start.

The Final Setup

Each boy’s VM runs with 8 CPU cores, 32GB RAM, a dedicated RTX 3080, 750GB of storage, and USB-passthrough peripherals. Both VMs start simultaneously from the Proxmox interface.
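Starting both from the CLI is two commands, and Proxmox's standard onboot flag (an optional convenience beyond the setup described here) brings them up automatically with the host:

```shell
# Start both on demand...
qm start 701
qm start 702
# ...or have them start automatically whenever the host boots:
qm set 701 --onboot 1
qm set 702 --onboot 1
```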

The boys don’t know they’re on VMs. Performance is native-level since each GPU is exclusively assigned. Fortnite, Roblox, and Minecraft all run without issues.

What I’d Do Differently

Skip the LXC experiment. Every GPU gaming success story on Proxmox uses VMs. I should have researched this before spending six hours on it.

Configure the host first. The silent failures from trying to start VMs before binding GPUs to VFIO wasted two hours. Verify lspci -nnk shows vfio-pci before creating any VM.

Pre-install VirtIO drivers. For P2V migrations, installing VirtIO guest tools on the physical machines before cloning made the transition seamless. The cloned VMs booted without driver issues.

Keep a spare display output. With all GPUs bound to VFIO, you lose console access. Plan for SSH-only management or keep an iGPU-equipped CPU if you want local console.

Why It Works

Instead of maintaining two physical gaming PCs:

  • One host handles both VMs with dedicated GPUs
  • Backups and snapshots are trivial through Proxmox
  • Resource allocation (RAM, CPU cores) is adjustable without opening a case
  • The whole thing integrates with the rest of the homelab infrastructure
  • Management happens from any browser on the network

The consolidation pays for itself in reduced maintenance alone.