GPU virtualization is now very much an on-prem reality thanks to NVIDIA vGPU Software and Proxmox VE. The proposition is familiar: one physical GPU can be split into multiple vGPUs for different virtual machines (VMs) without giving up 3D acceleration, CUDA compute, or video encoding. What follows is a practical—and up-to-date—guide to what you need (hardware, licensing, versions), how to install and configure the solution end-to-end, and what to verify so you don’t lose time on common pitfalls.


Supported platform—under clear conditions

Since NVIDIA vGPU Software 18, Proxmox VE is an officially supported platform. To open a support ticket, two prerequisites must be met in the cluster:

  • A valid NVIDIA vGPU entitlement.
  • An active Proxmox VE subscription (Basic, Standard, or Premium).

As for hardware, the golden rule is simple: check the card in NVIDIA’s Qualified System Catalog and verify server and version compatibility. On some workstation models (e.g., RTX A5000), vGPU capability exists but is not enabled by default; in these cases, you must switch the display mode with the NVIDIA Display Mode Selector, which disables the GPU’s video outputs.


Versions and stack alignment

Proxmox VE publishes tested combinations of pve-manager, kernel, vGPU branch, and NVIDIA host driver. Recent references include:

  • PVE 8.4.1 with kernels 6.8/6.11/6.14 and vGPU 18.3 (host driver 570.158.02).
  • PVE 9.0.x with kernel 6.14.x and vGPU branches 18.4 and 19.2 (drivers 570.172.07 and 580.95.02).

With kernels ≥ 6.8 (GRID ≥ 17.3), you must have qemu-server ≥ 8.2.6 on the host because the driver’s low-level interface changed. NVIDIA also maintains a driver-to-vGPU mapping table you should consult before downloading installers.

Since vGPU 16.0, some cards are no longer supported by vGPU and move to NVIDIA AI Enterprise. The technical procedure is similar, but these are different products with different licenses, and NVIDIA AI Enterprise is not officially supported with Proxmox VE at this time.


Host prerequisites: BIOS/UEFI, repos, and helper

1) Enable PCIe Passthrough and firmware options

In BIOS/UEFI, enable:

  • Intel VT-d or AMD-Vi (IOMMU).
  • SR-IOV (mandatory on Ampere and newer GPUs).
  • Above 4G decoding.
  • ARI (Alternative Routing ID Interpretation; not needed on pre-Ampere).

In the kernel, ensure IOMMU is enabled so PCIe groups are properly isolated.
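
If you need to set it explicitly, a minimal sketch for a GRUB-booted Intel host looks like the following (newer kernels may already enable the Intel IOMMU by default, and the exact parameters vary by platform, so treat this as an illustration):

# In /etc/default/grub, add the IOMMU parameters to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

update-grub        # or: proxmox-boot-tool refresh on systemd-boot/ZFS setups
reboot

# After rebooting, confirm the IOMMU is active and groups exist
dmesg | grep -e DMAR -e IOMMU
ls /sys/kernel/iommu_groups/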

2) Repositories and updates

Keep Proxmox VE current (enterprise or no-subscription repo as appropriate). In production, the enterprise repo is recommended.
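
A quick way to bring the node up to date and double-check the versions that matter later (pick the repository that matches your subscription):

apt update
apt dist-upgrade

# Confirm the relevant component versions on this node
pveversion -v | grep -E 'pve-manager|qemu-server|kernel'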

3) pve-nvidia-vgpu-helper

Since pve-manager ≥ 8.3.4, Proxmox ships a helper that takes care of the initial setup:

apt install pve-nvidia-vgpu-helper
pve-nvidia-vgpu-helper setup

The helper adds headers, installs DKMS, and blacklists nouveau. If nouveau was loaded, reboot the host after this step.

If you later install an opt-in kernel, remember to install the matching proxmox-headers so DKMS can rebuild modules.
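
For example, moving to the 6.14 opt-in series would look roughly like this (package names follow Proxmox's opt-in naming; adjust them to the series you actually install):

apt install proxmox-kernel-6.14 proxmox-headers-6.14
reboot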


Installing the vGPU host driver

  1. Download the host driver from NVIDIA (choose Linux KVM as hypervisor).
  2. Copy the .run onto the node and execute with DKMS:
chmod +x NVIDIA-Linux-x86_64-525.105.14-vgpu-kvm.run
./NVIDIA-Linux-x86_64-525.105.14-vgpu-kvm.run --dkms
  3. Answer yes to registering sources with DKMS.
  4. Reboot the host after a successful install.
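
After the reboot, a quick sanity check on the host confirms the vGPU driver is loaded before moving on:

nvidia-smi                  # should list the physical GPU and the host driver version
lsmod | grep -i nvidia      # the nvidia and nvidia_vgpu_vfio modules should appear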

Secure Boot: if enabled, modules must be signed; see the Secure Boot section below.


Enabling SR-IOV (Ampere and newer)

On Ampere-class GPUs and later, you must enable SR-IOV so the card exposes virtual functions (VFs). You can use NVIDIA’s script or, more conveniently, the included Proxmox service:

systemctl enable --now pve-nvidia-sriov@ALL.service

Replace ALL with a specific PCI ID (e.g., 0000:01:00.0) to target a single GPU. Verify VFs exist with:

lspci -d 10de:

You should see the physical device (01:00.0) and several VFs (01:00.4, 01:00.5, …).
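
To target a single card instead of every GPU, or to double-check how many VFs the kernel exposes, something along these lines works (adjust the PCI address to your slot):

systemctl enable --now pve-nvidia-sriov@0000:01:00.0.service
cat /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs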


Resource Mapping for GPUs (optional, recommended)

In Datacenter → Resource mappings, create a mapping that includes all VFs on the GPU and tick Use with mediated devices. When a VM boots, Proxmox will auto-assign the first free VF from the group. This simplifies operations and separates privileges.


Create the VM and prepare remote access

Create the VM without a vGPU using your usual template (wizard or qm). Important: the web console (noVNC/SPICE) won’t display the vGPU framebuffer; you must configure in-guest remote access:

  • Windows 10/11: enable Remote Desktop (RDP) under Settings → System → Remote Desktop.
  • Linux: use VNC; for example, LightDM + x11vnc (on Rocky Linux you may need EPEL and to open 5900/tcp in firewalld), as sketched below.
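
For the Rocky Linux case, a minimal sketch (x11vnc comes from EPEL; the last command is just one simple way to share the local display, so adapt it to your display manager and security requirements):

dnf install epel-release
dnf install x11vnc
firewall-cmd --permanent --add-port=5900/tcp
firewall-cmd --reload
x11vnc -display :0 -auth guess -forever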

Add the vGPU to the VM

With the VM powered off, assign one VF and the appropriate mediated device type.

CLI (example):

qm set VMID -hostpci0 01:00.4,mdev=nvidia-660

Web UI: Hardware → Add → PCI Device, select the VF and mdev type.
To list types available on your node:

pvesh get /nodes/NODE-NAME/hardware/pci/MAPPING-NAME/mdev

The list depends on the driver and kernel versions.
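
If you created a resource mapping earlier, you can reference it instead of a raw VF and let Proxmox pick the first free function; the mapping name below is whatever you chose under Datacenter → Resource mappings:

qm set VMID -hostpci0 mapping=MAPPING-NAME,mdev=nvidia-660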


Guest drivers (Windows and Linux)

Windows 10/11

Download the GRID guest driver that matches your host driver (check NVIDIA’s matrix). Run the installer (…_grid_win10_win11_….exe) and reboot. From here on, use RDP; the web console won’t render the vGPU output.

Ubuntu / Rocky Linux

Install the GRID package for your distribution:

# Ubuntu (.deb)
apt install ./nvidia-linux-grid-550_550.127.05_amd64.deb

# Rocky (.rpm)
dnf install nvidia-linux-grid-550-550.127.05-1.x86_64.rpm

Generate the X.org config:

nvidia-xconfig

Reboot and connect via VNC. If you will use CUDA, install the CUDA Toolkit compatible with your driver.
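
A quick in-guest check that the GRID driver (and, if installed, the CUDA Toolkit) is visible:

nvidia-smi
nvcc --version   # only if the CUDA Toolkit is installed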


vGPU licensing inside the guest

Each VM using a vGPU must obtain a license. NVIDIA offers several options, including DLS (Delegated License Service). Key recommendations:

  • Ensure the VM’s time is synchronized via NTP; if the clock is wrong, licensing will fail.
  • Verify connectivity between the VM and the license server.
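
On a Linux guest, the usual DLS flow is to place the client configuration token where the GRID service expects it and restart that service; the paths and service name below follow NVIDIA's client licensing documentation, so double-check them against your driver branch:

cp client_configuration_token_*.tok /etc/nvidia/ClientConfigToken/
systemctl restart nvidia-gridd
nvidia-smi -q | grep -i -A 2 license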

Common issues—and how to avoid them

  • Windows 10/11 – “Fast startup.” With the option enabled, a hybrid shutdown can lead to a BSOD and the vGPU being disabled. Fix: disable Fast startup:
    Control Panel → Power Options → Choose what the power button does → Untick “Turn on fast startup”
    or from an elevated prompt: powercfg -h off
  • QEMU warning (AER) on VM boot: "vfio ... Could not enable error recovery for the device". Typically seen on consumer hardware lacking full PCIe AER support. It doesn’t affect normal operation; it only means some link errors might not be soft-recoverable.
  • No video in the Proxmox web console after guest driver install. Expected: the vGPU owns the guest adapter and the built-in console won’t display it. Use RDP/VNC.

Secure Boot: signing the NVIDIA module

With Secure Boot enabled, the kernel will only load signed modules:

  1. Install prerequisites:
apt install shim-signed grub-efi-amd64-signed mokutil
  2. Run the NVIDIA installer with DKMS but skip module load:
./NVIDIA-Linux-x86_64-525.105.14-vgpu-kvm.run --dkms --skip-module-load

Answer no when asked to sign the module.
  3. Rebuild and sign with DKMS (adjust the version):

dkms status
dkms build  -m nvidia -v 550.144.02 --force
dkms install -m nvidia -v 550.144.02 --force
  4. Enroll the DKMS MOK key in UEFI (you need console access to confirm the import at boot); see the mokutil example after this procedure.
  5. Verify:
lspci -d 10de: -nnk
# Should show: Kernel driver in use: nvidia
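
The enrollment itself is typically done with mokutil; on Debian-based hosts the DKMS key pair is usually generated under /var/lib/dkms, but verify the path on your system before importing:

mokutil --import /var/lib/dkms/mok.pub
reboot   # confirm the MOK enrollment in the firmware prompt during boot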

Checklist (0-to-vGPU, without surprises)

  • BIOS/UEFI: enable IOMMU, SR-IOV (Ampere+), Above 4G, ARI.
  • Keep Proxmox VE updated; ensure qemu-server ≥ 8.2.6 if kernel ≥ 6.8.
  • Run pve-nvidia-vgpu-helper setup; reboot if it blacklisted nouveau.
  • Install the NVIDIA host driver with --dkms and reboot.
  • Enable SR-IOV (pve-nvidia-sriov@...) and confirm VFs.
  • (Optional) Create a Resource Mapping with all VFs.
  • Prepare RDP/VNC inside the guest.
  • Add the vGPU (VF + correct mdev type).
  • Install guest GRID driver and CUDA if needed.
  • Configure licensing (e.g., DLS) and NTP on the guest.

Frequently asked questions (FAQ)

What licensing and support requirements apply to NVIDIA vGPU on Proxmox VE?
Official support requires an active NVIDIA vGPU entitlement and a valid Proxmox VE subscription (Basic, Standard, or Premium). Each VM using a vGPU needs a guest license (e.g., via DLS). Without licensing, the vGPU may run with limitations.

Is SR-IOV mandatory to use vGPU?
On Ampere and newer GPUs, yes: you must enable SR-IOV to expose virtual functions (VFs). On earlier generations, vGPU can work without SR-IOV. Proxmox ships the pve-nvidia-sriov@... service to automate this at boot.

The Proxmox web console shows no display after installing the guest driver. Is that expected?
Yes. The vGPU owns the guest graphics adapter and the built-in console (noVNC/SPICE) cannot render it. Use Remote Desktop (Windows) or VNC/RDP (Linux) to interact with the VM.

How do I pick the right vGPU (mdev) model for my workload?
List types available on your node:

pvesh get /nodes/NODE/hardware/pci/MAPPING/mdev

Choose the vGPU profile that fits your VM (frame buffer size, number of VMs per GPU, 3D vs compute). Visibility of models depends on your host driver and kernel.


Bottom line. The NVIDIA vGPU + Proxmox VE duo lets you get much more out of a physical GPU by distributing its performance across multiple VMs. Success hinges on version alignment, automating SR-IOV, correct guest configuration (drivers and remote access), and licensing each vGPU. With that foundation in place, graphics and compute acceleration stops being a per-server luxury and becomes a shared pool under the hypervisor’s control.
