Virtualizing GPUs, once reserved for large enterprises with complex infrastructures, is now far more accessible thanks to official NVIDIA vGPU support on Proxmox VE. Since version 18 of NVIDIA’s vGPU software, Proxmox VE has been an officially supported platform, allowing multiple virtual machines to share a single high-end physical GPU. This combination opens new opportunities in fields such as AI, scientific computing, engineering, 3D rendering, and media production.

The Power of Proxmox VE and NVIDIA vGPU Integration

Proxmox VE, a leading open-source hypervisor, is known for its robustness and flexibility. The integration of NVIDIA vGPU allows organizations to maximize their hardware investments, enabling GPU sharing across multiple workloads without compromising performance.

System Requirements and Setup

To fully benefit from this advanced setup, organizations need:

  • An active Proxmox VE subscription (Basic, Standard, or Premium).
  • A valid NVIDIA vGPU license.
  • Certified enterprise-grade hardware.
  • Virtualization and passthrough features enabled in the BIOS/UEFI: an IOMMU (Intel VT-d or AMD-Vi), SR-IOV, and Above 4G Decoding (see the kernel command-line sketch after this list).
  • At least Proxmox VE version 8.3.4 with kernel 6.8 or newer.
  • The pve-nvidia-vgpu-helper tool, included from version 8.3.4 onwards, to streamline setup.
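
On the host, the IOMMU is typically enabled through a kernel command-line parameter. A minimal sketch for an Intel CPU on a GRUB-booted system (AMD hosts use amd_iommu=on instead, and recent Proxmox kernels from 6.8 onwards enable the Intel IOMMU by default):

    # /etc/default/grub -- enable the IOMMU in passthrough mode (Intel example)
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

    # Apply the change, reboot, and verify the IOMMU is active
    update-grub
    reboot
    dmesg | grep -e DMAR -e IOMMU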

Once the prerequisites are in place, administrators can install the NVIDIA host driver, enable SR-IOV on Ampere and newer GPUs, and map PCI resources for easier management, as sketched below.
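
A sketch of that flow, following the Proxmox VE wiki (the driver file name is a placeholder; the exact version ships with the vGPU release downloaded from the NVIDIA enterprise portal):

    # Install the helper and let it pull in build dependencies (headers, dkms)
    apt update
    apt install pve-nvidia-vgpu-helper
    pve-nvidia-vgpu-helper setup

    # Install the NVIDIA vGPU host (KVM) driver with DKMS support
    chmod +x NVIDIA-Linux-x86_64-<version>-vgpu-kvm.run
    ./NVIDIA-Linux-x86_64-<version>-vgpu-kvm.run --dkms

    # Ampere and newer: persistently enable SR-IOV virtual functions on all
    # NVIDIA GPUs via the systemd template shipped with the helper
    systemctl enable --now pve-nvidia-sriov@ALL.service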


Officially Supported GPUs for NVIDIA vGPU on Proxmox VE

Based on NVIDIA’s official documentation, the following GPUs are currently supported for vGPU use on Proxmox VE:

NVIDIA RTX Professional and Workstation Series

  • RTX A2000
  • RTX A4000
  • RTX A5000
  • RTX A6000
  • RTX 4000 Ada Generation
  • RTX 5000 Ada Generation
  • RTX 6000 Ada Generation

NVIDIA Data Center GPUs (Ampere and beyond)

  • NVIDIA A2
  • NVIDIA A10
  • NVIDIA A16
  • NVIDIA A30
  • NVIDIA A40
  • NVIDIA A100
  • NVIDIA L4
  • NVIDIA L40
  • NVIDIA L40S
  • NVIDIA H100 (in SR-IOV-enabled, vGPU-supported environments)

Turing Architecture GPUs

  • Quadro RTX 4000
  • Quadro RTX 5000
  • Quadro RTX 6000
  • Quadro RTX 8000

Legacy GPUs (Pascal/Volta, supported in earlier vGPU versions)

  • Tesla P4
  • Tesla P40
  • Tesla P100
  • Tesla V100

Note: Some workstation GPUs, like the RTX A5000, require switching to display-less mode using NVIDIA’s Display Mode Selector Tool to enable vGPU functionality.
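
The tool is distributed through the NVIDIA enterprise portal. Its invocation is roughly as follows; treat the mode name as an illustration and confirm it against the tool’s own help output:

    # Switch a supported workstation GPU into display-less (compute) mode
    chmod +x displaymodeselector
    ./displaymodeselector --gpumode physical_display_disabled
    # Power-cycle the host for the mode change to take effect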

A complete, up-to-date list is available in the NVIDIA Qualified System Catalog.


Guest VM Configuration and Remote Access

After assigning a vGPU to a VM, administrators must rely on remote access solutions, because Proxmox VE’s built-in consoles (VNC/SPICE) cannot display output rendered by a vGPU.
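
For reference, attaching a vGPU from the CLI looks roughly like this; the node name, VM ID 100, the mapping name nvidia-a5000, and the mdev type nvidia-660 are all placeholders for values from your own setup:

    # List the vGPU (mdev) types a given virtual function offers
    pvesh get /nodes/<node>/hardware/pci/0000:01:00.4/mdev

    # Attach one vGPU instance to VM 100 through a cluster-wide resource mapping
    qm set 100 --hostpci0 mapping=nvidia-a5000,mdev=nvidia-660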

For Windows guests, enabling Remote Desktop is the simplest option. On Linux guests, using tools like x11vnc alongside LightDM as the display manager is recommended.
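
As an illustration of the Linux-guest side, x11vnc can be attached to the X session that LightDM starts (the display number and the authority file path assume a stock LightDM install):

    # Inside the Linux guest: expose the local X session over VNC
    x11vnc -display :0 -auth /var/run/lightdm/root/:0 -forever -loop -repeat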


Licensing and Best Practices

To run vGPUs without feature restrictions, proper licensing through the NVIDIA License System, typically via a Delegated License Service (DLS) instance, is mandatory. Ensure that guest machines keep their system clocks synchronized via NTP, as clock drift can cause license validation failures.
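
On a Linux guest, for example, licensing boils down to installing the client configuration token generated by the DLS instance and keeping the clock in sync (the token directory is the one documented in NVIDIA’s client licensing guide):

    # Inside the Linux guest: install the token and restart the licensing daemon
    cp client_configuration_token_*.tok /etc/nvidia/ClientConfigToken/
    systemctl restart nvidia-gridd

    # Keep the clock synchronized so license validation does not fail
    timedatectl set-ntp true

    # Confirm that a license was acquired
    nvidia-smi -q | grep -i license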


Conclusion: A Major Leap in GPU Virtualization

The combination of NVIDIA vGPU and Proxmox VE allows enterprises and data centers to virtualize GPU resources for demanding workloads, reducing hardware costs and increasing flexibility. Whether for AI model training, 3D simulations, or high-end graphical applications, this integration offers scalability, efficiency, and future-proof infrastructure.

For full technical details, visit the official documentation at pve.proxmox.com and NVIDIA’s vGPU documentation pages.
