A detailed benchmark shows Proxmox outperforming VMware ESXi in almost every metric, with nearly 50% more IOPS and roughly 30% lower latency on average.
A comprehensive study conducted by Blockbridge Technologies compares the storage performance of Proxmox VE 7.2 and VMware ESXi 7.0 Update 3c using NVMe/TCP and reveals a clear winner. Under identical hardware configurations and with 32 virtual machines running concurrently, Proxmox consistently outperformed VMware in throughput, latency, and IOPS.
Methodology and Setup
The tests were conducted on a Dell PowerEdge R7515 equipped with a 32-core AMD EPYC 7452 processor and a dual-port 100 Gbit Mellanox network adapter. The system dual-booted Proxmox and ESXi, and all storage was accessed over NVMe/TCP from Blockbridge 6 storage backends.
A total of 32 Ubuntu-based virtual machines were deployed on each platform, each assigned four virtual CPUs and one virtual disk. I/O performance was measured with fio running in server mode, coordinated by an external controller, as sketched below. Benchmarks covered block sizes from 512 bytes to 128 KiB and queue depths from 1 to 128, with each workload measured over a 20-minute window after a one-minute warm-up.
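To make the setup concrete, here is a minimal sketch of how such a sweep could be driven from an external controller using fio's client/server mode. The hostnames, the in-guest test device (/dev/sdb), and the random-read workload are illustrative assumptions, not details taken from the Blockbridge report.

```python
#!/usr/bin/env python3
"""Illustrative fio sweep driver, not the actual Blockbridge harness.

Assumes every VM runs `fio --server` and is reachable by hostname; the
hostnames, test device path, and workload mix below are placeholders.
"""
import itertools
import pathlib
import subprocess

VM_HOSTS = [f"vm{i:02d}" for i in range(1, 33)]    # hypothetical guest names
BLOCK_SIZES = ["512", "4k", "16k", "64k", "128k"]  # subset of the 512 B - 128 KiB sweep
QUEUE_DEPTHS = [1, 16, 128]

JOB_TEMPLATE = """\
[randread]
ioengine=libaio
direct=1
rw=randread
bs={bs}
iodepth={qd}
filename=/dev/sdb
time_based
ramp_time=60
runtime=1200
"""

def run_workload(bs: str, qd: int) -> None:
    """Run one workload on all 32 guests concurrently via fio's client/server mode."""
    job = pathlib.Path(f"randread_{bs}_qd{qd}.fio")
    job.write_text(JOB_TEMPLATE.format(bs=bs, qd=qd))
    cmd = ["fio", "--output-format=json", f"--output=randread_{bs}_qd{qd}.json"]
    for host in VM_HOSTS:                          # one job file per remote fio server
        cmd += [f"--client={host}", str(job)]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    for bs, qd in itertools.product(BLOCK_SIZES, QUEUE_DEPTHS):
        run_workload(bs, qd)
```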
Key Results
Proxmox Dominates in IOPS
Proxmox VE delivered better IOPS than VMware in 56 of 57 test cases, with average gains nearing 50% and peak improvements exceeding 70% at high queue depths.
Average IOPS Improvement by Queue Depth:
- QD=1: +20.3%
- QD=16: +43.6%
- QD=128: +48.9%
Lower Latency With Proxmox
The open-source hypervisor also achieved significantly lower latency across all workloads, reducing response times by an average of 32.6%.
Average Latency Reduction by Queue Depth:
- QD=1: -16.0%
- QD=16: -30.4%
- QD=128: -32.6%
Higher Bandwidth for Proxmox
Under peak load, Proxmox sustained a data throughput of up to 12.8 GB/s, compared to VMware’s 9.3 GB/s. This 38% improvement is especially relevant for storage-intensive environments.
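The percentages quoted above are plain relative deltas. The short sketch below reproduces the bandwidth gain from the figures in this article (12.8 GB/s vs. 9.3 GB/s) and applies the same formula to placeholder IOPS and latency values; only the bandwidth numbers come from the study.

```python
def relative_change(new: float, baseline: float) -> float:
    """Percentage change of `new` relative to `baseline`."""
    return (new - baseline) / baseline * 100.0

# Bandwidth figures from the article: Proxmox 12.8 GB/s vs. VMware 9.3 GB/s.
print(f"bandwidth: {relative_change(12.8, 9.3):+.1f}%")    # -> +37.6%, i.e. ~38%

# Placeholder IOPS and latency pairs (not the study's raw data), same formula:
proxmox_iops, esxi_iops = 600_000, 400_000
print(f"IOPS:      {relative_change(proxmox_iops, esxi_iops):+.1f}%")       # +50.0%

proxmox_lat_us, esxi_lat_us = 270.0, 400.0                 # mean latency in microseconds
print(f"latency:   {relative_change(proxmox_lat_us, esxi_lat_us):+.1f}%")   # -32.5%
```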
Architectural Differences
VMware relies heavily on SCSI-based virtualization and its VMFS file system, introducing additional scheduling layers. By contrast, Proxmox uses virtio-scsi with raw block devices, bypassing file system overhead and taking advantage of Linux’s no-op scheduler for NVMe.
- VMware path: PVSCSI → VMFS → SCSI shim → NVMe
- Proxmox path: virtio-SCSI → raw block device → noop scheduler → NVMe
This streamlined I/O stack in Proxmox contributes significantly to its performance advantage.
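The scheduler in Proxmox's path is easy to verify from inside a Linux host or guest: on current kernels the noop-equivalent is the blk-mq "none" scheduler, and the active choice appears in brackets in sysfs. The sketch below simply reads that file for every block device; it is a generic check, not part of the benchmark itself.

```python
import pathlib

# Read the active I/O scheduler for each block device from sysfs.
# On modern kernels the "noop"-equivalent is the blk-mq "none" scheduler,
# which NVMe devices typically use by default; the active choice is the
# bracketed entry, e.g. "[none] mq-deadline".
for sched in sorted(pathlib.Path("/sys/block").glob("*/queue/scheduler")):
    device = sched.parent.parent.name
    print(f"{device}: {sched.read_text().strip()}")
```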
Conclusions
These results validate the advantages of open-source virtualization solutions in high-performance scenarios. Proxmox VE not only competes with VMware ESXi but surpasses it in critical metrics, especially for modern storage architectures like NVMe/TCP.
While VMware remains a dominant player in enterprise environments, Proxmox offers a compelling alternative with superior performance, reduced complexity, and no licensing costs. For sysadmins and infrastructure architects evaluating virtualization platforms, Proxmox VE merits serious consideration.
For the full benchmark data and methodology, see the Blockbridge report.