Moving from VMware ESXi/vCenter to Proxmox VE has gone from “interesting” to inevitable for many teams in 2025. Budgets are tighter, licensing is simpler on the Proxmox side, and the product has matured with built-in ESXi import, live-import, HA/SDN, and a robust backup stack. This guide brings everything into one place: what to prepare, how to migrate (automatic and manual), how to size and optimize VMs on Proxmox VE, and how to avoid the usual pitfalls, so you can plan the move from VMware to Proxmox and actually get the job done.


Why migrate from VMware to Proxmox VE?

  • Cost & simplicity: Proxmox VE is FLOSS (AGPLv3). Optional subscriptions give you enterprise repos and support, but features aren’t paywalled.
  • Modern stack: Debian base, custom Linux kernel, KVM/QEMU for VMs, LXC for containers, SDN, firewall, REST API, GUI & CLI.
  • Flexible storage: local ZFS, LVM, Btrfs; shared Ceph RBD, NFS/SMB, iSCSI/FC; plugins & content types (ISO, templates, backups).
  • Production-grade clustering: multi-master with Corosync (quorum), HA with failover, fencing, and live-migration (when configured properly).
  • Integrated backups: Proxmox Backup Server (dedup + incremental + live-restore) or classic vzdump.

Bottom line: You can lift-and-shift ESXi workloads, keep downtime minimal, and still end up with a simpler, open platform.


Before you start: migration prep checklist (do this first)

  1. Version & updates
    • Use Proxmox VE 8.x or newer with the latest updates installed. For new clusters, at least 3 nodes are recommended for stable quorum.
  2. Networking plan
    • Dedicated Corosync network(s) for the cluster; Linux bridges (vmbrX) for VM networks; bonds/VLANs as needed.
  3. Storage plan
    • Decide local vs shared. For shared, prefer Ceph RBD or NFS/SMB. With SAN (FC/iSCSI), understand snapshot limits (see below).
  4. VM inventory & constraints
    • Note BIOS/UEFI, disk type, controllers, TPM usage, vTPM encryption, static IPs/DHCP reservations, any vSAN dependencies, snapshot chains.
  5. Guest tools & drivers
    • Uninstall VMware Tools in the guest. Prepare VirtIO drivers (Windows) and confirm the VirtIO modules are in the initramfs (Linux); see the prep sketch after this checklist.
  6. Security/crypto
    • If VMs use full-disk encryption or a vTPM, export/escrow the recovery keys or temporarily disable the vTPM; vTPM state cannot be migrated and must be re-created on Proxmox.
  7. Windows specifics
    • Plan to move the boot disk to VirtIO SCSI once the VirtIO drivers are installed; boot from IDE/SATA first if needed, then switch.
  8. Cutover window
    • Pick test VMs, dry-run the flow, then schedule production with rollback plan.
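
Before the cutover, a quick guest-side prep pass pays off. Here is a minimal sketch for a Debian/Ubuntu guest (package names and initramfs tooling differ on RHEL-family systems; for Windows, download the virtio-win ISO instead and keep it attached for the first boot):

  # Run inside the Linux guest before migration (assumes initramfs-tools)
  apt purge open-vm-tools                    # remove VMware guest tools
  apt install qemu-guest-agent               # becomes active once the VM runs under KVM
  lsinitramfs /boot/initrd.img-$(uname -r) | grep -i virtio   # are the VirtIO drivers in the initramfs?
  # If nothing shows up, add the modules and rebuild:
  # printf 'virtio_pci\nvirtio_scsi\nvirtio_blk\nvirtio_net\n' >> /etc/initramfs-tools/modules
  # update-initramfs -u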

Proxmox VE best-practice VM settings (so migration doesn’t bite later)

  • CPU type:
    • Homogeneous CPUs in the cluster? Use the host CPU type for the full feature set and best performance.
    • Mixed CPUs or future expansion? Use a generic x86-64-v<X> model to preserve live-migration.
  • Network: VirtIO NIC for lowest overhead (fall back to e1000/rtl8139 only for very old OSes).
  • Memory: Enable Ballooning Device (helps telemetry even if you don’t balloon).
  • Disk: SCSI bus + VirtIO SCSI single controller; discard (trim) on thin-provisioned storage; IO thread enabled.
  • QEMU agent: install the qemu-guest-agent in the VM for clean shutdowns, IP reporting, commands, etc.
  • Boot mode: match the source: SeaBIOS (legacy BIOS) vs OVMF (UEFI). If a UEFI guest won’t boot, make sure an EFI Disk is attached and, if needed, add a boot entry for the OS bootloader manually in the OVMF boot menu.
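
Most of these settings can also be applied from the node’s shell. A sketch with qm set, where VMID 120, storage local-zfs, the disk volume name, and bridge vmbr0 are placeholders for your environment:

  qm set 120 --cpu x86-64-v2-AES --scsihw virtio-scsi-single --agent enabled=1
  qm set 120 --net0 virtio,bridge=vmbr0      # note: redefining net0 generates a new MAC unless you specify one
  # re-attach the OS disk with discard and IO thread enabled (volume name is an example)
  qm set 120 --scsi0 local-zfs:vm-120-disk-0,discard=on,iothread=1
  qm set 120 --boot order=scsi0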

Method 1 — Automatic ESXi import (fastest path “VMware to Proxmox”)

Proxmox VE ships an integrated ESXi importer:

  1. Add ESXi import source: Datacenter → Storage → Add → ESXi. Use ESXi host creds (vCenter works but is slower).
  2. Browse & select VMs in the import panel.
  3. Choose target storage/bridge, tweak hardware mapping (NIC model, ISO, per-disk storage).
  4. Power down VM in ESXi, then start import in Proxmox.
  5. Boot, install VirtIO/QEMU agent, fix network MAC/DHCP reservations, validate services.
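
After the first boot, the guest agent makes validation easy from the Proxmox shell. A quick sketch (VMID 120 is a placeholder):

  qm guest cmd 120 ping                        # is the QEMU guest agent responding?
  qm guest cmd 120 network-get-interfaces      # MACs and IPs as the guest sees them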

Notes & caveats

  • Works from ESXi 6.5 → 8.0.
  • Datastores with ‘+’ or special chars may fail—rename first.
  • vSAN-backed disks: not importable; move disks off vSAN first.
  • Encrypted disks (policy-based) can’t be imported until encryption is removed.
  • Snapshots on the source slow imports. Consider consolidating beforehand.
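
Snapshot chains can be checked and collapsed directly in the ESXi shell before you start. A sketch with vim-cmd (the Vmid here is the ESXi-side ID from getallvms, not a Proxmox VMID):

  vim-cmd vmsvc/getallvms                  # note the Vmid of the source VM
  vim-cmd vmsvc/snapshot.get <Vmid>        # list existing snapshots
  vim-cmd vmsvc/snapshot.removeall <Vmid>  # consolidate the whole chain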

Live-import to reduce downtime

Live-import copies just enough disk data to start the VM, then streams the rest in the background. Expect reduced I/O performance while that runs, and avoid it over lossy or slow links. If the import fails, any writes made since the VM started are lost, so test the flow on a staging VM first.

Mass import without hurting ESXi

Keep parallelism low (rule of thumb: ≤4 disks importing at once). The ESXi API starts throttling quickly; Proxmox’s esxi-folder-fuse helper mitigates this, but don’t spawn dozens of jobs. Also watch RAM on the Proxmox host, since the importer’s readahead cache consumes memory.


Method 2 — OVF/OVA export + qm importovf (portable & reliable)

When the GUI importer isn’t an option:

  1. On a Linux box (or Proxmox host), install/unzip ovftool from VMware.
  2. Export from ESXi/vCenter, e.g.:
     ./ovftool vi://root@ESXI-FQDN/VM-Name /exports/VM-Name/
     # or via vCenter:
     ./ovftool vi://user:pass@VCENTER/DC/vm/VM-Name /exports/VM-Name/
  3. Copy the {VM}.ovf and its matching .vmdk files somewhere the Proxmox host can read them (local disk or a mounted share).
  4. Import: qm importovf <vmid> VM-Name.ovf <target-storage>
  5. Add NIC, set CPU type & SCSI controller: qm set <vmid> --cpu x86-64-v2-AES --scsihw virtio-scsi-single
  6. For Windows, temporarily attach the disk as IDE/SATA to boot, install VirtIO storage drivers, then switch to VirtIO SCSI.
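
Put together, the whole flow looks roughly like this (hostnames, the VM name WebSrv01, VMID 130, and storage local-zfs are placeholders):

  ./ovftool vi://root@esxi01.example.com/WebSrv01 /exports/WebSrv01/
  scp -r /exports/WebSrv01 root@pve01.example.com:/var/tmp/
  # on the Proxmox node:
  qm importovf 130 /var/tmp/WebSrv01/WebSrv01.ovf local-zfs
  qm set 130 --cpu x86-64-v2-AES --scsihw virtio-scsi-single --net0 virtio,bridge=vmbr0 --agent enabled=1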

Method 3 — qm disk import (direct VMDK import/convert)

If you can access the *.vmdk (and *-flat.vmdk) from Proxmox (e.g., NFS/SMB share):

  1. Create a target VM in Proxmox (remove default disk).
  2. SSH/shell on the Proxmox node, cd into the share path (e.g., /mnt/pve/<share>/<VM>).
  3. Import (auto-converts to target format): qm disk import <vmid> Server.vmdk <target-storage> --format qcow2
  4. Attach the now “Unused” disk to the VM (SCSI/VirtIO if drivers are ready).
  5. Set boot order in Options → Boot Order.
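
A condensed example of the same flow (VMID 140, share name nfs-esxi, VM name Server, and storage local-lvm are placeholders; qm disk import converts to the target storage’s native format, so --format only matters on file-based storages):

  qm create 140 --name fileserver01 --memory 8192 --cores 4 \
    --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-single --agent enabled=1
  cd /mnt/pve/nfs-esxi/Server
  qm disk import 140 Server.vmdk local-lvm        # shows up as an "Unused" disk afterwards
  qm set 140 --scsi0 local-lvm:vm-140-disk-0      # attach it to the VirtIO SCSI controller
  qm set 140 --boot order=scsi0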

Method 4 — Attach & Move (near-zero downtime)

When both hypervisors can see the same share:

  1. Add that share to Proxmox as storage (Disk Image) → it mounts at /mnt/pve/<storage>.
  2. Create the target VM; for the OS disk, point to the share and format=vmdk (Proxmox will create images/<vmid>/vm-<vmid>-disk-0.vmdk).
  3. Overwrite that descriptor with the source VMDK descriptor and edit its Extent line so the path points (relative to the descriptor) at the source *-flat.vmdk (e.g., ../../Server/Server-flat.vmdk).
  4. Power on the VM in Proxmox (boots off the source flat file).
  5. While running, Disk Action → Move Storage to final storage (then delete the temporary VMDK descriptor).
  6. After cutover, uninstall VMware tools, install QEMU agent & VirtIO storage/net, test, switch to VirtIO SCSI.
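
Step 5 can also be done from the CLI while the VM keeps running. A sketch, with VMID 141 and target storage ceph-vm as placeholders:

  qm disk move 141 scsi0 ceph-vm     # live storage migration off the shared vmdk descriptor
  # once it completes, detach and delete the temporary descriptor that is now listed as unused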

Storage choices that won’t surprise you on Proxmox

  • Local ZFS: great for small clusters; combine with Replication for resilience (note: replication is asynchronous and interval-based, so expect a small but non-zero RPO).
  • Ceph RBD (recommended shared): first-class integration, snapshots, live-migration, HA.
  • NFS/SMB: fine for VM disks as qcow2 (which gives you VM snapshots); container volumes on these storages are raw, so containers can’t be snapshotted there.
  • SAN (FC/iSCSI):
    • Shared LVM-thick: simple, but snapshots aren’t native. In Proxmox VE 9.0, “Snapshots as Volume-Chain” (tech preview) enables qcow2-style snapshots on LVM-thick (TPM state excluded).
    • One LUN per disk: snapshot-capable but high-maintenance—avoid at scale.
    • ZFS over iSCSI: works with supported boxes.
    • Always configure multipath for redundancy.

Tip: If snapshots are a hard requirement and SAN options are messy, consider NFS or Ceph, or adopt a backup + live-restore strategy with Proxmox Backup Server.
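
For reference, adding a shared NFS store is only a few lines in /etc/pve/storage.cfg (server, export, and storage name below are examples); Ceph pools are usually created through the GUI or pveceph instead:

  nfs: vmstore-nfs
          server 192.168.10.50
          export /export/vmstore
          path /mnt/pve/vmstore-nfs
          content images,iso
          options vers=4.2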


High Availability: what you must get right

  • Quorum & Corosync: ≥3 votes (use QDevice in 2-node setups). Keep latency low; isolate Corosync from storage/backup traffic.
  • Fencing: nodes that lose quorum self-fence to protect data; this is by design.
  • Shared access: HA needs guest disks on shared storage (or ZFS replication with its caveats).
  • No COLO (lockstep) yet for dual-run VMs (QEMU COLO is still under development upstream).
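
In a 2-node cluster, the third vote typically comes from an external QDevice (corosync-qnetd running on any small Linux box). A sketch, with the QDevice IP as a placeholder:

  pvecm qdevice setup 192.168.10.9     # run once from a cluster node
  pvecm status                         # confirm expected votes and quorum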

After the migration: the 10-minute hardening pass

  • Re-add DHCP reservations or set static IPs on the new NIC MACs.
  • Enable firewall rules at Datacenter/Node/VM as needed.
  • Turn on discard for thin storage, verify IO thread on heavy VMs.
  • Install qemu-guest-agent everywhere (Windows & Linux).
  • Backups: schedule to Proxmox Backup Server (dedup + incremental + live-restore).
  • Monitoring: expose metrics; check balloon/CPU steal; baseline VM latency and throughput.
  • Live-migration smoke test between nodes.
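
A few of these checks are one-liners from the node’s shell (VMID 120 and node pve02 are placeholders):

  qm guest exec 120 -- fstrim -av       # reclaim thin-provisioned space in a Linux guest
  qm migrate 120 pve02 --online         # live-migration smoke test between nodes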

Troubleshooting: common “vmware to proxmox” gotchas

  • VM won’t boot (virtio-scsi missing): switch disk to IDE/SATA to boot; install VirtIO storage drivers; switch back to VirtIO SCSI.
  • UEFI boot black hole: add custom EFI entry and ensure an EFI Disk is attached.
  • TPM/vTPM: state isn’t portable; decrypt externally or disable vTPM, then re-enable on Proxmox.
  • vSAN: move disks off vSAN before import.
  • Slow imports: collapse snapshots, avoid vCenter path if performance is critical, limit parallel jobs.

Sample timelines (realistic cutovers)

  • Single medium VM (200 GB), same datacenter over 10 GbE
    • Prep + test: 30–60 min
    • Automatic import: 15–45 min (depending on storage)
    • Post steps (drivers, IPs): 10–20 min
    • Total downtime: typically 10–30 min (or much less with live-import)
  • Batch of 20 VMs (mixed sizes)
    • Stagger imports (≤4 disks at once), overnight for heavy disks
    • Parallel team: one watching ESXi API logs, one attaching drivers/agent, one validating services

Quick comparison table (VMware → Proxmox VE)

Topic            | VMware (ESXi/vCenter) | Proxmox VE
License model    | Proprietary           | FLOSS (AGPLv3) + optional subscription
VM format        | VMDK                  | QCOW2/RAW (VMDK import supported)
Live-import      | n/a                   | Built-in (reduced downtime)
HA/Cluster       | vSphere HA, DRS       | Corosync + HA, live-migration
Backup           | VADP ecosystem        | Proxmox Backup Server (dedup, incremental, live-restore)
SDN/Network      | vDS/NSX               | Linux bridge/bond/VLAN + SDN
Shared storage   | vSAN/VMFS/NFS         | Ceph/NFS/SMB/iSCSI/FC (plugins)

FAQs

How do I migrate VMware ESXi to Proxmox VE with minimal downtime?
Use the automatic ESXi import with live-import enabled, or the Attach & Move method via a shared datastore. Prep VirtIO drivers first to avoid driver flips during the cutover.

Can I import VMs directly from vCenter to Proxmox VE?
Yes, but performance is typically better via ESXi host import. For OVF, export with ovftool and run qm importovf on Proxmox.

What’s the fastest way to convert VMDK to Proxmox format?
qm disk import <vmid> <file.vmdk> <storage> --format qcow2 converts on the fly and attaches as an “Unused” disk ready to map.

Do Proxmox clusters require three nodes?
For stable quorum, yes—≥3 votes. In 2-node setups, add a QDevice to provide the third vote.

Are SAN snapshots supported on Proxmox VE?
Depends on the layout. Shared LVM-thick is simple but lacks native snapshots (Proxmox VE 9.0 adds Snapshots as Volume-Chain tech preview for qcow2). Ceph/NFS with qcow2 are snapshot-friendly for VMs.


TL;DR — the shortest route “VMware to Proxmox”

  1. Clean the guest (remove VMware tools, prep VirtIO/QEMU agent, note IP/DHCP).
  2. Automatic ESXi import (or OVF + qm importovf) to your target storage.
  3. Switch disks to VirtIO SCSI, NIC to VirtIO, enable balloon, install qemu-guest-agent.
  4. Validate, then cut over (live-import or Attach & Move for near-zero downtime).
  5. Backups & HA: point at Proxmox Backup Server, test live-migration.

If you need enterprise support, add subscriptions per node and use the Enterprise repo; for community help, lean on the forum and docs. Either way, a well-planned migration from VMware to Proxmox VE is entirely achievable—and much easier in 2025 than it used to be.
