Live migration is one of those features that makes a Proxmox VE cluster feel “enterprise-grade” the moment you start using it seriously. When it works, it’s almost invisible: a VM moves from one node to another with little or no noticeable disruption. When it doesn’t, it tends to fail at the worst possible time — during maintenance windows, load spikes, or urgent incident response — and the root cause often boils down to something that’s easy to underestimate until it hurts: CPU compatibility across the entire cluster.
Real-world clusters are rarely built from identical servers purchased in a single batch. Nodes are added over months or years, CPU generations change, and feature sets evolve. The result is a familiar situation for many operators: you can run VMs perfectly fine on each node, but live migration becomes fragile because the guest CPU features exposed on one host don’t necessarily exist on another.
That’s the operational gap ProxCLMC (Prox CPU Live Migration Checker) aims to close — in a simple, automated, and reproducible way.
Why CPU compatibility breaks live migration in practice
In Proxmox VE, VM CPU configuration is flexible. You can pick a conservative CPU model, or you can expose a host’s full feature set. The second approach can be tempting — “use host, get maximum performance” — but in a heterogeneous cluster it can quickly become a trap.
If a VM is configured in a way that makes it rely on CPU features that only exist on a subset of nodes, migration may fail outright. Even worse, some configurations can lead to inconsistent behavior or performance degradation depending on where the VM is running. Mixed clusters are exactly where operators end up spending time comparing CPU flags, documenting “safe” baselines, and building their own internal rules for what CPU types can be used.
In other virtualization ecosystems, similar problems are addressed with cluster-wide CPU compatibility baselines (the classic reference being VMware’s EVC concept). Proxmox VE, however, doesn’t currently ship a built-in mechanism that automatically detects a compatible CPU baseline across all cluster members. That leaves administrators with manual comparison and operational experience.
ProxCLMC is designed to automate that manual process — and deliver a deterministic answer.
What ProxCLMC is (and what it is not)
ProxCLMC is a small, focused tool that answers one question:
“Which VM CPU type can I safely use across all nodes in this Proxmox VE cluster, so live migration remains reliable?”
It does this by inspecting every node in the cluster and calculating the highest CPU compatibility level supported by all nodes — essentially the “lowest common denominator,” but expressed in a way that maps cleanly to Proxmox/QEMU CPU models.
Important nuance: ProxCLMC does not automatically reconfigure your Proxmox cluster or edit VM settings. Instead, it gives you a clear output you can apply operationally (for example, as a standard CPU model for VM templates, or as a baseline for specific workloads that must be freely migratable).
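Applying the recommended baseline is then a per-VM (or per-template) change. As a sketch, assuming Proxmox's standard `qm` CLI and a placeholder VM ID, a tiny helper could look like this:

```shell
# Sketch of applying a recommended baseline with Proxmox's qm CLI.
# The VM ID and CPU level below are placeholders; run this on the node
# that currently hosts the VM. The change takes effect on the next VM start.
set_vm_cpu() {
    vmid="$1"
    cpu="$2"
    qm set "$vmid" --cpu "$cpu"
}

# Example (hypothetical VM 100):
# set_vm_cpu 100 x86-64-v3
```

The same `--cpu` setting applied to a template means every VM cloned from it starts out freely migratable across the cluster.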
How ProxCLMC works under the hood
The workflow is deliberately pragmatic, designed to integrate into existing Proxmox VE clusters without extra agents, daemons, or configuration changes.
- Cluster discovery via corosync.conf: ProxCLMC parses the local corosync.conf on the node where you run it. That means it automatically discovers the cluster members without relying on external inventories, spreadsheets, or manual node lists.
- Remote inspection over SSH: once nodes are identified, the tool establishes an SSH connection to each one and reads /proc/cpuinfo. This file is a direct, authoritative view of the CPU capabilities exposed by the kernel, including the supported CPU flags.
- Mapping CPU flags to standardized x86-64 baselines: ProxCLMC extracts the relevant flags and evaluates them against well-defined x86-64 baseline definitions aligned with the CPU models supported by Proxmox VE and QEMU, including x86-64-v1, x86-64-v2-AES, x86-64-v3, and x86-64-v4.
- Computing the cluster-wide CPU type: each node is classified by the highest baseline it supports, and ProxCLMC then calculates the baseline shared by every node. That shared baseline becomes the recommended CPU type for the whole cluster: the maximum you can safely use while maintaining unrestricted live migration between all nodes.
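The per-node classification step can be sketched in a few lines of POSIX shell. This is an illustration, not ProxCLMC's actual code: the flag sets below are abbreviated subsets of the psABI x86-64 level definitions, and the real tool's tables may differ in detail.

```shell
# Minimal sketch: classify a host by x86-64 microarchitecture level from
# its /proc/cpuinfo flags. Flag sets are abbreviated for illustration.
classify_level() {
    FLAGS=" $1 "   # space-separated flag list, padded for matching
    _has_all() {
        for f in $1; do
            case "$FLAGS" in *" $f "*) ;; *) return 1 ;; esac
        done
        return 0
    }
    V2="cx16 lahf_lm popcnt sse4_1 sse4_2 ssse3"
    V3="$V2 avx avx2 bmi1 bmi2 f16c fma movbe xsave"
    V4="$V3 avx512f avx512bw avx512cd avx512dq avx512vl"
    if   _has_all "$V4"; then echo x86-64-v4
    elif _has_all "$V3"; then echo x86-64-v3
    elif _has_all "$V2"; then echo x86-64-v2
    else echo x86-64-v1
    fi
}

# Classify the local host (ProxCLMC does this over SSH for every node):
classify_level "$(awk -F: '/^flags/ {print $2; exit}' /proc/cpuinfo)"
```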
A typical output looks like this (example format):
test-pmx01 | 10.10.10.21 | x86-64-v3
test-pmx02 | 10.10.10.22 | x86-64-v3
test-pmx03 | 10.10.10.23 | x86-64-v4
Cluster CPU type: x86-64-v3
This is the key operational value: even if one node can do x86-64-v4, your safe baseline for “migrate-anywhere” VMs remains x86-64-v3 as long as at least one node tops out there.
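The reduction itself is just a minimum over the per-node levels. Since the level names happen to order correctly under a version sort, the idea can be sketched in one line (node results hardcoded here for illustration):

```shell
# Sketch of the reduction step: the cluster baseline is the lowest
# per-node level. A version sort orders the level strings correctly,
# so the minimum is simply the first line of sorted output.
printf '%s\n' x86-64-v3 x86-64-v3 x86-64-v4 | sort -V | head -n 1
# prints: x86-64-v3
```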
Designed for operations: deterministic output and pipeline-friendly usage
One of the most practical features in ProxCLMC is that it’s built for real operations, not just “nice-to-have” diagnostics.
- It lists each node with the baseline it supports.
- It prints the cluster-wide CPU type clearly.
- It also provides a pipeline-friendly mode, --list-only, which returns only the final CPU type on stdout (for scripting, CI checks, cluster validation steps, or pre-maintenance runbooks).
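For example, a runbook guard built on --list-only might look like this. The helper name and the surrounding script are hypothetical; only the `proxclmc --list-only` invocation comes from the tool's documented interface:

```shell
# Hypothetical pre-maintenance guard: abort automation if the cluster
# baseline differs from what our VM templates assume.
# (check_baseline is an illustrative helper, not part of ProxCLMC.)
check_baseline() {
    expected="$1"
    actual="$(proxclmc --list-only)"
    if [ "$actual" != "$expected" ]; then
        echo "cluster CPU baseline is '$actual', expected '$expected'" >&2
        return 1
    fi
}

# Usage in a maintenance script:
# check_baseline x86-64-v3 && start_mass_migration
```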
That small design choice matters because CPU compatibility isn’t just a one-time check — it’s something you want to validate:
- before adding a new node,
- after hardware refresh cycles,
- when you inherit mixed equipment,
- when standardizing VM templates,
- or before maintenance events that rely on mass migration.
Installation options: repository-based or offline package
ProxCLMC is open source, written in Rust, and licensed under GPLv3. It is distributed as a Debian package for easy deployment in Debian-based environments, including Proxmox VE.
Requirements
Before installation and use, the key prerequisites are:
- A Proxmox VE cluster
- SSH authentication between nodes (passwordless SSH is recommended for smooth automation)
- Network connectivity between all cluster members
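Both SSH prerequisites are easy to verify up front. A simple pre-flight sketch (node names are examples from earlier in this article):

```shell
# Hypothetical pre-flight check before running ProxCLMC: confirm that
# passwordless SSH works for every node. BatchMode=yes makes ssh fail
# immediately instead of prompting for a password.
check_ssh_reachability() {
    rc=0
    for node in "$@"; do
        if ssh -o BatchMode=yes -o ConnectTimeout=5 "$node" true 2>/dev/null; then
            echo "$node: ok"
        else
            echo "$node: ssh failed"
            rc=1
        fi
    done
    return $rc
}

# check_ssh_reachability test-pmx01 test-pmx02 test-pmx03
```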
Install via Debian repository (recommended)
The project is distributed via gyptazy’s Debian repository (the same repo family used for other tooling like ProxLB). The documented installation pattern is:
- Add the repository
- Import the signing key
- Install proxclmc via APT
Install via .deb package (offline-friendly)
For environments without direct repo access, ProxCLMC can also be installed from a prebuilt .deb downloaded from gyptazy’s CDN and installed with dpkg.
The project documentation shows versions such as proxclmc_1.0.0_amd64.deb, and the GitHub README example also references proxclmc_1.2.0_amd64.deb, indicating active iteration and packaging updates around early January 2026.
What changes in day-to-day Proxmox administration
ProxCLMC doesn’t “make Proxmox faster” by itself. Its value is more important than that: it reduces uncertainty.
- Fewer failed live migrations caused by CPU feature mismatches.
- More predictable VM behavior across nodes.
- A standardized, documented baseline you can apply to templates and production VMs.
- Faster decision-making when expanding a cluster: you can immediately see whether a new node will force a lower cluster baseline.
- Better operational hygiene: you stop relying on tribal knowledge and start relying on repeatable checks.
It’s also a reminder of a broader truth in open infrastructure: when a missing “enterprise feature” is a real operational pain point, the open-source ecosystem often fills the gap with tools that are transparent, auditable, and narrowly engineered to solve the problem well.
Frequently asked questions
What is the main benefit of ProxCLMC in a Proxmox VE cluster?
It automatically determines the maximum CPU compatibility baseline shared by all nodes, so administrators can choose a VM CPU type that supports reliable live migration across the entire cluster.
When should you run ProxCLMC in production?
Run it whenever cluster hardware changes: adding/replacing nodes, after refresh cycles, or before maintenance windows that rely on mass live migrations. It’s also useful when standardizing VM templates for mixed-hardware environments.
Does ProxCLMC improve VM performance?
Not directly. It prevents risky CPU configurations that may break migrations or cause inconsistent behavior. The chosen baseline may be more conservative than the newest node supports, but it trades a small amount of potential CPU feature exposure for reliability and mobility.
Can ProxCLMC handle heavily mixed CPU generations?
Yes — it will still compute the shared baseline. The result may be lower than you’d like, but that’s precisely the point: it gives you the hard truth you need to decide whether to segment workloads, redesign the cluster, or accept a conservative CPU type for migratable VMs.
Source: GitHub repository: gyptazy/ProxCLMC
