After more than two years with little visible movement, oVirt is back on the release radar with oVirt 4.5.7—a version that focuses on a very practical goal: keeping this long-running KVM-based virtualization management stack relevant while many organizations reassess infrastructure strategy and vendor dependency.
Version 4.5.7 was announced as generally available on January 13, 2026, and it sends a clear signal: even with the project now driven largely by the community, oVirt intends to remain a viable option for enterprises running KVM with centralized lifecycle management.
What’s new in oVirt 4.5.7, and why it matters
This isn’t a “flashy” release—but it directly addresses three areas that typically matter most in production:
1) Support for modern platforms (hosts and guests)
oVirt 4.5.7 expands support for current operating systems and packaging targets. The release announcement references availability for CentOS Stream 9 and 10, as well as RHEL 9/10 and derivatives, and specifically highlights the newly supported CentOS Stream 10 and AlmaLinux 10.
In virtualization, this is not a minor detail: host OS support impacts kernel behavior, drivers, CPU enablement, toolchains, and the ability to maintain a sustainable lifecycle.
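As a quick illustration of what that planning can look like in practice, the sketch below uses the oVirt Python SDK (ovirtsdk4) to inventory which OS each host is actually running before you commit to an EL9/EL10 move. The engine URL, credentials, and CA file are placeholders, and this is a minimal sketch rather than an official readiness check.

```python
# Minimal sketch: inventory host OS versions through the engine REST API (ovirtsdk4).
# The engine URL, credentials, and CA file below are placeholders for your environment.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # hypothetical engine URL
    username='admin@internal',
    password='REDACTED',
    ca_file='ca.pem',
)

try:
    hosts_service = connection.system_service().hosts_service()
    for host in hosts_service.list():
        os_info = host.os  # may be None for hosts that have never been activated
        os_type = os_info.type if os_info else 'unknown'
        os_version = (os_info.version.full_version
                      if os_info and os_info.version else 'unknown')
        print(f'{host.name}: {os_type} {os_version}')
finally:
    connection.close()
```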
2) A security fix: CVE-2024-7259
This release includes a fix for CVE-2024-7259.
The NVD (NIST) entry associates the issue with oVirt Engine and lists versions prior to 4.5.7 as affected, along with additional scope details.
If you run oVirt in production, this alone can justify scheduling a maintenance window.
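As a quick pre-check before scheduling that window, the engine’s own version can be read from its REST API. The sketch below assumes the oVirt Python SDK (ovirtsdk4) and placeholder connection details; treat it as an illustration, not a formal vulnerability assessment.

```python
# Minimal sketch: read the running engine version and flag anything older than 4.5.7.
# Assumes ovirtsdk4 is installed and the URL/credentials are replaced with real ones.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # hypothetical engine URL
    username='admin@internal',
    password='REDACTED',
    ca_file='ca.pem',
)

try:
    api = connection.system_service().get()
    full = api.product_info.version.full_version or ''
    print(f'Engine reports version {full}')
    # full_version usually looks like "4.5.6-1.el9"; compare only the dotted numeric prefix
    numeric = tuple(int(p) for p in full.split('-')[0].split('.') if p.isdigit())
    if numeric and numeric[:3] < (4, 5, 7):
        print('Older than 4.5.7: CVE-2024-7259 is unfixed here; plan a maintenance window.')
finally:
    connection.close()
```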
3) Java stack modernization and operational improvements
The project states that all Java code now builds with Java 21, while retaining compatibility with Java 11.
It also includes “day-to-day operations” improvements—for example, faster host reconnection after reboot (instead of a static 10-minute wait), VM start behavior tweaks (PCIe port reservation), UI/admin enhancements, and cluster-level adjustments.
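The faster reconnection logic itself lives inside the engine, so there is nothing for operators to configure. As a complementary operator-side habit, the sketch below (again the oVirt Python SDK, with a hypothetical host name host01 and placeholder connection details) polls a rebooted host’s status instead of waiting a fixed interval before checking on it.

```python
# Minimal sketch: poll a rebooted host's status rather than waiting a fixed interval.
# 'host01' and the connection details are placeholders; adapt them to your environment.
import time

import ovirtsdk4 as sdk
from ovirtsdk4 import types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # hypothetical engine URL
    username='admin@internal',
    password='REDACTED',
    ca_file='ca.pem',
)

try:
    hosts_service = connection.system_service().hosts_service()
    matches = hosts_service.list(search='name=host01')  # search syntax as in the Admin Portal
    if matches:
        host_service = hosts_service.host_service(matches[0].id)
        for _ in range(60):  # up to ~10 minutes at 10-second intervals
            status = host_service.get().status
            print(f'host01 status: {status.value}')
            if status == types.HostStatus.UP:
                break
            time.sleep(10)
finally:
    connection.close()
```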
CPU compatibility: keeping pace with modern servers
A major theme in 4.5.7 is staying current on server hardware. The announcement lists new supported CPU families, including:
- AMD EPYC: Milan, Rome, and Genoa
- Intel: Sapphire Rapids
- IBM: POWER10
For organizations refreshing hardware every 3–5 years, this is essential: without CPU enablement, the platform effectively freezes.
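To see where those newer families would actually land in an existing deployment, it can help to dump each cluster’s current CPU type and compatibility level. The snippet below is another hedged sketch built on the oVirt Python SDK (ovirtsdk4) with placeholder connection details; the exact CPU type strings your engine offers depend on its version and cluster level.

```python
# Minimal sketch: list each cluster's CPU type and compatibility level via ovirtsdk4.
# Connection details are placeholders; adapt them to your engine.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # hypothetical engine URL
    username='admin@internal',
    password='REDACTED',
    ca_file='ca.pem',
)

try:
    clusters_service = connection.system_service().clusters_service()
    for cluster in clusters_service.list():
        cpu_type = cluster.cpu.type if cluster.cpu else 'not set'
        level = (f'{cluster.version.major}.{cluster.version.minor}'
                 if cluster.version else 'unknown')
        print(f'{cluster.name}: CPU type "{cpu_type}", compatibility {level}')
finally:
    connection.close()
```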
Important note: oVirt Node NG
The release announcement includes a direct recommendation: users of oVirt Node NG are strongly encouraged to migrate to another approach “for stability and security,” due to how that product is structured.
For sysadmins, this is a warning sign worth taking seriously—especially if you standardize on minimal host images or appliance-style deployments.
Comparison: oVirt vs. common alternatives
oVirt remains compelling if you want a classic, centralized KVM manager with a familiar architecture. But the broader ecosystem has diversified significantly. Here is a practical, high-level comparison.
Quick positioning table (practical view)
| Platform | Focus | Strengths | Watch-outs | Typical fit |
|---|---|---|---|---|
| oVirt | Centralized KVM virtualization management | Historically mature, “enterprise KVM manager” model | Smaller mindshare today; trajectory depends on community pace | KVM environments that want a classic, centralized control plane |
| Proxmox VE | KVM + containers (LXC) with integrated management | Fast adoption, very large community, pragmatic clustering | Operational model differs from oVirt/RHV in places | SMB to mid-market, edge, cost-sensitive virtualization |
| OpenNebula | Private cloud (IaaS) over hypervisors | Strong cloud-oriented governance and multi-tenant options | Requires more “cloud mindset” than classic virtualization | Teams building private cloud portals and governance |
| OpenStack (with KVM) | Large-scale, modular cloud platform | Very powerful at scale; advanced networking/storage | High operational complexity | Large orgs with dedicated cloud operations teams |
| Harvester (HCI on Kubernetes) | HCI + VM management via cloud-native approach | Kubernetes-aligned platform convergence | Requires Kubernetes operating model | Organizations standardizing on Kubernetes |
| VMware vSphere (proprietary) | Enterprise virtualization “classic” | Mature ecosystem, broad integrations | Cost/licensing and vendor dependency | Enterprises prioritizing continuity and a long-established standard |
| Nutanix AHV (proprietary) | HCI with integrated hypervisor | Unified HCI operations and UX | Platform dependence and cost | Enterprises adopting HCI as the default architecture |
How to choose without turning it into ideology
- If you want to keep a centralized, traditional KVM control plane and you have oVirt/RHV experience, oVirt still makes sense—especially with renewed platform and security maintenance.
- If your priority is ease of deployment and streamlined day-to-day ops, many organizations gravitate toward Proxmox VE, particularly for consolidation and edge scenarios.
- If you need multi-tenant private cloud governance, the decision typically narrows to OpenNebula vs. OpenStack, where operational complexity becomes the deciding factor.
Bottom line
oVirt 4.5.7 is not a marketing-heavy release, but it matters in the real world: a return after a long gap, modern OS/CPU enablement, a specific CVE fix, and pragmatic improvements that affect uptime and operations.
For teams running KVM with a classic management approach, it’s a release worth evaluating. For everyone else, it’s also a useful signal: oVirt remains alive, but its long-term relevance will depend on community velocity, adoption, and how well it aligns with today’s virtualization and cloud operating models.
