If you're managing virtual machines and containers with Proxmox Virtual Environment, version 9.0 brings some substantial updates worth understanding before you upgrade. Released in August 2025, this marks Proxmox's 20th year and includes changes that affect everything from storage to networking.
Let's explore what's different between versions 8 and 9, and walk you through the upgrade process step by step.
The foundation: Debian 13 Trixie
The most significant change in Proxmox 9.0 is the move from Debian 12 Bookworm to Debian 13 Trixie. This represents a major distribution upgrade that brings updated libraries, newer package versions, and improved hardware support across the board. Debian 13 provides the foundation for years of future updates and security patches.
What does this mean practically? You get better compatibility with modern hardware, updated programming languages and system tools, and support for newer software that requires recent dependencies. The shift also extends your security update timeline, giving you more breathing room before the next major upgrade cycle becomes necessary.
Kernel and virtualization updates
Proxmox 9.0 ships with Linux kernel 6.14.8-2 as the stable default, a significant jump from the 6.8 kernel in Proxmox 8.3. The newer kernel brings improved hardware compatibility, particularly for recent CPUs and NVMe storage devices. If you're running newer Intel or AMD processors, you'll benefit from updated microcode, better power management, and improved performance optimizations.
QEMU, which handles the actual virtual machine emulation, jumps to version 10.0.2. The update includes better guest operating system compatibility, improved VirtIO drivers, and more efficient instruction emulation. You'll likely notice better VM performance, particularly for I/O-intensive workloads and graphics-heavy applications.
LXC container support moves to version 6.0.4, bringing refined cgroup v2 integration and better resource isolation. If you're running containers alongside VMs, the updated LXC version provides more predictable behavior when allocating CPU and memory resources. Container networking also sees improvements for complex configurations with VLANs or multiple bridges.
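If you want to confirm which of these component versions a node is actually running, before or after an upgrade, you can query each directly from the shell:
uname -r                       # running kernel, 6.14.x on Proxmox 9.0
qemu-system-x86_64 --version   # QEMU, 10.0.x on Proxmox 9.0
lxc-info --version             # LXC, 6.0.x on Proxmox 9.0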
Storage improvements: LVM snapshots
One of the most requested features finally arrives in Proxmox 9.0: snapshot support for thick-provisioned LVM shared storage, currently flagged as a technology preview. This matters significantly for enterprise environments using iSCSI or Fibre Channel SAN storage, where snapshots previously weren't available without vendor-specific integrations.
The implementation uses volume chains, where each snapshot volume only records differences from its parent. This approach works independently of your storage hardware, giving you consistent snapshot functionality across different SAN vendors. Directory, NFS, and CIFS storages also gain support for snapshots as volume chains.
For organizations with traditional SAN infrastructure that have relied on clustered file systems, this closes a major feature gap. You can now manage snapshots consistently across your environment without depending on proprietary storage vendor tools. That independence simplifies backup workflows and disaster recovery planning.
Storage backend updates
For those running Ceph storage, Proxmox 9.0 includes Ceph Squid version 19.2.3. This brings performance optimizations to the BlueStore backend, improved metadata handling, and more efficient recovery operations when OSDs fail or rejoin the cluster. If you're running hyper-converged infrastructure where Proxmox nodes also serve as Ceph storage nodes, you'll see better IOPS characteristics and improved latency for random workloads.
ZFS support now includes OpenZFS 2.3.3, which introduces a highly anticipated feature: adding new devices to existing RAIDZ pools with minimal downtime. Previously, expanding a RAIDZ pool required creating a new pool and migrating data. Now you can add capacity to existing pools more flexibly, making ZFS more practical for growing storage needs.
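As a sketch of the expansion workflow, assuming a pool named tank whose first vdev is raidz1-0 and a new disk at /dev/sdX (all three names are examples), you attach the disk directly to the RAIDZ vdev and then watch the reflow progress:
zpool attach tank raidz1-0 /dev/sdX   # pool, vdev, and device names are examples
zpool status tank                     # shows expansion/reflow progress
The pool stays online throughout; note that blocks written before the expansion keep their original data-to-parity ratio until they're rewritten.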
SDN fabric support
Proxmox 9.0 introduces fabric support for Software-Defined Networking, designed to simplify the configuration and management of complex routed networks. This feature matters most for larger deployments where you need reliable multi-path networking with automatic failover capabilities.
SDN Fabrics supports two routing protocols: OpenFabric and OSPF. This lets you build two-layer spine-leaf network architectures with improved redundancy and performance. The feature simplifies management of dynamically routed networks, which work well as EVPN underlays or full-mesh networks for Ceph storage traffic.
The implementation provides automatic failover across multiple NICs, helping you build more resilient network configurations. For organizations running complex network topologies, the SDN Fabrics feature reduces configuration complexity while improving network reliability.
High availability affinity rules
Proxmox 9.0 introduces HA resource affinity rules, giving you fine-grained control over where resources run in your cluster. This addresses scenarios where you need to keep related VMs together or specifically separate them for redundancy.
For interdependent applications, like an application server and its database, you can configure rules to keep them on the same physical node to minimize network latency. For services requiring maximum redundancy, like multiple instances of the same critical application, rules can keep these instances on different nodes to improve fault tolerance.
The affinity rules work during normal operations and also influence behavior during HA failovers. This helps you optimize performance and resiliency for complex, interconnected workloads without manual intervention during maintenance or failure scenarios.
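Rules are managed in the web interface under the datacenter's HA panel. As a purely hypothetical sketch of what a resource affinity definition amounts to (the stanza names and values below are illustrative assumptions, not documented syntax; consult the ha-manager reference documentation for the real format):
# hypothetical sketch only -- keeps an app server and its database together
resource-affinity: app-with-db
    resources vm:100,vm:101
    affinity positive
A corresponding negative rule does the opposite, pushing redundant instances of the same service onto different nodes.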
Mobile interface redesign
The Proxmox VE mobile interface gets a complete rework in version 9.0, rebuilt using the Proxmox widget toolkit powered by the Rust-based Yew framework. The redesigned interface provides quick access to service overviews and includes essential management functions like starting and stopping VMs and containers.
The mobile interface isn't meant to replace the full web interface, but it makes basic management tasks much easier when you're away from your desk. You can check cluster status, view resource usage, and perform common operations directly from your phone's browser without needing to pinch and zoom your way through the desktop interface.
What breaks and what to watch
Not everything transitions seamlessly from Proxmox 8 to 9. The most significant consideration involves Ceph versions; if you're running Ceph, you'll want to verify you're on a supported version before upgrading Proxmox itself. Ceph Squid requires careful planning for the upgrade path.
Authentication configurations might need attention. The PAM stack updates in Debian 13 can sometimes cause issues with custom LDAP integrations or complex directory services setups. If you've configured external authentication, test thoroughly in a non-production environment first.
Network configuration also deserves attention. The newer kernel and systemd in Debian 13 can change network interface names, which silently breaks entries in /etc/network/interfaces; consider pinning interface names before upgrading if the checker flags them. Most standard bridge and bonding configurations work without modification, but complex setups might need adjustments. If you have intricate network configurations with VLANs, multiple bonds, or custom routing, review these carefully during upgrade testing.
Some older kernel modules get deprecated or removed. If you're relying on out-of-tree drivers for specialized hardware, check compatibility before upgrading production systems. The same applies to custom kernel modules you might have compiled for specific purposes.
Preparing for the upgrade
Before touching production systems, run the pre-upgrade checker. Proxmox 8's repositories provide a tool called pve8to9 that scans your configuration and identifies potential issues. Install it if you haven't already:
apt update
apt install pve8to9
Then run a full check:
pve8to9 --full
The tool examines your system configuration, package versions, storage setup, and cluster configuration. It highlights anything that might cause problems during the upgrade. Pay attention to warnings about deprecated features or incompatible configurations. Don't ignore these warnings; they're based on real issues that other users have encountered.
Take backups of your configuration files. At minimum, back up /etc/pve, /etc/network/interfaces, and any custom systemd units or scripts you've added. Don't rely solely on the cluster configuration database; have offline copies stored somewhere safe, preferably on a different physical system.
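A minimal sketch of that offline copy; backup-host:/safe/location is a placeholder for wherever you keep such archives:
tar czf /root/pve-config-backup-$(hostname)-$(date +%F).tar.gz /etc/pve /etc/network/interfaces
scp /root/pve-config-backup-*.tar.gz backup-host:/safe/location/   # placeholder destination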
If you're running Ceph, verify your version and upgrade path. Check your Ceph version with:
ceph version
Follow the official Ceph upgrade documentation for your specific version path. Don't skip Ceph versions; each major release has its own upgrade requirements and compatibility considerations.
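The plural form of that command is also worth knowing: it reports what every daemon across the cluster is running, which makes stragglers easy to spot mid-upgrade:
ceph versions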
Test your backup system by actually restoring a VM or container. Discovering backup problems during an upgrade is terrible timing. Verify that restored VMs boot correctly and that applications function properly. This validates both your backup process and your ability to recover if something goes wrong.
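For a QEMU VM, a test restore to a spare VMID looks like this, with placeholders as elsewhere in this guide; the --unique flag regenerates MAC addresses so the test clone can't conflict with the original:
qmrestore /path/to/vzdump-qemu-<vmid>-<timestamp>.vma.zst <unused-vmid> --storage <storage> --unique
Boot the restored VM, confirm the applications work, then remove it.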
The upgrade process
Once you've completed the pre-upgrade checks and backups, the actual upgrade follows Debian's standard distribution upgrade path. Start by updating your APT sources.
Edit /etc/apt/sources.list and replace all instances of bookworm with trixie. Do the same for /etc/apt/sources.list.d/pve-enterprise.list (or the no-subscription repository file if you're using that instead):
deb http://download.proxmox.com/debian/pve trixie pve-no-subscription
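If you prefer a one-liner, something like this rewrites every bookworm reference in one pass; review the files afterwards, since third-party repository files may also match:
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list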
Update your package lists:
apt update
You'll see notices about repository changes, which is expected during a major version upgrade. Review any warnings carefully; some might indicate configuration issues that need attention before proceeding.
Start the distribution upgrade:
apt dist-upgrade
This command downloads new packages and handles dependency resolution. The process typically takes 20 to 90 minutes depending on your internet connection and system speed. You'll see various package configuration prompts during the upgrade.
When prompted about configuration files, you'll need to decide whether to keep your version or accept the maintainer's version. For most system files, accepting the maintainer's version is safer unless you've made specific customizations you need to preserve. Review each prompt carefully; blindly accepting everything can overwrite important customizations, while blindly keeping everything can leave you with outdated configurations.
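When you keep your own version, dpkg saves the maintainer's copy alongside it with a .dpkg-dist suffix; when you accept theirs, your old file is kept as .dpkg-old. After the upgrade you can find and reconcile these leftovers at leisure:
find /etc -name '*.dpkg-dist' -o -name '*.dpkg-old'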
After package installation completes, reboot the system:
reboot
The reboot loads the new kernel and starts services with updated libraries. Don't skip this step; some services won't function correctly until the system restarts with the new kernel and updated systemd.
Upgrading a cluster
Cluster upgrades require more coordination than single-node upgrades. You can't upgrade all nodes simultaneously without risking cluster communication problems. The recommended approach is sequential: upgrade one node, verify it's working correctly, then move to the next.
Before upgrading any node, migrate all running VMs and containers to other cluster members. Use live migration where possible to minimize service disruption; for a running VM, that means passing the --online flag:
qm migrate <vmid> <target-node> --online
Containers can't live-migrate; with the --restart flag, a running container is stopped, migrated, and started again on the target:
pct migrate <ctid> <target-node> --restart
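If a node hosts many guests, a small loop saves repetition. This sketch assumes qm list's default column layout (VMID first, status third) and that <target-node> is replaced with a real node name before you run it:
for vmid in $(qm list | awk 'NR>1 && $3=="running" {print $1}'); do
  qm migrate "$vmid" <target-node> --online   # running VMs only; see pct migrate for containers
done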
Start with a non-primary node if possible. This gives you a fallback if something goes wrong during the first upgrade. After upgrading the first node and confirming it rejoins the cluster successfully, move to the next node.
The cluster tolerates mixed versions during the upgrade window, but don't leave nodes on different versions longer than necessary. Complete the entire cluster upgrade within a reasonable timeframe, preferably during a single maintenance window if your environment allows.
Monitor cluster quorum throughout the process:
pvecm status
With a three-node cluster, you need at least two nodes operational at all times. Larger clusters give you more flexibility, but the principle remains: maintain quorum or risk cluster communication failures that can affect running workloads.
Post-upgrade verification
After all nodes are upgraded, verify that everything works as expected. Check that VMs start properly, containers respond correctly, and storage pools remain accessible. The web interface should show version 9.0 in the summary page.
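A quick way to confirm this from the shell on each node:
pveversion -v
The output lists the running kernel plus the version of every Proxmox component, so a node that missed part of the upgrade stands out immediately.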
Test live migration between nodes if you haven't already during the upgrade process:
qm migrate <vmid> <target-node> --online
This confirms cluster communication works correctly and shared storage configurations survived the upgrade intact.
If you're running Ceph, check cluster health thoroughly:
ceph status
All placement groups should show as active and clean, with no OSDs marked down. Run a manual scrub to verify data integrity:
ceph pg scrub <pg-id>
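If you need a PG ID to feed that command, list the placement groups first; the first column of the output is the PG ID:
ceph pg ls | head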
Verify your backup schedules are still running. Sometimes systemd timer updates can affect scheduled tasks. Run a manual backup job to confirm the entire backup chain works:
vzdump <vmid> --mode snapshot --compress zstd
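On recent Proxmox releases, scheduled backup jobs are executed by the pvescheduler service and stored in /etc/pve/jobs.cfg, so two quick checks cover most of it:
systemctl status pvescheduler
cat /etc/pve/jobs.cfg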
Test the new LVM snapshot functionality if you're using shared LVM storage. Create a test snapshot, verify it works, and then remove it. This validates that the new snapshot features work correctly with your specific storage configuration.
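A minimal round-trip, assuming the VM's disks live on the shared LVM storage (testsnap is an arbitrary snapshot name):
qm snapshot <vmid> testsnap
qm listsnapshot <vmid>
qm delsnapshot <vmid> testsnap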
If you've configured SDN, verify that network zones and VNets still function correctly. Test connectivity between VMs in different zones and verify that firewall rules still apply as expected.
When to upgrade
Upgrade timing depends on your environment and requirements. If you need LVM snapshot support for iSCSI or Fibre Channel storage, that's a compelling reason to upgrade sooner. The improved SDN fabric support matters most if you're building or expanding complex network architectures.
However, if your Proxmox 8 installation is stable and meets your current needs, there's no immediate pressure. Proxmox 8 continues receiving security updates and bug fixes until its end-of-life date, giving you time to plan the migration carefully.
For production environments, schedule upgrades during maintenance windows when you can tolerate potential issues. If possible, test the upgrade on a development cluster first. This lets you identify any environment-specific problems before they affect production workloads.
Consider hardware refresh cycles too. If you're planning new hardware deployments in the coming months, installing Proxmox 9.0 directly on new nodes makes more sense than upgrading existing systems just to upgrade again soon after.
Frequently asked questions about Proxmox upgrades
Can I skip from Proxmox 7 to Proxmox 9?
No, you can't skip major versions. Each major release includes changes that require the intermediate upgrade step for proper configuration migration. You'll need to upgrade from 7 to 8 first, then from 8 to 9. Attempting to skip versions will likely leave your system in a broken state with incompatible configurations.
How long does a Proxmox upgrade take?
For a single node, expect 30 to 90 minutes depending on internet speed and system performance. Cluster upgrades take longer because you're upgrading nodes sequentially and migrating workloads between them. A three-node cluster might take 3 to 5 hours to upgrade completely, including migration time and verification steps.
Will my virtual machines keep running during the upgrade?
No, the node being upgraded requires a reboot, so you'll need to migrate VMs to other cluster nodes first. In a single-node setup, expect downtime during the upgrade and reboot process, typically 40 to 75 minutes. Plan accordingly and notify users about the maintenance window well in advance.
Do I need to upgrade Ceph before upgrading Proxmox?
If you're using Ceph storage, verify you're running a compatible version before upgrading Proxmox. Proxmox 9.0 includes Ceph Squid, but the upgrade path depends on your current Ceph version. Check the Ceph documentation for the proper upgrade sequence, as attempting to run incompatible versions can cause storage access problems.
What happens to my backups after upgrading?
Existing backups remain fully compatible and accessible after upgrading. The backup format hasn't changed between versions 8 and 9. However, verify that scheduled backups continue running correctly after the upgrade, as systemd timer changes can occasionally affect automated tasks. Run a manual backup after upgrading to confirm everything works.
Can I roll back if the upgrade fails?
Rolling back a major Proxmox upgrade is difficult and generally not recommended. The Debian distribution upgrade modifies system libraries and configurations in ways that aren't easily reversible. This is why testing in a non-production environment and maintaining good backups are so important. If you encounter serious problems, restoring from backups or rebuilding the node is often cleaner than attempting to downgrade.
Do the new LVM snapshots work with my existing SAN?
The new LVM snapshot functionality in Proxmox 9.0 works with any iSCSI or Fibre Channel SAN that presents LVM volumes to Proxmox. The implementation is storage-vendor-independent, so it should work regardless of your SAN manufacturer. Test thoroughly in a non-production environment first to verify compatibility with your specific storage configuration.
Conclusion
Proxmox 9.0 delivers meaningful improvements that address real-world needs. The Debian 13 foundation will provide stability for years to come, while features like LVM snapshot support and SDN fabrics solve problems that enterprise users have long been working around. And the upgrade process itself is straightforward.
At the time of writing (late 2025), there's no pressing need to upgrade immediately, but it's worth planning the move soon.
Thanks for reading! If you're looking for infrastructure to run Proxmox or need hardware that handles demanding virtualization workloads, xTom provides dedicated servers and colocation services built for production environments. For smaller deployments or testing, V.PS offers NVMe-powered KVM VPS hosting that scales with your needs.
Ready to discuss your infrastructure needs? Contact our team to explore the right solution for your virtualization projects.