These lessons come from years of working across the Oracle virtualisation stack – deploying and maintaining OLVM, migrating from Oracle VM to OLVM, running Oracle workloads on VMware, and VMware-to-OLVM migrations. This isn’t a rewrite of Oracle’s documentation. It’s what we’ve learned from production environments where getting it wrong had real consequences.
Some of these are things we wish we’d known earlier. Others are things the documentation covers but doesn’t emphasise enough. All of them apply directly if you’re planning a VMware exit to OLVM.
Here’s what matters.
Keep Your Existing Network Configuration
The temptation on any platform migration is to “fix” the networking while you’re at it. Resist that urge.
Keep your existing network configuration in place where possible. OLVM supports VLANs and Open vSwitch, and in most cases your existing network architecture will map across without major changes. Introducing a network redesign into a hypervisor migration adds complexity and risk that isn’t necessary. Once you’re running on OLVM, moving to a new physical or virtual network design is a relatively easy, separate change.
When provisioning your KVM hosts, keep the initial host network configuration to the bare minimum required for OLVM Engine connectivity – typically just a management network and whatever VLAN or bonding is needed to get SSH working. Everything else – additional VLANs, guest networks, storage networks – should be added through the OLVM Engine once the host has been registered to the cluster.
This leads to a related point that’s easy to learn the hard way: don’t modify the network configuration on the KVM hosts directly. OLVM manages host networking through the Engine. If you SSH into a KVM host and start modifying the network config manually (whether with command-line utilities or by editing files directly), you’ll create a mismatch between what OLVM thinks the network looks like and what it actually looks like. That mismatch can be a nightmare to resolve, particularly when you have a complex network topology.
Make your network changes through OLVM. Let the platform manage its hosts.
Prepare Your Guests Before Migration
If you’re running Oracle Linux guests on VMware, update them to the latest OL release level and UEK (Unbreakable Enterprise Kernel) version before or immediately after you migrate. Don’t carry old kernel versions across to the new platform.
This isn’t just about currency. The latest UEK versions include virtio driver improvements, KVM-specific optimisations, and bug fixes that directly affect how well your guest VMs perform on OLVM. Migrating with an outdated kernel means you’re potentially running on a platform that’s optimised for virtio I/O with a kernel that isn’t taking full advantage of it.
If you’re running Windows-based VMs, make sure to install the oVirt guest drivers (VirtIO-Win) before or immediately after migration. Without these drivers, Windows guests won’t have access to the paravirtualised storage and network interfaces that OLVM relies on – meaning poor performance at best and unbootable VMs at worst.
Update first, migrate second.
Oracle Database VMs: Memory and Resource Configuration
Don’t Enable Memory Ballooning
Memory ballooning lets OLVM reclaim unused memory from a guest VM and reallocate it to other guests. It’s a useful feature for general-purpose workloads.
Don’t enable it on Oracle Database VMs.
Oracle’s SGA and PGA memory management assumes it owns the memory it’s been allocated. If the balloon driver reclaims memory that Oracle believes it has available, you’ll see performance degradation that’s difficult to diagnose – it presents as I/O problems or buffer cache inefficiency rather than an obvious memory shortage. The database doesn’t know its memory has been taken away.
More broadly, Oracle does not support memory overcommit for any VMs running Oracle software. The same applies to WebLogic servers – Oracle recommends avoiding ballooning on WebLogic VMs as well. Use ballooning on your web servers and other workloads where the memory profile is more flexible.
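In OLVM, ballooning is a per-VM setting in the VM’s resource allocation options. On the KVM host you can double-check what was actually applied by looking for the memballoon device in the libvirt domain XML – a minimal verification sketch, using an embedded fragment in place of real `virsh dumpxml` output:

```python
import xml.etree.ElementTree as ET

def balloon_active(domain_xml: str) -> bool:
    """True if the libvirt domain XML defines a working memballoon
    device (i.e. ballooning is not disabled for this VM)."""
    balloon = ET.fromstring(domain_xml).find("./devices/memballoon")
    return balloon is not None and balloon.get("model", "none") != "none"

# Abridged fragment of the kind `virsh dumpxml <vm>` prints on a KVM host
sample = """
<domain type='kvm'>
  <devices>
    <memballoon model='none'/>
  </devices>
</domain>
"""
print(balloon_active(sample))  # False - what you want for a database VM
```

The same check works against any VM on the host, so it’s an easy thing to fold into a post-migration audit script.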
Configure HugePages and Start Database VMs First
Enable HugePages for Oracle Database VMs. HugePages reduce the overhead of managing Linux page tables for the large memory allocations that Oracle’s SGA requires, and can significantly improve database performance on virtualised environments.
OLVM’s “High-Performance” VM optimisation profile, available through the OLVM web interface, automatically configures a number of recommended settings for maximum efficiency – including HugePages-related options. Use it for database VMs.
One operational detail that’s easy to overlook: if your Oracle Database VMs are configured to use HugePages, start them first – before other VMs on the same host. HugePages memory must be allocated as a contiguous block. If other VMs start first and fragment the host’s memory, the database VMs may not get the clean chunk of memory they need, and HugePages allocation will fail. It’s a small sequencing detail that can cause disproportionate pain.
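For sizing, the arithmetic is simple: the configured page count has to cover the SGA, ideally with a little headroom. A quick sketch – the 5% headroom figure here is our own habit, not an Oracle recommendation, and the result goes into the guest’s vm.nr_hugepages sysctl:

```python
import math

def nr_hugepages(sga_bytes: int, hugepage_bytes: int = 2 * 1024 * 1024,
                 headroom: float = 0.05) -> int:
    """Pages needed to back the SGA, plus a small safety margin.
    2 MiB is the default hugepage size on x86_64; the 5% headroom
    is an assumption of ours, not an Oracle figure."""
    return math.ceil(sga_bytes * (1 + headroom) / hugepage_bytes)

# Example: a 64 GiB SGA with the default 2 MiB hugepage size
print(nr_hugepages(64 * 1024**3))  # 34407
```

Remember that this memory is reserved at allocation time and unavailable to anything else on the guest, so size it against the SGA rather than total VM memory.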
OLVM High Availability Settings for Oracle RAC VMs
If you’re running Oracle RAC on OLVM, the HA configuration for your database VMs requires specific attention. Oracle Clusterware manages its own high availability – it doesn’t need OLVM’s HA competing with it.
For RAC VMs, configure the following in OLVM:
- Highly Available: disabled
- Target Storage Domain for VM Lease: No VM Lease
- Resume Behavior: Kill
This matters more than it sounds. If these settings aren’t correct, any pause or suspend operation – which can happen during storage issues – could trigger Oracle Clusterware to fence the suspended VM. In the worst cases, this can lead to block corruption. Let Clusterware handle database HA. Keep OLVM out of its way.
Enable Virtual Queues for Network Performance
Virtio-net virtual queues (multiqueue) allow network traffic to be processed across multiple vCPUs rather than a single one. For VMs handling significant network throughput – database servers, application servers handling many concurrent connections, web infrastructure – this makes a measurable difference.
It’s not enabled by default. Enable it on any VM where network performance matters.
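Under the hood, the setting becomes a queues attribute on the virtio interface in the libvirt domain XML on the KVM host. A small sketch for verifying what was actually configured, using an abridged fragment in place of real `virsh dumpxml` output:

```python
import xml.etree.ElementTree as ET

def virtio_queue_counts(domain_xml: str) -> list[int]:
    """Queue count for each virtio NIC in a libvirt domain XML dump
    (1 where no multiqueue driver attribute is present)."""
    root = ET.fromstring(domain_xml)
    counts = []
    for iface in root.findall("./devices/interface"):
        model = iface.find("model")
        if model is None or model.get("type") != "virtio":
            continue
        driver = iface.find("driver")
        queues = driver.get("queues") if driver is not None else None
        counts.append(int(queues) if queues else 1)
    return counts

# Abridged fragment of the kind `virsh dumpxml <vm>` prints on a KVM host
sample = """
<domain type='kvm'>
  <devices>
    <interface type='bridge'>
      <model type='virtio'/>
      <driver name='vhost' queues='4'/>
    </interface>
  </devices>
</domain>
"""
print(virtio_queue_counts(sample))  # [4]
```

Inside the guest, `ethtool -l <interface>` shows whether the kernel is actually using the extra queues.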
Plan Your Oracle Database Storage Strategy
A VMware-to-OLVM migration is a natural point to think carefully about your Oracle Database storage architecture. The key consideration is whether your database storage sits inside or outside OLVM’s storage domains – and that decision has a direct impact on your migration approach.
Storage inside OLVM storage domains means your Oracle datafiles live on virtual disks managed by OLVM. This is operationally simpler – OLVM handles the storage lifecycle, snapshots, and VM portability. But it also means migrating those datafiles requires some form of copy or conversion as part of the migration process, which adds time and complexity to your cutover window.
Storage outside OLVM – presented directly to the guest via iSCSI LUNs, Fibre Channel, or NFS – reconnects into the OLVM environment quickly and easily. The guest VM moves to OLVM and reattaches to the same storage it was using before. For Oracle ASM configurations or large databases where minimising migration downtime is critical, this approach significantly simplifies the transition.
Two options worth considering for externally managed Oracle storage:
dNFS (Direct NFS): Oracle’s built-in NFS client bypasses the operating system’s NFS layer entirely, providing direct access to NFS storage from the database kernel. This eliminates OS-level NFS overhead and gives Oracle more control over I/O operations. If you’re running Oracle Database on NFS storage, dNFS typically provides better throughput and more predictable latency than kernel NFS.
Direct-attached LUNs: Presenting iSCSI or Fibre Channel LUNs directly to the guest VM is particularly relevant for Oracle ASM configurations where the database manages its own storage striping and redundancy.
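On the dNFS side, setup is light: link the Direct NFS client library into the Oracle home and describe the mounts in oranfstab. A sketch under our usual assumptions – the server name, addresses, and paths below are purely illustrative:

```
# Link the Direct NFS client library (database shut down first)
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on

# $ORACLE_HOME/dbs/oranfstab - one block per NFS server
server: nas01
path: 192.0.2.10
export: /export/oradata mount: /u02/oradata
```

After the next startup, the alert log notes that the instance is running with the Direct NFS ODM library, which is the quickest way to confirm dNFS is actually in play.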
Your underlying storage platform will also influence this decision. Different storage arrays and NAS appliances have different strengths – some handle iSCSI better, others are optimised for NFS, and some offer features like thin provisioning or replication that may factor into your migration planning. Consider carefully what capabilities your storage platform offers and how they align with OLVM’s storage domain options before committing to an approach.
When choosing between iSCSI and NFS for OLVM storage domains, don’t assume one is better than the other. The right choice depends on your existing infrastructure and workload patterns. Test and validate performance with your actual workloads before committing – we’ve seen cases where the expected winner wasn’t.
Want a straight answer on whether OLVM fits your environment? We work with organisations running production Oracle workloads on VMware who need to understand their options before renewal deadlines hit. Most assessments take 30 minutes. Book an assessment.
RAC on VMware: There’s No Simple Migration Path
This is the one that catches people off guard.
If you’re running Oracle RAC on VMware using shared VMDKs for the ASM disk groups, there is no direct conversion path from shared VMDK to OLVM virtual disk formats. You can’t just run virt-v2v and expect your RAC cluster to appear on the other side.
You have several options, and the right one depends on your environment:
Data Guard standby: Build the RAC cluster fresh on OLVM, configure Data Guard replication from VMware to OLVM, and switch over. This is the cleanest approach for production environments – you get a fully tested new environment and a controlled cutover with minimal downtime. You can also use a restore-based approach with RMAN full and incremental backups to roll forward a copy of the database on the target environment, which avoids the overhead of maintaining a live Data Guard configuration during the transition.
RMAN backup and restore: Back up on VMware, restore on OLVM. Straightforward but requires a maintenance window sized for a full restore. For large databases, this window can be significant.
RMAN recover copy on NFS: Use RMAN’s incremental merge capability to maintain an up-to-date copy on shared NFS storage accessible from both platforms. This reduces the final cutover window to the time needed for one last incremental apply. The migration then involves reattaching the NFS mounts to the new OLVM-hosted guests – the datafiles are already where they need to be.
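The mechanics behind the incremental-merge option are RMAN’s incrementally updated image copies. A sketch of the recurring job – the tag and NFS path are placeholders, and on the first run the backup step simply creates the level 0 image copies on the shared mount:

```
# Point disk channels at the shared NFS mount (path illustrative)
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/nfs/oradata/%U';

RUN {
  # Roll the standing image copy forward, then take the next increment
  RECOVER COPY OF DATABASE WITH TAG 'olvm_mig';
  BACKUP INCREMENTAL LEVEL 1
    FOR RECOVER OF COPY WITH TAG 'olvm_mig'
    DATABASE;
}
```

Run it on a schedule and the copy on NFS is never more than one increment behind, which is what keeps the final cutover window small.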
For large databases, you may need to use a combination of Oracle features as intermediary steps – online datafile move, ASM disk add/remove – to progressively migrate storage to the target environment rather than attempting a single cutover. Plan for this complexity upfront rather than discovering it mid-migration.
Each approach has trade-offs around downtime, complexity, and the amount of parallel infrastructure required. The key point is: don’t discover this limitation during migration planning. Know it going in and design your approach accordingly.
Backup Integration: Veeam-Specific Considerations
If you’re using Veeam for VM-level backups on OLVM, there are a few things to know – and an important scope clarification first.
Veeam is for backing up virtual machine disks – operating system volumes, application server disks, web server VMs, and other general-purpose workloads. It is not a substitute for RMAN when it comes to Oracle Database files. Oracle datafiles, archive logs, and control files should be backed up using RMAN – that’s what it’s designed for, and it understands Oracle’s consistency requirements in ways that a VM-level snapshot tool cannot. If your database storage is on guest-managed iSCSI or NFS (as recommended), Veeam won’t even see those disks.
With that scope in mind, for the VM disks that Veeam does manage:
Check your virtual disk format before you migrate. Many of the default migration methods and tools will create virtual disks in raw format. Veeam requires qcow2 disk format for incremental backups – if your VM disks end up in raw format, incremental backups won’t work and you’ll be running full backups every time, with obvious implications for backup windows and storage consumption. Converting disk format after migration can be time-consuming, so always check what output format the migration tools are writing before you start moving guests.
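Checking the format is quick: `qemu-img info` is the canonical tool, but qcow2 images also start with a fixed four-byte magic, so a dependency-free sanity check is easy to script. A minimal sketch, using a synthetic header rather than a real image:

```python
import os
import tempfile

QCOW2_MAGIC = b"QFI\xfb"  # first four bytes of every qcow2 image

def disk_format(path: str) -> str:
    """Crude probe: qcow2 images carry a fixed magic; anything else
    is treated as raw here. Use `qemu-img info` for the full story -
    this is just a quick sanity check with no dependencies."""
    with open(path, "rb") as f:
        return "qcow2" if f.read(4) == QCOW2_MAGIC else "raw"

# Demo with a synthetic header in place of a real virtual disk
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(QCOW2_MAGIC + b"\x00" * 100)
print(disk_format(tmp.name))  # qcow2
os.unlink(tmp.name)
```

Running something like this across a storage domain after the first few test migrations will tell you immediately whether your tooling is producing backup-friendly disks.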
Scheduling goes through OLVM Engine. Veeam drives all backup scheduling through the OLVM Engine, which is different from VMware where Veeam communicates directly with vCenter and ESXi hosts. In large environments with multiple clusters, this creates a bottleneck – all backup operations funnel through the Engine.
For large multi-cluster environments, consider the trade-off of running a self-hosted Engine versus a dedicated Engine VM. Also consider delegating backup operations to guest-level agents for clustered workloads rather than relying entirely on VM-level snapshots through the Engine.
Operational Lessons
Separate Test and Production Clusters
If your environment is large enough to justify it, seriously consider separate OLVM clusters for test and production. This isn’t just about workload isolation – it’s about having a safe place to test Linux and OLVM updates before they hit production hosts.
For critical security vulnerabilities, Oracle Ksplice can apply patches to KVM hosts without requiring a reboot – this covers the urgent cases. But standard host updates and kernel upgrades still require maintenance windows, which means live-migrating all VMs off the host first. Having a test cluster where you can validate these updates against representative workloads before rolling them into production reduces the risk of unexpected issues during maintenance.
The Engine Runs on PostgreSQL
OLVM Engine uses a PostgreSQL database to store all configuration, history, and operational data. As your environment grows, this database needs attention – maintenance, backups, tuning.
For small environments, the default configuration is adequate. For larger deployments, treat the Engine’s PostgreSQL instance as you would any production database: scheduled maintenance, monitored performance, regular backups, and tested restores.
The Engine is your single point of management for the entire OLVM environment. If it fails without a working backup, you’re managing KVM hosts individually until it’s rebuilt, which is an activity you want to avoid at all costs.
Start Simple, Layer Complexity
Don’t try to replicate your entire VMware environment’s sophistication on day one. Start with basic OLVM functionality – VM deployment, live migration, HA, standard networking – and become operationally comfortable with the platform before adding complexity.
Layer on advanced scheduling policies, complex storage configurations, and automation after your team has a solid foundation. Trying to implement everything simultaneously during migration leads to troubleshooting problems where you can’t distinguish “platform issue” from “configuration issue” from “we don’t understand how this works yet.”
Automate with Ansible
Oracle provides Ansible playbooks for common OLVM operational procedures. Use them.
Ansible integration with OLVM is mature and well-documented. Automating routine operations – VM provisioning, host maintenance, storage management – reduces manual errors and creates repeatable processes that survive staff changes.
If you’re coming from VMware’s PowerCLI scripts, the transition to Ansible is manageable. The concepts are similar; the syntax is different but not complex.
The Bottom Line
VMware-to-OLVM migration is not a weekend project, but it’s not a multi-quarter transformation either. The core platform migration is straightforward. What takes time is the preparation – updating guests, planning storage architecture, building backup integration, and developing operational familiarity.
The environments where we’ve seen the best outcomes are the ones where the team invested time in understanding OLVM before migrating production workloads. The ones where it got difficult were the ones that treated it as a simple lift-and-shift.
Take the time to do it properly. The platform is solid. The migration is manageable. The savings are real.
Need help with OLVM?
Whether you’re planning a VMware-to-OLVM migration, already running OLVM and need help optimising it, or looking for experienced hands to implement a new deployment – a 30-minute call is usually enough to assess your environment and work out the best path forward.
Book a 30-Minute Call.
