Migrating Virtual Machines from VMware to Proxmox

What to Fix After Migration — and Best-Practice Proxmox VM Settings

Migrating virtual machines from VMware to Proxmox is usually straightforward: convert the disk, boot the VM, and confirm it runs.
But a VM that boots is not necessarily a VM that is correct, stable, or future-proof.

This article documents real-world migration issues and the correct cleanup process after moving Linux VMs (CentOS, Rocky, Alma, RHEL) from VMware to Proxmox.

It covers:

  1. What typically breaks or degrades after VMware → Proxmox migration
  2. How to fix those issues safely
  3. Best-practice Proxmox VM hardware and options
  4. Why migration cleanup and best practices are inseparable

1. Why VMware → Proxmox VMs Always Need Cleanup

During migration, Proxmox must accommodate VMware-specific hardware assumptions:

  • LSI Logic or SATA storage controllers
  • VMware-generated MAC addresses
  • Generic CPU models
  • Legacy boot orders
  • Host-specific initramfs images

To ensure the VM boots at all, disks are often attached temporarily as SATA or LSI, and MAC addresses may be preserved.

This is expected — but none of these should remain long-term.


2. Common Post-Migration Problems (and Why They Happen)

2.1 Legacy Storage Controllers (LSI / SATA)

Symptoms

  • VM works but disk I/O is slow
  • Higher CPU overhead
  • Old emulated hardware still present

Why

  • LSI 53C895A and SATA are fully emulated
  • They exist only for compatibility

Fix

  • Convert disks to VirtIO SCSI single

2.2 CentOS 8 / Rocky 8 Initramfs Issues (dracut shell)

Symptoms

  • VM boots into dracut:/#
  • No disks visible
  • VirtIO drivers missing

Why

  • Many VMware-origin images ship a host-only initramfs
  • That initramfs was built before the VM ever saw VirtIO hardware, so the VirtIO modules were never included

Correct Fix

  • Boot once using SATA/LSI
  • Rebuild initramfs to include VirtIO
# Persist the VirtIO module list so future kernel updates include it too
mkdir -p /etc/dracut.conf.d

cat > /etc/dracut.conf.d/virtio.conf <<'EOF'
add_drivers+=" virtio_pci virtio_scsi virtio_blk virtio_ring virtio_net "
EOF

# Rebuild the initramfs for the running kernel
dracut -f
  • Verify the VirtIO drivers are embedded (IMPORTANT) with lsinitrd | grep virtio; the output should include:
virtio_scsi.ko.xz
virtio_blk.ko.xz
virtio_net.ko.xz
  • Then convert the storage controller (Section 3, Step 5)

3. Recommended Safe Migration Workflow (Rocky / Alma / RHEL 8+)

This process avoids the CentOS-style failure entirely and works reliably.

Step 1 — Boot the migrated VM (still on SATA or LSI)

Confirm the OS boots cleanly.
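A quick way to confirm which bus the disk is currently attached to (TRAN shows sata for a SATA-attached disk; the columns are standard util-linux lsblk fields):

lsblk -o NAME,SIZE,TYPE,TRAN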


Step 2 — Proactively rebuild initramfs (recommended)

Even though Rocky 8+ usually includes VirtIO drivers by default, rebuilding ensures consistency.

dracut -f

This ensures the initramfs is rebuilt against the current kernel and avoids surprises later; Step 3 then confirms the VirtIO drivers are actually embedded.
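If the VM carries several installed kernels, plain dracut -f only rebuilds the image for the running one. A minimal sketch that rebuilds the initramfs for every installed kernel, using dracut's standard image/kernel-version arguments:

# Rebuild the initramfs for each kernel under /lib/modules
for kver in /lib/modules/*; do
    kver=$(basename "$kver")
    dracut -f "/boot/initramfs-${kver}.img" "$kver"
done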


Step 3 — Verify VirtIO support

lsinitrd | grep virtio_scsi

If output is present, the VM is safe to convert.
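Grepping for virtio_scsi covers the disk path; it is cheap to confirm the PCI transport, block, and network drivers in the same pass:

lsinitrd | grep -E 'virtio_(pci|scsi|blk|net)'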


Step 4 — Shut down the VM


Step 5 — Convert storage to VirtIO SCSI

In the Proxmox web UI (an equivalent qm sketch follows this list):

  • Disable Protection Mode
  • Detach disk (do not delete)
  • Set SCSI Controller → VirtIO SCSI single
  • Reattach disk as scsi0
  • Fix Boot Order → scsi0
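The same conversion can be sketched with qm from the node's shell. VM ID 100, the sata0 source slot, and the volume name local-lvm:vm-100-disk-0 are placeholders; adjust them to your VM:

# Switch the VM to the VirtIO SCSI single controller
qm set 100 --scsihw virtio-scsi-single

# Detach the disk from the legacy bus (it becomes an "unused" disk; data is kept)
qm set 100 --delete sata0

# Reattach the same volume as scsi0
qm set 100 --scsi0 local-lvm:vm-100-disk-0

# Boot from the new bus
qm set 100 --boot order=scsi0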

Step 6 — Boot and verify

The VM should boot cleanly with no dracut shell.
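A quick confirmation that the disk is now behind a VirtIO controller (exact lspci wording varies by distro):

lspci | grep -i virtio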


4. VMware MAC Addresses: A Critical Migration Gotcha

The rule

VMware MAC addresses should not be reused in Proxmox.


Why this matters

VMware MAC addresses:

  • Use VMware-specific OUI ranges
  • Are not guaranteed to be unique outside VMware
  • Can confuse Linux udev, NetworkManager, and systemd

After migration this can cause:

  • Interface renaming (e.g. ens192 → ens224)
  • Network not coming up
  • Duplicate MAC conflicts on Proxmox bridges
  • Static IPs silently failing
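To audit a whole node for carried-over VMware MACs, grep the VM configs under /etc/pve/qemu-server for the common VMware OUI prefixes (00:0c:29 and 00:05:69 for auto-generated addresses, 00:50:56 for vCenter-assigned ones):

grep -riE '00:(0c:29|50:56|05:69)' /etc/pve/qemu-server/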

Correct fix (simple but important)

After importing a VM:

  1. Hardware → Network Device
  2. Set MAC Address → Auto
  3. Ensure Model → VirtIO
  4. Boot the VM

This forces Proxmox to generate a clean, unique MAC and prevents networking issues.
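The CLI equivalent: omit the MAC when (re)defining the NIC and Proxmox generates a fresh, unique address. VM ID 100 and bridge vmbr0 are assumptions:

# No MAC given, so Proxmox generates a clean one
qm set 100 --net0 virtio,bridge=vmbr0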


When not to use Auto (rare)

  • MAC-licensed software
  • Legacy appliances with hardcoded rules

In those cases, manually assign a new MAC — do not reuse the VMware one.
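If you must pin a MAC manually, a locally administered address (the 02: prefix shown here) avoids colliding with any vendor OUI. The address below is an arbitrary example:

qm set 100 --net0 virtio=02:AA:BB:CC:DD:01,bridge=vmbr0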


5. Proxmox VM Best-Practice Hardware Settings

CPU

Type:      host
Sockets:   1
Cores:     2–8 (as needed)
NUMA:      off (unless large VM)
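The same CPU settings applied with qm (VM ID 100 and the core count are placeholders):

qm set 100 --cpu host --sockets 1 --cores 4 --numa 0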


BIOS / Machine

BIOS:      SeaBIOS
Machine:   i440fx


Storage

Controller:     VirtIO SCSI single
Bus:            SCSI (scsi0)
Cache:          No cache
IO Thread:      Enabled
Async IO:       io_uring
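As a one-shot qm sketch of all of the above (VM ID 100 and the volume name are placeholders; ssd=1 only belongs on SSD-backed storage, per the next subsection):

qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1,aio=io_uring,cache=none,ssd=1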


SSD Emulation (important distinction)

  • Enable if backend is SSD / NVMe / SSD-backed Ceph
  • Disable if backend is spinning SAS/SATA (even on ZFS)

ZFS does not make HDDs behave like SSDs.


Networking

Model: VirtIO
MAC:   Auto (Proxmox-generated)


6. Options Tab Essentials

  • Boot Order: scsi0
  • QEMU Guest Agent: Enabled
  • Protection: Disabled during maintenance, enabled afterward
  • Hotplug: Disk / Network only
  • RTC Local Time: Disabled for Linux
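The same options from the shell (VM ID 100 is a placeholder), plus the guest-side install that "QEMU Guest Agent: Enabled" depends on:

# Proxmox side: boot order, guest agent, hotplug classes, UTC clock
qm set 100 --boot order=scsi0 --agent enabled=1 --hotplug disk,network --localtime 0

# Guest side (RHEL family): the agent service must actually be running
dnf install -y qemu-guest-agent
systemctl enable --now qemu-guest-agent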

7. Why Migration Fixes and Best Practices Are Linked

Migration often leaves behind:

  • Temporary compatibility hardware
  • Legacy controllers
  • Unsafe CPU topology
  • Invalid MAC addresses

Cleaning these up is mandatory if you want:

  • Predictable performance
  • Clean upgrades
  • Reliable backups
  • Stable networking

Migration is step one. Optimization is step two.


Final Takeaway

SATA, LSI, VMware MACs, and generic CPUs are bridges — not destinations.
VirtIO, host CPU, clean initramfs, and Proxmox-generated MACs are the end state.

If you do this cleanup once, your Proxmox cluster will behave like it was built natively — because effectively, it now is.