1. A container VM was created on the target host using the existing VM’s
configuration.
2. The memory of the VM was copied from the source to the target VM.
3. Because the VM was still running while its memory was copied, some of the
memory content changed. Those dirty pages were copied over again. This process
repeated for a number of iterations, with the number of dirty pages shrinking by
roughly an order of magnitude each pass, so the time needed to copy the remaining
dirty pages fell sharply (see the sketch after this list).
4. Once the number of dirty pages was very small, the VM was paused and the
remaining memory pages were copied over, along with the processor and device
state.
5. The VM was resumed on the target Hyper-V host.
6. An unsolicited (gratuitous) ARP was sent over the network, notifying switches
and routing devices that the VM’s IP address had moved.
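The shrinking rounds in Step 3 are easy to see with a toy model. The Python sketch below is illustrative only, not Hyper-V’s implementation; the memory size, the 10 Gbps migration link, and the 10 percent re-dirty rate are assumptions chosen simply to show how each pass takes roughly a tenth of the time of the previous one.

```python
"""Toy model of the iterative pre-copy in Step 3 (not Hyper-V code)."""

PAGE_SIZE = 4096                 # bytes per memory page
LINK_BYTES_PER_SEC = 1.25e9      # assumed 10 Gbps migration network
DIRTY_FRACTION = 0.1             # assume ~10% of copied pages are re-dirtied
PAUSE_THRESHOLD = 1_000          # hand off to Step 4 at this page count

def precopy(pages=1_000_000):
    """Copy memory in passes; each pass re-sends pages dirtied during the last."""
    round_no = 0
    while pages > PAUSE_THRESHOLD:
        round_no += 1
        copy_ms = pages * PAGE_SIZE / LINK_BYTES_PER_SEC * 1000
        print(f"round {round_no}: {pages:>9,} pages, ~{copy_ms:,.1f} ms to copy")
        # Whatever was dirtied while this round was in flight goes into the next.
        pages = int(pages * DIRTY_FRACTION)
    print(f"{pages:,} pages left -> pause the VM for the final copy (Step 4)")

precopy()
```

With these assumptions the rounds take roughly 3,277 ms, 328 ms, and 33 ms before the page count drops below the threshold, which is the order-of-magnitude shrinkage described in Step 3.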
The whole process can be seen in Figure 1.5. You may be concerned about Step 4,
where the VM is paused while the final few pages of dirty memory are copied. This
pause is common to all hypervisors and is unavoidable; however, it lasts only
milliseconds, far too brief to notice and well below the TCP connection timeout,
which means no connections to the server are lost.
Figure 1.5 A high-level view of the Live Migration process
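To put rough numbers on that claim, here is a back-of-the-envelope check in Python. The 1,000 remaining pages and 10 Gbps link reuse the assumptions of the earlier sketch, and the retransmission schedule is the generic doubling backoff of RFC 6298 rather than any particular Windows default.

```python
remaining_pages = 1_000
page_size = 4096                  # bytes
link_bytes_per_sec = 1.25e9       # assumed 10 Gbps migration network

pause_ms = remaining_pages * page_size / link_bytes_per_sec * 1000
print(f"final-copy pause: ~{pause_ms:.1f} ms")        # roughly 3 ms

# A segment lost during the pause is simply retransmitted with a doubling
# timeout; the sender only abandons the connection after many retries,
# tens of seconds later - several orders of magnitude longer than the pause.
rto_s, waited_s = 1.0, 0.0
for _ in range(6):                # six retransmissions, doubling the RTO
    waited_s += rto_s
    rto_s *= 2
print(f"TCP keeps retrying for ~{waited_s:.0f} s before giving up")
```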
Live Migration solved the problem of pausing the virtual machine to copy its memory
between hosts. It did not, however, solve the problem that NTFS couldn’t be shared
between hosts, so the LUN containing the VM still had to be dismounted from the
source host and mounted on the target, which took time. A second new technology
solved this problem: Cluster Shared Volumes, or CSV.
CSV allows an NTFS-formatted LUN to be available simultaneously to all hosts in the
cluster. Every host can read and write to the CSV volume, which removes the need to
dismount and mount the LUN as VMs move between hosts. This also solved the
problem of needing one LUN for every VM just to enable each VM to be moved
independently.
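A small model makes the difference concrete. The sketch below is illustrative only; the four-second mount and dismount costs are assumptions standing in for whatever the storage hand-off actually takes, and the functions model the two designs rather than any real clustering API.

```python
"""Toy contrast of a VM move with a per-VM LUN vs. a Cluster Shared Volume."""

DISMOUNT_SECS = 4.0   # assumed time for the source host to release the LUN
MOUNT_SECS = 4.0      # assumed time for the target host to mount it

def storage_handoff(use_csv: bool) -> float:
    """Seconds spent moving the VM's storage access between hosts."""
    if use_csv:
        # Every cluster node already has the volume mounted read/write,
        # so only VM ownership changes; no mount or dismount occurs.
        return 0.0
    # Dedicated-LUN model: one LUN per VM, dismounted and remounted per move.
    return DISMOUNT_SECS + MOUNT_SECS

print(f"dedicated LUN per VM:  {storage_handoff(False):.0f} s storage hand-off")
print(f"Cluster Shared Volume: {storage_handoff(True):.0f} s storage hand-off")
```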