iSCSI with Hyper-V
Previously, I talked about assigning storage to the virtual machine in the form of a
virtual hard disk, which required the Hyper-V host to connect to the storage and then
create the VHDX files on it. There are, however, other ways to present storage to
virtual machines.
iSCSI is a popular alternative to Fibre Channel connectivity that allows block-level
connectivity to SAN storage using the existing network infrastructure instead of
requiring a completely separate fabric (cards, cables, switches) just for storage. iSCSI
works by carrying the traditional SCSI commands over IP networks. While it is
possible to run iSCSI over the existing network infrastructure, if iSCSI is being used as
the primary storage transport, it is common to have a dedicated network connection
for iSCSI to guarantee the required bandwidth. Ideally, you would instead leverage larger
network connections such as 10 Gbps and use Quality of Service (QoS) to ensure that
iSCSI receives the bandwidth it needs.
In addition to using iSCSI on the Hyper-V host to access storage, you can also
leverage it within virtual machines to provide storage directly to those virtual
machines. This includes storage that can be accessed by multiple virtual
machines concurrently, known as shared storage, which is required in many scenarios
in which clusters are implemented within virtual machines, known as guest
clustering. If you intend to leverage iSCSI within virtual machines, it is a good idea to
have dedicated networking for iSCSI. This requires creating a separate virtual switch
on the Hyper-V hosts connected to the adapters allocated for iSCSI and then creating
an additional network adapter in the virtual machines connected to the virtual switch.
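The steps above can be sketched in PowerShell from the Hyper-V host. This is a minimal example, not a complete configuration; the switch name, physical adapter name, and VM name (`iSCSI-Switch`, `NIC3`, `VM1`) are placeholders you would replace with your own values:

```powershell
# Create an external virtual switch bound to the physical adapter
# set aside for iSCSI traffic. -AllowManagementOS $false keeps the
# host from sharing this adapter, leaving it dedicated to VM iSCSI.
New-VMSwitch -Name "iSCSI-Switch" -NetAdapterName "NIC3" `
    -AllowManagementOS $false

# Add an additional network adapter to the virtual machine and
# connect it to the iSCSI virtual switch.
Add-VMNetworkAdapter -VMName "VM1" -SwitchName "iSCSI-Switch" `
    -Name "iSCSI"
```

Inside the guest, this new adapter is then configured with an IP address on the iSCSI network and used by the guest's iSCSI Initiator.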
If the iSCSI communication is important to the business, you may want to implement
redundant connectivity. This is accomplished by creating multiple virtual switches
connected to various network adapters, creating multiple virtual network adapters in
the virtual machines (connected to the virtual switches), and then using MPIO within
the virtual machine. I talk more about MPIO in the section “Understanding Virtual
Fibre Channel.” Do not use NIC Teaming with iSCSI because it’s not supported, except
in one scenario.
The exception is the shared NIC scenario (as discussed in Chapter 3): separate
network adapters are teamed together via the Windows Server NIC Teaming
solution (it must be the Windows in-box NIC Teaming solution), and the NIC team
then has multiple virtual network adapters created at the host level for various
purposes, one of which is for iSCSI. In that configuration, NIC Teaming is supported,
but this is the only time it can be used with iSCSI. If you have dedicated network
adapters for iSCSI, then use MPIO.
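As a rough sketch of the MPIO approach inside a guest, the following PowerShell enables MPIO for iSCSI and connects two sessions to the same target over the two virtual network adapters. The portal addresses, initiator IPs, and target IQN shown here are placeholders for illustration:

```powershell
# Install the MPIO feature in the guest OS (Windows Server).
Install-WindowsFeature -Name Multipath-IO

# Have MPIO automatically claim iSCSI-attached disks.
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Register the target portal, then connect one session per path,
# each bound to a different virtual network adapter's IP address.
New-IscsiTargetPortal -TargetPortalAddress "192.168.10.50"
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:target1" `
    -InitiatorPortalAddress "192.168.10.11" -IsMultipathEnabled $true `
    -IsPersistent $true
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:target1" `
    -InitiatorPortalAddress "192.168.11.11" -IsMultipathEnabled $true `
    -IsPersistent $true
```

With both sessions established, MPIO presents the LUN as a single disk and handles path failover if one virtual switch or adapter fails.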
There are two parts to iSCSI: the iSCSI Initiator, which is the client software that
allows connectivity to iSCSI storage, and the iSCSI target, which is the server software.
The iSCSI Initiator has been a built-in component of Windows since Windows Server