iSCSI in the LAB

I sometimes run internal Windows Failover Clustering training, and I am sometimes asked “How can I test this at home when I do not have a SAN?”. As you may know, even though clusters without shared storage are indeed possible these days, some cluster types still rely heavily on shared storage. For SQL Server clusters, for instance, a Failover Cluster Instance (which relies on shared storage) is a lot easier to operate than an Always On Availability Group (AOAG) cluster, which does not. There are many possible solutions to this problem; you could for instance use your home NAS as an iSCSI “SAN”, as many popular NAS boxes have iSCSI support. In this post, however, I will focus on how to build a Windows Server 2019 VM with an iSCSI target for lab usage. This is NOT intended as a guide for creating a production iSCSI server. It is most definitely a bad iSCSI server with poor performance, and it is not suited for anything requiring production-level performance.

“What will I need to do this?” I hear you ask. You will need a Windows Server 2019 VM with some spare storage mounted as a secondary drive, a domain controller VM and some cluster VMs. You could also use a physical machine if you want, but usually this kind of setup involves one physical machine running a set of VMs. In this setup I have four VMs:

  • DC03, the domain controller
  • SQL19-1 and SQL19-2, the cluster nodes for a SQL 2019 CTP 2.2 failover cluster instance
  • iSCSI19, the iSCSI server

The domain controller and SQL Servers are not covered in this post. See my Failover Cluster series for information on how to build a cluster.

Action plan

  • Make sure that your domain controller and iSCSI client VMs are running.
  • Make sure that Failover Cluster features are installed on the client VMs.
  • Enable the iSCSI service on the cluster nodes that are going to use the iSCSI server. All you have to do is start the iSCSI Initiator program; it will ask you to enable the iSCSI service:
  • clip_image001
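
If you prefer to script this step, the initiator service can also be enabled with PowerShell. A sketch, to be run on each cluster node:

```powershell
# Set the iSCSI initiator service (MSiSCSI) to start automatically
# and start it immediately
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI
```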

  • Create a Windows 2019 Server VM or physical server.
  • Add it to your lab domain.
  • Set a static IP, both IPv4 and IPv6. This is important for stability, as iSCSI does not play well with DHCP. In fact, all servers involved should have static IPs to ensure that the iSCSI storage is reconnected properly at boot.
  • Install the iSCSI target feature using PowerShell:
    • Install-WindowsFeature FS-iSCSITarget-Server
  • Add a virtual hard drive to serve as storage for the iSCSI volumes.
  • Initialize the drive, add a volume and format it. Assign the drive letter I:

  • clip_image002
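
The initialize/format step can also be scripted. A sketch, assuming the newly added data disk is the only RAW (uninitialized) disk in the VM:

```powershell
# Find the raw data disk, initialize it as GPT, create a partition
# with drive letter I: and format it as NTFS
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter I -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "iSCSI" -Confirm:$false
```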

  • Open Server Manager and navigate to File and Storage Services, iSCSI
  • Start the “New iSCSI virtual disk wizard”
  • Select the I: drive as a location for the new virtual disk

  • clip_image003

  • Select a name for your disk. Here, I have called it SQL19_Backup01.
  • clip_image004

  • Set a disk size in accordance with your needs. As this is for a LAB environment, select a Dynamically expanding disk type.
  • clip_image005
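
The virtual disk can also be created with PowerShell. `New-IscsiVirtualDisk` creates a dynamically expanding VHDX by default; the path matches this example, but the 20 GB size is just an assumption for illustration:

```powershell
# Create a dynamically expanding iSCSI virtual disk on the I: drive
# (20GB is an example size - adjust to your needs)
New-IscsiVirtualDisk -Path "I:\iSCSIVirtualDisks\SQL19_Backup01.vhdx" -SizeBytes 20GB
```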

  • Create a new target. I used SQL19Cluster as the name for the target.
  • Add the cluster nodes as initiators. You should have enabled iSCSI on the cluster nodes before this step (see above). The cluster nodes also have to be online for this to be successful.
  • clip_image006
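
These two steps can be scripted as well. A sketch, assuming the initiator IQNs follow the default Microsoft naming convention and the nodes live in a hypothetical lab.local domain (adjust both to match your environment):

```powershell
# Create the target and allow the two cluster nodes to connect,
# identified by their initiator IQNs (names below are examples)
New-IscsiServerTarget -TargetName "SQL19Cluster" -InitiatorIds @(
    "IQN:iqn.1991-05.com.microsoft:sql19-1.lab.local",
    "IQN:iqn.1991-05.com.microsoft:sql19-2.lab.local")

# Map the virtual disk to the new target
Add-IscsiVirtualDiskTargetMapping -TargetName "SQL19Cluster" `
    -Path "I:\iSCSIVirtualDisks\SQL19_Backup01.vhdx"
```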

  • As this is a lab, we skip the authentication setup.
  • clip_image007

  • Done!
  • clip_image008

  • On the cluster nodes, start the iSCSI Initiator.
  • Enter the name of the iSCSI target server and click the Quick Connect button.
  • clip_image009
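
Or from PowerShell on each node, assuming the target server is reachable by the name iSCSI19:

```powershell
# Register the target portal and connect persistently, so the
# session is restored automatically at boot
New-IscsiTargetPortal -TargetPortalAddress "iSCSI19"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```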

  • Bring the disk online, then initialize and format it on one of the cluster nodes.
  • clip_image010
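
A scripted equivalent, assuming the iSCSI disk is the only raw iSCSI-attached disk presented to the node (the drive letter S: and the volume label are example choices):

```powershell
# Locate the newly presented iSCSI disk, initialize it and format it
Get-Disk | Where-Object { $_.BusType -eq 'iSCSI' -and $_.PartitionStyle -eq 'RAW' } |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter S -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "SQL19_Backup01" -Confirm:$false
```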

  • Run storage validation in Failover Cluster Manager to verify your setup. Make sure that your disk(s) are part of the storage validation. You may have to add the disk to the cluster and set it offline for validation to work.
  • Check that the disk fails over properly to the other node. It should appear with the same drive letter or mounted folder on both nodes, but only on one node at a time unless you convert it to a Cluster Shared Volume (Cluster Shared Volumes are mainly for Hyper-V clusters).
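
Both checks can also be run from PowerShell on one of the nodes (this assumes the FailoverClusters module is installed along with the clustering feature):

```powershell
# Run only the storage validation tests against both nodes
Test-Cluster -Node SQL19-1, SQL19-2 -Include "Storage"

# After adding the disk to the cluster, move the Available Storage
# group to the other node to verify that the disk fails over
Move-ClusterGroup -Name "Available Storage" -Node SQL19-2
```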