Providing high availability for applications and services is one of the most critical responsibilities that IT administrators have in today’s data centers. Planned or unplanned downtime may cause businesses to lose money, customers, and reputation.

Highly available systems demand fault-tolerant processes and operations that minimize interruptions by eliminating single points of failure and detecting failures as they happen. This is what failover clustering is all about. Our first article dedicated to Windows Server 2012 R2 failover clustering describes the main components of a failover cluster implementation, the quorum configuration options, and the shared storage preparation.


Main Components of a Failover Cluster

When configuring a Windows Server 2012 R2 failover cluster, it is essential to carefully consider the main components that make up the cluster configuration. Let’s review the most important ones:

  • Nodes. These are the member servers of a failover cluster. These servers communicate with each other and run the cluster services, resources, and applications associated with the cluster.
  • Networks. Refers to the networks that cluster nodes use to communicate with one another, with clients, and with the storage. Three different networks can be configured to provide enhanced functionality to the cluster:
    ◦ Private network: Dedicated to internal cluster communication. It is used by the nodes to exchange heartbeats and interact with the other nodes in the cluster. The failover cluster authenticates all internal communication.
    ◦ Public network: This network allows network clients to access cluster applications and services. It is possible to have a mixed public and private network, although this is not recommended because bottleneck and contention issues may strain the network connections.
    ◦ Storage network: These are dedicated channels to the shared storage. iSCSI storage requires special attention because it uses the same IP protocol and Ethernet devices available to the other networks; nevertheless, the storage network should be completely isolated from any other network in the cluster. Configuring redundant connections on all these networks increases cluster resilience.
  • Storage. This is the cluster storage system that is typically shared between cluster nodes. The failover cluster storage options on Windows Server 2012 R2 are:
    ◦ iSCSI: The iSCSI protocol encapsulates SCSI commands into data packets that are transmitted using the Ethernet and IP protocols. Packets are sent over the network using a point-to-point connection. Windows Server 2012 includes iSCSI Target Server as an installable feature. Once the iSCSI target is configured, the cluster nodes can connect to the shared storage using the iSCSI initiator software that is also part of the Windows Server 2012 operating system. Keep in mind that, in most production networks with high loads, system administrators will opt for hardware iSCSI host bus adapters (HBAs) over software iSCSI.
    ◦ Fibre Channel: Fibre Channel SANs typically perform better than iSCSI SANs, but they are much more expensive. Specialized hardware and cabling are needed, with options for point-to-point, switched, and arbitrated loop topologies.
    ◦ Shared serial attached SCSI: Implementing shared serial attached SCSI (SAS) requires that the two cluster nodes be physically close to each other. You may also be limited by the number of connections for cluster nodes on the shared storage devices.
    ◦ Shared .vhdx: Used with virtual machine guest clustering. A shared virtual hard disk should be located on a cluster shared volume (CSV) or on a Scale-Out File Server cluster. From there, it can be added to the virtual machines participating in a guest cluster by attaching it to their virtual SCSI controllers. The older .vhd format is not supported for shared disks.
  • Services and applications. These are the components that the failover cluster protects by providing high availability. Clients access services and applications and expect them to be available when needed. When a node fails, failover moves its services and applications to another node to ensure that those clustered services and applications continue to be available to network clients.
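As an illustration, once a cluster exists, the network roles described above can be reviewed and adjusted with the FailoverClusters PowerShell module. This is a minimal sketch, assuming typical lab network names ("Private Network", "Public Network", "Storage Network" are examples, not values from this article):

```powershell
# Sketch: inspect and set cluster network roles (network names are examples)
Import-Module FailoverClusters

# List all cluster networks with their current roles
Get-ClusterNetwork | Format-Table Name, Role, Address

# Role values: 0 = none (storage traffic only), 1 = cluster only (private),
# 3 = cluster and client (public)
(Get-ClusterNetwork "Private Network").Role = 1
(Get-ClusterNetwork "Public Network").Role  = 3
(Get-ClusterNetwork "Storage Network").Role = 0
```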

Server 2012 R2 Failover Clustering Quorum

Quorum defines the minimum number of nodes that must participate concurrently in the cluster to provide failover protection. Each node casts a vote and, if there are enough votes, the cluster can start or continue running. When there is an even number of nodes, the cluster can be configured to allow an additional witness vote from a disk or a file share. Each node holds an updated copy of the cluster configuration, which includes the number of votes that are required for the cluster to function properly.
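Once a cluster is running, you can check its current quorum configuration from PowerShell. A quick sketch; the cluster name is an example:

```powershell
# Display the quorum type and witness resource of an existing cluster
Import-Module FailoverClusters
Get-ClusterQuorum -Cluster "Cluster1" | Format-List *
```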

There are four quorum modes in Windows Server 2012:

Node majority

Each node that is online and connected to the network represents a vote. The cluster operates only with a majority, or more than half of the votes. Node majority is recommended for clusters with an odd number of servers.

Node and disk majority

Each node that is online and connected to the network represents a vote, but there is also a disk witness that is allowed to vote. The cluster runs successfully only with a majority, that is, more than half of the votes. This configuration relies on the nodes being able to communicate with one another in the cluster, and with the disk witness. It is recommended for clusters with an even number of nodes.

Node and file share majority

Each node that is online and connected to the network represents a vote, but there is also a file share that is allowed to vote. As in previous modes, the cluster operates only with a majority of the votes. This mode works in a similar way to node and disk majority but, instead of a disk witness, the cluster uses a file share witness.

No majority: disk only

The cluster has quorum if one node is available and in communication with a specific disk in the cluster storage. Only the nodes that are also in communication with that disk can join the cluster. This represents a single point of failure and it is the least desirable option.

On Windows Server 2012, the installation wizard automatically selects the quorum mode during the installation process. Once the failover cluster installation completes, you will have one of these two modes:

  • Node majority: if there is an odd number of nodes in the cluster.
  • Node and disk majority: if there is an even number of nodes in the cluster.

At any time you can switch to a different mode to accommodate changes in your network and cluster arrangement.
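For reference, each of the four modes can be set manually with the Set-ClusterQuorum cmdlet. A sketch, assuming a clustered disk resource named "Cluster Disk 1" and a witness file share that already exist (both names are examples):

```powershell
# Switch an existing cluster between the four quorum modes (pick one)
Import-Module FailoverClusters

Set-ClusterQuorum -NodeMajority
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"
Set-ClusterQuorum -NodeAndFileShareMajority "\\FileServer1\Witness"
Set-ClusterQuorum -DiskOnly "Cluster Disk 1"
```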

Windows Server 2012 R2 Dynamic Quorum

Windows Server 2012 R2 introduces significant changes to the way cluster quorum functions. When you install a Windows Server 2012 R2 failover cluster, dynamic quorum is selected by default. This process defines the quorum majority based on the number of nodes in the cluster and configures the disk witness vote dynamically as nodes are added to or removed from the cluster. If a cluster has an odd number of votes, the disk witness does not have a vote in the cluster; with an even number, the disk witness does have a vote. In other words, the cluster automatically decides whether to use the witness vote based on the number of voting nodes that are available in the cluster. Dynamic quorum allows a cluster to recalculate quorum when a node fails in order to keep the cluster running successfully, even when the number of nodes remaining in the cluster drops below 50 percent of the initial configuration. Another benefit of dynamic quorum is that, when you add or evict nodes, there is no need to change the quorum settings manually. The previous quorum modes that require manual configuration are still available, in case you feel some nostalgia for the old methodology.
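You can confirm that dynamic quorum is enabled, and see whether each node's vote currently counts, with a couple of quick PowerShell queries against an existing cluster (a sketch):

```powershell
# DynamicQuorum = 1 means the cluster adjusts votes automatically
Import-Module FailoverClusters
(Get-Cluster).DynamicQuorum

# DynamicWeight shows whether each node's vote currently counts (2012 R2)
Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight
```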

Windows Server 2012 R2 also allows you to start cluster nodes that do not have a majority by using the “force quorum resiliency” feature. This can be used when a cluster breaks into subsets of cluster nodes that are not aware of each other, a situation also known as a split-brain cluster scenario.
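In practice, forcing quorum on a surviving node can be sketched as follows; the node name is an example, and this should be treated as a last resort:

```powershell
# Force the cluster service to start without quorum on one node
Import-Module FailoverClusters
Start-ClusterNode -Name "Node1" -ForceQuorum

# Equivalent legacy command:
# net start clussvc /forcequorum
```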

Using Windows Server 2012 R2 iSCSI Target

For shared storage, our demonstration lab uses the iSCSI Target Server feature on Windows Server 2012 R2. To verify the status of the iSCSI Target feature on Windows Server 2012, run the following command from Windows PowerShell:

  • Get-WindowsFeature FS-iSCSITarget-Server

The above figure shows that the iSCSI Target has not been installed on the server yet. To install the iSCSI target feature, run the following Windows PowerShell command:

  • Install-WindowsFeature FS-iSCSITarget-Server

Configuring the iSCSI targets

After the iSCSI Target Server feature has been installed, you can go to Server Manager to complete the configuration. Here are the steps:

  1. In the Server Manager, in the navigation pane, click File and Storage Services.

  2. In the File and Storage Services pane, click iSCSI.

  3. In the iSCSI VIRTUAL DISKS pane, click TASKS, and then in the TASKS drop-down list box, click New iSCSI Virtual Disk.

  4. In the New iSCSI Virtual Disk Wizard, on the Select iSCSI virtual disk location page, under Storage location, click drive E, and then click Next.

  5. On the Specify iSCSI virtual disk name page, in the Name text box, type iLUN0, and then click Next.

  6. On the Specify iSCSI virtual disk size page, in the Size text box, type 500; in the drop-down list box, if necessary, switch to GB, select Dynamically expanding, and then click Next.

  7. On the Assign iSCSI target page, click New iSCSI target, and then click Next.

  8. On the Specify target name page, in the Name box, type iSAN, and then click Next.

  9. On the Specify access servers page, click Add.

  10. In the Select a method to identify the initiator dialog box, click Enter a value for the selected type, in the Type drop-down list box, click IP Address, in the Value text box, type 192.168.1.200, and then click OK.

  11. On the Specify access servers page, click Add.

  12. In the Select a method to identify the initiator dialog box, click Enter a value for the selected type; in the Type drop-down list box, click IP Address; in the Value text box, type 192.168.1.201, and then click OK.

  13. On the Specify access servers page, confirm that you have two IP addresses. These correspond to the two cluster nodes that will be using their iSCSI initiators to connect to the shared storage. Click Next.

  14. On the Enable Authentication page, click Next.

  15. On the Confirm selections page, click Create.

  16. On the View results page, wait until creation completes, and then click Close.

  17. In the iSCSI VIRTUAL DISKS pane, click TASKS, and then in the TASKS drop-down list box, click New iSCSI Virtual Disk.

  18. In the New iSCSI Virtual Disk Wizard, on the Select iSCSI virtual disk location page, under Storage location, click drive E, and then click Next.

  19. On the Specify iSCSI virtual disk name page, in the Name box, type iLUN1, and then click Next.

  20. On the Specify iSCSI virtual disk size page, in the Size box, type 300; in the drop-down list box, if necessary, switch to GB, select Dynamically expanding, and then click Next.

  21. On the Assign iSCSI target page, click iSAN, and then click Next.

  22. On the Confirm selection page, click Create.

  23. On the View results page, wait until the new iSCSI virtual disk is created, and then click Close.

By repeating steps 17 through 23, we created another 1 GB iSCSI virtual disk to be used as the disk witness in the failover cluster. The three drives are shown in the figure below.
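For administrators who prefer scripting, the wizard steps above can be approximated with the iSCSI Target cmdlets. A hedged sketch: the folder path is an assumption, while the target name, disk names, sizes, and initiator IP addresses mirror the lab values used above.

```powershell
# Sketch: create the iSCSI target and virtual disks from PowerShell
# (the E:\iSCSIVirtualDisks folder is an example path; New-IscsiVirtualDisk
# creates dynamically expanding .vhdx files unless -UseFixed is specified)
Import-Module IscsiTarget

# Target that accepts connections from the two cluster nodes
New-IscsiServerTarget -TargetName "iSAN" `
    -InitiatorIds @("IPAddress:192.168.1.200", "IPAddress:192.168.1.201")

# Virtual disks matching the wizard steps above
New-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\iLUN0.vhdx" -SizeBytes 500GB
New-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\iLUN1.vhdx" -SizeBytes 300GB
New-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\Witness.vhdx" -SizeBytes 1GB

# Map the three disks to the target
"iLUN0", "iLUN1", "Witness" | ForEach-Object {
    Add-IscsiVirtualDiskTargetMapping -TargetName "iSAN" `
        -Path "E:\iSCSIVirtualDisks\$_.vhdx"
}
```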

Closing Remarks

Failover clustering is a critical technology to provide high availability of services and applications. This article introduced the Windows Server 2012 R2 failover clustering components and the quorum configuration modes. It also illustrated the implementation of the iSCSI Target feature to provide the shared storage for a failover cluster. Our next article will demonstrate step by step how to connect the servers to the shared storage and how to install and configure Windows Server 2012 R2 failover clustering.