Our previous article in this series explained the main components of a Windows Server 2012 R2 failover cluster, the quorum configuration options, and the shared storage preparation. This article expands on the requirements to implement failover clustering on Windows Server 2012 R2, describes the step-by-step process to connect the servers to shared storage, and walks through the installation of a Windows Server 2012 R2 failover cluster. After the cluster is created, Windows PowerShell is used to demonstrate a generic application role configuration.


Requirements and Recommendations for a Successful Failover Cluster Implementation

A Windows Server 2012 R2 failover cluster can have from two to 64 servers, also known as nodes. Once configured, these computers work together to increase the availability of applications and services. However, the requirements for a failover cluster configuration are more stringent than those of most other Windows Server network services you may manage.

Let’s review some of the most important limitations:

  • It is recommended to install similar hardware on each node.
  • You must run the same edition of Windows Server 2012 or Windows Server 2012 R2. The edition can be Standard or Datacenter, but they cannot be mixed in the same cluster.
  • Equally important, configure all nodes with the same installation option: either Server Core or Full installation, but not a mix of both.
  • Every node in the cluster should also have similar software updates and service packs.
  • You must include matching processor architecture on each cluster node. This means that you cannot mix Intel and AMD processor families in the same cluster.
  • When using serial attached SCSI or Fibre Channel storage, the controllers or host bus adapters (HBA) should be identical in all nodes. The controllers should also run the same firmware version.
  • If Internet SCSI (iSCSI) is used for storage, each node should have at least one network adapter or host bus adapter committed exclusively to the cluster storage. The network dedicated to iSCSI storage connections should not carry any other network communication traffic. It is recommended to use a minimum of 2 network adapters per node. Gigabit Ethernet (GigE) or higher is strongly suggested for better performance.
  • Each node should have identical network adapters installed, supporting the same IP protocol version, speed, duplex, and flow control options.
  • The network adapters in each node must obtain their IP addresses using the same method: either they are all configured with static IP addresses or they all use dynamic IP addresses from a DHCP server.
  • Each server in the cluster must be a member of the same Active Directory domain and use the same DNS server for name resolution.
  • The networks and hardware equipment used to connect the servers in the cluster should be redundant, so that the nodes maintain communication with one another after a single link fails, a node crashes, or a network device malfunctions.
  • In order to access Microsoft support, all the hardware components in your cluster should bear the “Certified for Windows Server 2012” logo and they must pass the “Validate a Configuration” Wizard test. More on this later in the article.
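Several of the requirements above can be spot-checked from PowerShell before you start. As a minimal sketch, assuming PowerShell remoting is enabled and using the ServerA1 and ServerA2 node names from the lab later in this article, you can compare the operating system edition, version, and architecture on each prospective node:

```powershell
# Query the OS caption, version, and architecture on each prospective node.
# Assumes PowerShell remoting is enabled on both servers (Enable-PSRemoting).
Invoke-Command -ComputerName ServerA1, ServerA2 -ScriptBlock {
    Get-CimInstance -ClassName Win32_OperatingSystem |
        Select-Object Caption, Version, OSArchitecture
} | Format-Table PSComputerName, Caption, Version, OSArchitecture -AutoSize
```

Any mismatch in edition, version, or architecture should be resolved before proceeding.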

Connecting the Servers to Shared Storage

Our lab for this demonstration uses two physical Windows Server 2012 R2 nodes named ServerA1 and ServerA2. Before installing the failover clustering feature, let’s connect the servers to the iSCSI target which contains the shared storage that was created in the first article of this series. Starting with ServerA1, here are the steps:

  1. In the Server Manager, click Tools, and then click the iSCSI Initiator. If prompted, click Yes in the Microsoft iSCSI dialog box.

  2. In the iSCSI Initiator Properties, click the Discovery tab and then click Discover Portal.

  3. In the Discover Target Portal dialog box, in the IP address or DNS name box, type the IP address of the iSCSI Target server, and then click OK.

  4. Click the Targets tab, click Refresh, select iqn.1991-05.com.microsoft:dc1-isan-target, and then click Connect.

  5. In the Connect to Target box, make sure that Add this connection to the list of Favorite Targets is selected, and then click OK.

  6. In the iSCSI Initiator Properties, verify that the Status is Connected and click OK.

Steps 1 through 6 must also be executed on ServerA2 so that both servers can have access to the shared storage available from the iSCSI Target Server.
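The same initiator configuration can be scripted with the iSCSI cmdlets that ship with Windows Server 2012 R2. This is a sketch only; the portal address below (10.0.0.10) is a placeholder for your iSCSI Target server's IP address:

```powershell
# Register the iSCSI Target Server portal (substitute your target's IP address).
New-IscsiTargetPortal -TargetPortalAddress 10.0.0.10

# List the targets exposed by the portal, then connect persistently so the
# session is restored after a reboot (the "Favorite Targets" behavior).
Get-IscsiTarget
Connect-IscsiTarget -NodeAddress iqn.1991-05.com.microsoft:dc1-isan-target -IsPersistent $true

# Confirm that the session is connected.
Get-IscsiConnection
```

Running the same commands on ServerA2 gives both nodes access to the shared storage.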

Next, let’s configure the volumes using Disk Management on ServerA1.

  1. In the Server Manager, click Tools, and then click Computer Management.

  2. Expand Storage, then click Disk Management and verify that you have three new disks that need to be configured. These are the iSCSI Target disks.

  3. Right-click Disk 9, and then click Online.

  4. Right-click Disk 9, and then click Initialize disk. In the Initialize Disk dialog box, click OK.

  5. Right-click the unallocated space next to Disk 9, and then click New Simple Volume.

  6. On the Welcome page, click Next.

  7. On the Specify Volume Size page, click Next.

  8. On the Assign Drive Letter or Path page, click Next.

  9. On the Format Partition page, in the Volume Label box, type CSV. Select the Perform a quick format check box, and then click Next.

  10. Click Finish.

Repeat steps 1 through 10 for Disks 10 and 11. For Disk 10, change the label to Data, and for Disk 11, change the label to Witness. If you run your own lab, the disk numbers are likely to be different, but the steps are identical. Once all the steps are completed on ServerA1, go to ServerA2 and, from Disk Management, right-click each disk and bring it online.
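If you prefer to script the disk preparation instead of using Disk Management, the Storage cmdlets can perform the same steps. This sketch assumes the lab's disk numbers (9, 10, 11) and labels; adjust both for your own system:

```powershell
# Bring each new iSCSI disk online, initialize it, and create a formatted volume.
$labels = @{ 9 = 'CSV'; 10 = 'Data'; 11 = 'Witness' }
foreach ($num in $labels.Keys) {
    Set-Disk -Number $num -IsOffline $false            # Bring the disk online
    Initialize-Disk -Number $num -PartitionStyle GPT   # Initialize it
    New-Partition -DiskNumber $num -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel $labels[$num] -Confirm:$false
}
```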

Both servers should show the disks configured as in the figure below.

Installing the Windows Server 2012 R2 Failover Clustering Feature

Now that both servers are connected to the shared storage, the next phase is to install the failover clustering feature on ServerA1 and ServerA2 using either Windows PowerShell or Server Manager.

The process is exactly the same on both servers, so let’s demonstrate it on ServerA1.

  1. Using Windows PowerShell, verify that the failover clustering feature is not installed on the server by running the following command:
  • Get-WindowsFeature Failover-Clustering | FT -AutoSize

  2. To install the failover clustering feature, run this command from PowerShell:
  • Install-WindowsFeature Failover-Clustering -IncludeManagementTools
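Because the feature must be present on both nodes, the installation can also be pushed to ServerA1 and ServerA2 in one pass, assuming PowerShell remoting is enabled:

```powershell
# Install the failover clustering feature and its management tools on both nodes.
Invoke-Command -ComputerName ServerA1, ServerA2 -ScriptBlock {
    Install-WindowsFeature Failover-Clustering -IncludeManagementTools
}
```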

Validating the Servers for Failover Clustering

Once the failover clustering feature is installed on both servers, running the wizard to validate the servers for failover clustering allows you to generate a detailed report indicating possible areas that may need to be fixed before creating the cluster. Let’s run the Validate a Configuration Wizard from ServerA1.

  1. In the Server Manager, click Tools, and then click Failover Cluster Manager.

  2. In the Actions pane of the Failover Cluster Manager, click Validate Configuration.

  3. In the Validate a Configuration Wizard, click Next.

  4. On the Select Servers or a Cluster page, in the Enter Name box, type ServerA1, and then click Add.

  5. In the Enter Name box, type ServerA2, and then click Add.

  6. Verify that ServerA1 and ServerA2 are shown in the Selected servers box and click Next.

  7. Verify that Run all tests (recommended) is selected, and then click Next.

  8. On the Confirmation page, click Next.

  9. Wait for the validation tests to finish. This may take several minutes. On the Summary page, click View Report. It is recommended that you keep this report for future reference.

  10. Verify that all tests are completed without errors. You can click on areas of the report to find out more details on the configurations that show warnings.

  11. On the Summary page, click to remove the checkmark next to Create the cluster now using the validated nodes, and click Finish.
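The same validation can be run from PowerShell with the Test-Cluster cmdlet, which saves a report (an .mht file) that you can review afterward:

```powershell
# Run all validation tests against both prospective nodes.
Test-Cluster -Node ServerA1, ServerA2
```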

Creating the Failover Cluster

Even though there were some warnings, the servers passed the validation test, so we can now proceed to create our cluster. The following steps will be executed using Failover Cluster Manager on ServerA1, but either node can be used to complete this process.

  1. In the Failover Cluster Manager, in the center pane, under Management, click Create Cluster.

  2. On the Before You Begin page of the Create Cluster Wizard, read the information and click Next.

  3. In the Enter server name box, type ServerA1, click Add, and then repeat for ServerA2.

  4. Verify the entries, and then click Next.

  5. In Access Point for Administering the Cluster, in the Cluster Name box, type ClusterA. Under Address, type an available static IP address for the cluster, and then click Next.

  6. In the Confirmation dialog box, verify the information, and then click Next.

  7. On the Summary page, confirm that the cluster was successfully created and click Finish to return to the Failover Cluster Manager.
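As an alternative to the wizard, the same cluster can be created with a single PowerShell command. The static address below (10.0.0.20) is a placeholder; substitute an available IP address on your cluster network:

```powershell
# Create the cluster using the two validated nodes and a static address.
New-Cluster -Name ClusterA -Node ServerA1, ServerA2 -StaticAddress 10.0.0.20
```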

After the Create Cluster Wizard is done, you can verify that a computer object with the cluster’s name has been created in Active Directory. See figure below.

Also, a host name is automatically registered in DNS for the new cluster. See figure below.

The failover cluster feature predefines specific roles that can be configured for failover protection, including DFS Namespace Server, DHCP Server, File Server, iSCSI Target Server, WINS Server, Hyper-V Replica Broker, and Virtual Machines. It is possible to cluster applications and services that are not cluster-aware by using the Generic Application or Generic Service role, respectively. The figure below shows the roles representing services and applications that can be configured for high availability.

Either the Failover Cluster Manager or Windows PowerShell can be used to configure these roles. The following code provides an example of applying the Generic Application role using Windows PowerShell.

Add-ClusterGenericApplicationRole -CommandLine notepad.exe `
-Name "notepad application"

The following command can be used to verify that the generic application is online:

Get-ClusterResource "notepad application" | fl

Failover Cluster Manager also shows that the generic application is up and running. See the figure below.
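A quick way to verify failover protection is to move the role to the other node and check that it comes online there. This sketch assumes the role name used earlier in this lab:

```powershell
# Move the generic application role to ServerA2 and confirm its state.
Move-ClusterGroup -Name "notepad application" -Node ServerA2
Get-ClusterGroup -Name "notepad application"
```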

Failover Clustered File Server Options

Windows Server 2012 R2 supports two clustered file server implementations: Scale-Out File Server for application data and File Server for general use.

Scale-Out File Server for Application Data

Also known as an active-active cluster, this feature was introduced in Windows Server 2012 and is the recommended clustered file server option for deploying Hyper-V nodes and Microsoft SQL servers over Server Message Block (SMB). This high-performance solution allows you to store server application data on file shares that are concurrently available online on all nodes. Because the aggregated bandwidth of all the nodes becomes the maximum cluster bandwidth, the performance boost can be very significant, and you can increase the total bandwidth by bringing additional nodes into the cluster. These scale-out file shares require SMB 3.0 or higher and are not available in any version of Windows Server prior to Windows Server 2012.
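Configuring this option from PowerShell takes a single cmdlet once the cluster has a Cluster Shared Volume to host the shares. The role name below (SOFS1) is a placeholder:

```powershell
# Add a Scale-Out File Server role to the cluster (run on any node).
Add-ClusterScaleOutFileServerRole -Name SOFS1
```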

File Server for General Use

This is the traditional failover clustering solution that has been available in previous versions of Windows Server, in which only one node is active at a time in an active-passive configuration. It supports some important features that cannot be implemented on Scale-Out File Servers, such as data deduplication, DFS Replication, Dynamic Access Control, Work Folders, NFS shares, BranchCache, and File Server Resource Manager file screens and quota management.
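The general-use option can also be configured from PowerShell. The role name, storage, and address below are placeholders for this lab:

```powershell
# Add a traditional (active-passive) clustered file server role.
Add-ClusterFileServerRole -Name FS1 -Storage "Cluster Disk 2" -StaticAddress 10.0.0.21
```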

Closing Remarks

Installing the Windows Server 2012 R2 failover clustering feature involves some strict hardware and software requirements. This article demonstrated how to connect the cluster nodes to shared storage, how to create a cluster, and how to configure a generic application role using Windows PowerShell. There is more to do now that the cluster is up and running, as we can configure additional services and applications for failover protection. After all, that is the whole idea of setting up the cluster.

Our next and final article in this series will walk through the configuration of a highly available file server. And saving the best for last, you will see the implementation of cluster shared volumes (CSV) and how they are used on a Hyper-V cluster to provide failover protection in a virtualized environment. Live migration will be tested to validate the functionality of the Hyper-V cluster.