Our previous article in this series covered the steps to connect cluster nodes to shared storage, the installation of the Windows Server 2012 failover clustering feature, and the configuration of a cluster role using Windows PowerShell. This article demonstrates how to deploy and configure a highly available file server, implement Cluster Shared Volumes (CSV), and manage a Hyper-V cluster to provide failover protection for virtual machines.

Deploying and Configuring a Highly Available File Server

Our lab uses a cluster named ClusterA.abc.com, which consists of two nodes, ServerA1 and ServerA2.

You must install the file server role service on every cluster node before a highly available file server can be configured on the cluster. For our lab, both ServerA1 and ServerA2 already have the file server role service installed. Three disks have also been added to the cluster; one serves as the disk witness (quorum), and the other two are used for data storage. To deploy the clustered file server, let's complete the following steps (a Windows PowerShell alternative follows the steps):

  1. In the Failover Cluster Manager, expand ClusterA.abc.com. Expand Storage, and click Disks. Make sure that Cluster Disk 1, Cluster Disk 2, and Cluster Disk 3 are present and online.

  2. Right-click Roles, and then select Configure Role.

  3. On the Before You Begin page, click Next.

  4. On the Select Role page, select File Server, and then click Next.

  5. On the File Server Type page, click File Server for general use, and then click Next.

  6. On the Client Access Point page, in the Name box, type GeneralFS; in the Address box, type 192.168.1.215; and then click Next. GeneralFS joins Active Directory as a computer object, visible in the Computers container of Active Directory Users and Computers or in the Active Directory Administrative Center. The same name is also registered on the DNS server with its corresponding IP address.

  7. On the Select Storage page, select the Cluster Disk 2 check box, and then click Next.

  8. On the Confirmation page, notice the network name and the name of the organizational unit (OU) where the cluster account will be created, and then click Next.

  9. On the Summary page, click Finish.

  10. Under ClusterA.abc.com, click Roles to confirm that the GeneralFS file server role is up and running. Note that ServerA1 is the GeneralFS role’s Owner Node. It is important to test failover protection to verify that ServerA2 can also take ownership of GeneralFS if ServerA1 becomes unavailable.

  11. To test failover protection, right-click GeneralFS, select Move, and then click Select Node.

  12. In the Move Clustered Role dialog box, select ServerA2 and click OK.

  13. Verify that the role failed over to ServerA2, which is now the owner of GeneralFS.
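If you prefer Windows PowerShell, the clustered file server can be deployed with the FailoverClusters cmdlets. The following is a minimal sketch that assumes our lab's names (ClusterA, GeneralFS, Cluster Disk 2, and the 192.168.1.215 address); adjust them for your environment:

    # Install the file server role service on each node (skip if already installed)
    Invoke-Command -ComputerName ServerA1, ServerA2 -ScriptBlock {
        Install-WindowsFeature -Name FS-FileServer
    }

    # Create the clustered file server role, bound to Cluster Disk 2 and the
    # GeneralFS client access point with its static IP address
    Add-ClusterFileServerRole -Cluster ClusterA -Name GeneralFS `
        -Storage "Cluster Disk 2" -StaticAddress 192.168.1.215

    # Test failover by moving the role to the second node
    Move-ClusterGroup -Name GeneralFS -Node ServerA2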

Add a Shared Folder to a Highly Available File Server

Now that the clustered file server has been created, it’s time to add shared folders to further assess the functionality of this highly available solution.

  1. In Failover Cluster Manager, expand ClusterA.abc.com, and then click Roles.

  2. Right-click GeneralFS, and then select Add File Share.

  3. In the New Share Wizard, on the Select the profile for this share page, click SMB Share – Quick, and then click Next.

  4. On the Select the server and the path for this share page, under Server, make sure that GeneralFS is selected and click Next.

  5. On the Specify share name page, in the Share name box, type Reports, and then click Next.

  6. On the Configure share settings page, review the available settings. Note that the Enable BranchCache on the file share option is unavailable because the BranchCache for Network Files role service is not installed on the server. Click Next.

  7. On the Specify permissions to control access page, click Next.

  8. On the Confirm selections page, verify the settings assigned to the file share and click Create.

  9. On the View results page, confirm that the share was successfully created and click Close.
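The same share can also be created from Windows PowerShell on the node that currently owns GeneralFS. This is a sketch; the F: drive letter and folder path are assumptions, so substitute whatever path Cluster Disk 2 exposes in your environment:

    # Create the folder on the clustered data disk (drive letter is an assumption)
    New-Item -Path F:\Shares\Reports -ItemType Directory

    # Publish the folder as an SMB share scoped to the GeneralFS client access point
    New-SmbShare -Name Reports -Path F:\Shares\Reports -ScopeName GeneralFS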

Failover and Failback

Failover transfers the responsibility for providing access to resources from one node in a cluster to another. This can happen when a system administrator deliberately moves resources to another node, for example to rebalance loads or to perform maintenance. Unplanned downtime on a node, caused by a hardware failure or a network outage, can also trigger failover, as can the failure of a service on the active node.

The failover process takes all the resources in the instance offline in an order defined by the instance’s dependency levels. Dependent resources are taken offline first, followed by the resources they rely on. For example, if a service depends on a cluster disk resource, the Cluster service takes the service offline first, which allows the service to write changes to the disk before the disk is taken offline.
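You can inspect this dependency chain with PowerShell. A quick sketch, using the GeneralFS role created earlier (resource names vary, so copy the exact name from the first command's output):

    # List the resources that make up the GeneralFS role
    Get-ClusterResource | Where-Object { $_.OwnerGroup -eq "GeneralFS" }

    # Show what a given resource depends on (for example, the network name
    # resource depends on the IP address resource)
    Get-ClusterResourceDependency -Resource "GeneralFS"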

After all the resources are offline, the Cluster service attempts to transfer the clustered role to the node that is listed next on the clustered role’s list of preferred owners (see step 3 in the Configure Failover and Failback Settings lab below). Once the Cluster service moves the clustered role to another node, it brings all the resources online in the reverse of the order in which they were taken offline. In our cluster disk and service example, the disk comes online first and then the service, so the service never tries to write to a cluster disk that is not yet available. Let’s review the failover and failback settings in the next phase of our lab.

Configure Failover and Failback Settings

  1. In the Failover Cluster Manager, click Roles, right-click GeneralFS, and then click Properties.

  2. Click the Failover tab to configure the number of times the Cluster service should attempt to restart or fail over a service or application in a given period, and specify values under Failover. By default, a maximum of one failure is allowed in a 6-hour period.

  3. Click the General tab. Select both ServerA1 and ServerA2 as preferred owners. Notice that you can move the nodes up or down to indicate your level of preference.

  4. On the Failover tab, click Allow failback. Click Failback between, and set the values to 17 and 7 to allow failback to occur only between 5:00 PM and 7:00 AM, and then click OK. Keep in mind that you must configure at least one preferred owner if you want failback to take place.
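These settings map to properties of the cluster group, so they can also be configured from PowerShell. A sketch using the values from the lab:

    # Preferred owners, in order of preference
    Set-ClusterOwnerNode -Group GeneralFS -Owners ServerA1, ServerA2

    $group = Get-ClusterGroup -Name GeneralFS

    # Maximum of 1 failure in a 6-hour period
    $group.FailoverThreshold = 1
    $group.FailoverPeriod    = 6

    # Allow failback (1 = allow, 0 = prevent) only between 17:00 and 07:00
    $group.AutoFailbackType    = 1
    $group.FailbackWindowStart = 17
    $group.FailbackWindowEnd   = 7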

Validate the Deployment of the Highly Available File Server

To validate the clustered configuration, let’s access the file share to create data and then make the node that owns the clustered file server role unavailable.

  1. From a client computer in the network, open File Explorer, and in the Address bar, type \\GeneralFS and press Enter.

  2. Verify that you can access the Reports folder.

  3. Create a text document inside the Reports folder.

  4. On ServerA1, open the Failover Cluster Manager. Expand ClusterA.abc.com and then click Roles. Note that the current owner of GeneralFS is ServerA2.

  5. Click Nodes, right-click ServerA2, click More Actions, and then click Stop Cluster Service.

  6. Click Roles to confirm that GeneralFS failed over to ServerA1.

  7. Switch to the network client computer and verify that you can still access \\GeneralFS\ and the Reports folder data.
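The same test can be scripted. A short sketch that stops the cluster service on the owning node and confirms the failover:

    # Check which node currently owns GeneralFS
    Get-ClusterGroup -Name GeneralFS

    # Stop the cluster service on the owning node to force a failover
    Stop-ClusterNode -Name ServerA2

    # Confirm that the role moved, then rejoin the node to the cluster
    Get-ClusterGroup -Name GeneralFS
    Start-ClusterNode -Name ServerA2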

Cluster Shared Volume (CSV)

In a traditional Windows failover cluster implementation, multiple nodes cannot access a LUN or a volume on the shared storage simultaneously. CSV enables multiple nodes to share a single LUN at the same time: each node gains exclusive access to individual files on the LUN instead of to the whole LUN. CSV is a distributed file access solution that allows multiple nodes in the cluster to simultaneously access the same file system on a disk. Windows Server 2012 supports only NTFS on CSVs, but Windows Server 2012 R2 added support for the Resilient File System (ReFS). CSVs can only be configured within a failover cluster, after the disks from the shared storage have been added to the cluster. The following steps show how to create a cluster shared volume.

  1. From Failover Cluster Manager, select Disks, right-click Cluster Disk 1, and select Add to Cluster Shared Volumes.

  2. Verify that Cluster Disk 1 is now assigned to Cluster Shared Volume.
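The equivalent PowerShell, for reference:

    # Convert Cluster Disk 1 into a cluster shared volume
    Add-ClusterSharedVolume -Name "Cluster Disk 1"

    # CSVs appear under C:\ClusterStorage on every node
    Get-ClusterSharedVolume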

Once the CSV has been created, it can be used to store the highly available virtual machines that will be hosted on the Hyper-V cluster.

Hyper-V Clustering

Hyper-V clustering requires that the cluster nodes be physical computers; this is known as host clustering. In other words, it is not possible to create a Hyper-V cluster using virtual machines as cluster nodes, an approach referred to as guest clustering.

Implementing host clustering for Hyper-V allows you to configure virtual machines as highly available resources. In this case, failover protection is set at the host-server level, so the guest operating systems and applications running within the virtual machines do not have to be cluster-aware. Nevertheless, the virtual machines are still highly available.

Configuring a Highly Available Virtual Machine

For our lab, the Hyper-V role has already been installed on ServerA1 and ServerA2. For details on installing and configuring the Hyper-V role, see this article.

  1. In the Failover Cluster Manager console, right-click Roles, click Virtual Machines, and then select New Virtual Machine.

  2. Select ServerA1 as the cluster node, and click OK.

  3. In the New Virtual Machine Wizard, click Next.

  4. On the Specify Name and Location page, type Score1 for the Name, click Store the virtual machine in a different location, and then click Browse.

  5. Browse to and select C:\ClusterStorage\Volume1\ and then click Select Folder.

  6. On the Specify Generation page, click Next.

  7. On the Assign Memory page, type 2048, make sure that Use Dynamic Memory for this virtual machine is selected, and then click Next. For details on Hyper-V memory management, see this article.

  8. On the Configure Networking page, click External, and then click Next.

  9. On the Connect Virtual Hard Disk page, leave the default settings and click Next.

  10. On the Installation Options page, select Install an operating system from a bootable CD/DVD-ROM, click Image file (.iso), click Browse, and select a Windows Server 2012 R2 ISO file. Click Next.

  11. On the Completing the New Virtual Machine Wizard page, click Finish.

  12. On the Summary page, confirm that high availability was successfully configured for the role. Click Finish.

  13. In the Failover Cluster Manager console, click Roles, right-click Score1 and click Start.

  14. In the Failover Cluster Manager console, click Roles, right-click Score1 and click Connect to complete the guest operating system installation.

  15. Once the installation completes, verify that you can access the Score1 virtual machine.
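A highly available virtual machine can also be created and clustered from PowerShell. This sketch mirrors the wizard's choices but omits the virtual switch connection and ISO attachment covered in the steps above; run it on ServerA1:

    # Create the VM with its files stored on the CSV
    New-VM -Name Score1 -MemoryStartupBytes 2048MB -Path C:\ClusterStorage\Volume1

    # Enable Dynamic Memory, as selected in the wizard
    Set-VMMemory -VMName Score1 -DynamicMemoryEnabled $true

    # Make the VM highly available by adding it as a cluster role
    Add-ClusterVirtualMachineRole -VMName Score1

    # Start the VM through the cluster
    Start-ClusterGroup -Name Score1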

Perform a Live Migration for the Virtual Machine

  1. From a client computer in the network, send a continuous ping to Score1 by typing the following from a command prompt:

    ping -t Score1

  2. In the Failover Cluster Manager, expand ClusterA.abc.com, and click Roles. Then right-click Score1, select Move, select Live Migration, and then click Select Node.

  3. Click ServerA2 and click OK.

  4. Return to the client computer to monitor the pings to Score1. In our lab, only one packet was lost, and the ping continued as the Score1 virtual machine was live migrated to ServerA2.

  5. In the Failover Cluster Manager, click Roles to confirm that ServerA2 owns the Score1 virtual machine now.
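The live migration can also be triggered from PowerShell with a single cmdlet:

    # Live migrate Score1 to ServerA2 without stopping the virtual machine
    Move-ClusterVirtualMachineRole -Name Score1 -Node ServerA2 -MigrationType Live

    # Confirm the new owner node
    Get-ClusterGroup -Name Score1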

Closing Remarks

High availability is one of the top priorities in many data centers and IT departments. Windows Server 2012 R2 provides a robust clustering solution that can be used with many applications and services. File servers and Hyper-V hosts are among the most common implementations of Windows failover clustering.

This article provides a hands-on approach to the deployment and configuration of highly available file servers and host clustering with Hyper-V. In both scenarios, the aim is the same: minimize single points of failure and detect failures as they happen.