Let’s continue with the storage protocols supported by vSphere 5.5. In my previous article, we talked about FC and iSCSI, both block-based protocols; in this article I’m going to talk about FCoE and NFS, the latter being the only file-based protocol supported by vSphere 5.5.
Fibre Channel over Ethernet (FCoE)
FCoE is a protocol that consolidates both worlds, Ethernet and FC. FCoE uses Converged Enhanced Ethernet (CEE) to send FC frames across an Ethernet network. CEE operates at 10 Gbps.
CEE provides a reliable Ethernet network to carry these frames. It has ways to control the priority of packets and eliminate packet loss (lossless Ethernet), and it also has other mechanisms that make it possible to carry FC frames on copper and optical cabling.
On a traditional FCoE network, there are also (FCoE) switches that have both Ethernet and FC capabilities; they receive the Ethernet frames and bridge them to the FC ports. In order to get access, hosts must have hardware CNAs (converged network adapters) or a software implementation of FCoE. VMware ESXi supports both implementations, software and hardware. In the software case, the only requirement is a network adapter with FCoE offloading capabilities. The FCoE stack is handled by ESXi, much like the software iSCSI adapter that ESXi also provides.
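As a quick sketch of what the software FCoE setup looks like from the ESXi shell (the NIC name vmnic2 is an assumption; use a NIC that actually has FCoE offload capabilities):

```shell
# List the NICs on this host that are capable of software FCoE
esxcli fcoe nic list

# Activate software FCoE on a capable NIC (vmnic2 is an example name)
esxcli fcoe nic discover -n vmnic2
```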
If you want to take a deeper look at FCoE and vSphere, there is an excellent blog post by Cormac Hogan, who is a senior technical marketing architect at VMware:
Network File System (NFS)
NFS is the only file-based storage protocol supported by VMware vSphere, and specifically only version 3 of the protocol is supported (at the time of writing of this article).
NFS is a NAS (network attached storage) implementation that lets clients and users access files served directly by a host/array from many different operating systems. NFS is one of the most common storage protocols out there because it works over Ethernet networks: clients and servers communicate over a TCP/IP session using the RPC protocol.
Clients mount a directory of the remote server on a local directory or mount point; by doing this they can read and write files as if they were on a local device. This is different from a block storage protocol because the clients do not need to create a filesystem in order to get access to the space, since the filesystem is controlled by the host or array presenting the NFS export (shared directory).
NFS does not require special infrastructure; it can use 1Gbps Ethernet, 10Gbps, etc., because all the communication is done over TCP/IP.
One of the downsides of NFS v3 is the fact that there is no load balancing between paths on the same subnet. The only approach for load balancing of NFS v3 in vSphere is to create different vmkernel interfaces and connect them to different exports.
ESXi accesses NFS exports through vmkernel ports, so we need to create a vmkernel port as we did with software iSCSI (the same steps apply). This vmkernel port must have a valid IP address on the same subnet as the host/array presenting the NFS export.
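For reference, the vmkernel port can also be created from the ESXi shell. This is a sketch: it assumes a port group named "NFS" already exists on a vSwitch, and the interface name (vmk2) and IP address are examples for this lab:

```shell
# Create a new vmkernel interface on an existing port group
esxcli network ip interface add -i vmk2 -p NFS

# Give it a static IP on the same subnet as the NFS host/array
esxcli network ip interface ipv4 set -i vmk2 -I 192.168.2.50 -N 255.255.255.0 -t static
```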
FC, FCoE, and iSCSI are block-based storage protocols so, before the space can be used, the hypervisor must format the presented device with a file system it controls. VMware has its own clustered filesystem, named VMFS.
VMFS is a “clustered” filesystem, so it can be accessed by many ESXi hosts at once, each working independently on different VMs’ files. The locking mechanism works at the file level, so more than one ESXi host can be actively working in a VMFS volume without corrupting the information. VMFS has a limit of 32 concurrent hosts accessing the same file system.
There are two locking mechanisms used when the file system metadata needs to be updated or modified: SCSI-2 reservations (at the LUN level) and ATS (Atomic Test and Set) locking (at the disk-sector level). These metadata updates are triggered by operations such as the creation of a VM (files are created on the file system), snapshots (once again there is file creation), etc. SCSI reservations lock the entire storage device so the metadata update can occur; this approach is used by storage arrays that don’t support hardware acceleration through VAAI (vSphere APIs for Array Integration). Many SCSI reservations on a given device can cause performance degradation. ATS works by acquiring a lock on a specific sector of the disk so, when this operation is successful, an ESXi host can modify the VMFS metadata without having to lock the whole device.
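You can check whether a given device supports the hardware-accelerated (ATS) locking primitive by querying its VAAI status from the ESXi shell. This is a sketch; the naa.* device identifier below is a placeholder, so substitute the ID of your own device (visible under Storage Devices or via `esxcli storage core device list`):

```shell
# Show VAAI primitive support (including ATS status) for a specific device
# (the naa.* identifier is a placeholder -- use your own device ID)
esxcli storage core device vaai status get -d naa.60000000000000000000000000000001
```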
VMs are stored on a VMFS filesystem as files, and every VM has its own folder, as we can see in the image below:
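For illustration, here is what a typical VM folder looks like from the ESXi shell (the datastore and VM names are hypothetical, and the exact set of files varies with the VM’s state):

```shell
# Browse a VM's folder on a VMFS datastore (names are examples)
ls /vmfs/volumes/datastore1/myvm/
# Typical contents:
#   myvm.vmx          - VM configuration file
#   myvm.vmdk         - virtual disk descriptor
#   myvm-flat.vmdk    - virtual disk data
#   myvm.nvram        - BIOS/EFI settings
#   vmware.log        - VM log file
```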
VMFS provides VMs with SCSI access to the underlying storage. This has some benefits, such as thin provisioning of VM hard disks at the VMFS layer, VM encapsulation (which makes it possible to migrate or copy a VM’s files to a different storage technology, e.g., from FC to iSCSI), easier backup, DR, etc.
Creating a VMFS Datastore
There are some steps we must follow in order to be able to consume the space once we have storage presented to our hosts. Let’s start with how to create a datastore. For this example we are going to be working in a lab environment with software iSCSI; let’s remember that iSCSI is a block-based storage protocol so, in order to be able to use it, we must create a VMFS file system on top of the target (LUN).
Once we have the target configured on the dynamic discovery tab (Send Targets) of our ESXi host’s software iSCSI adapter, we need to perform a rescan to make sure we can see all the LUNs presented to our ESXi host. In this case, for simplicity I will rescan the whole cluster, so I right-click the cluster and perform a “Rescan Storage.”
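The same two steps can be sketched from the ESXi shell; the adapter name vmhba33 and the target address are examples for this lab:

```shell
# Add a dynamic discovery (Send Targets) address to the software iSCSI adapter
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.2.127:3260

# Rescan all adapters so newly presented LUNs become visible
esxcli storage core adapter rescan --all
```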
A new window will pop up, asking us what we want to rescan, either VMFS volumes or storage devices. Let’s select both:
Once the rescan operation is completed, we must go to one of our ESXi hosts that is going to access the LUNs presented via iSCSI and make sure that the LUN(s) are visible. Click on the ESXi host > Manage > Storage > Storage Adapters, then select the software iSCSI adapter and click on the “Devices” tab:
As you can see in the image above, there is a LUN presented to this ESXi host (actually I’ve presented this LUN to the whole cluster), so we can start with the process of creating a VMFS datastore. Let’s right-click on our host and click on “New Datastore”:
This will open the wizard to create a new datastore or mount an NFS export. Go ahead and select VMFS:
Next we must give this new VMFS volume a name and select the LUN that is going to be formatted. In this case the LUN is presented by a Nexenta appliance:
In the next step, we can select between VMFS 3 and 5, 5 being the latest version, so go ahead and select VMFS 5:
The last step before reviewing our VMFS configuration is to choose how much of the LUN’s space will be used. Since the best practice is considered to be one VMFS datastore per LUN, select the entire space, or just a portion if you are planning to grow the datastore in the future from the remaining space:
The last step is to review the configuration and click on “Finish”:
This will create a new datastore; we can verify the creation of the datastore by going to Home > Storage:
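The new datastore can also be verified from the ESXi shell:

```shell
# List all mounted file systems (VMFS and NFS) with their mount points and capacity
esxcli storage filesystem list

# Show which device/partition backs each VMFS datastore
esxcli storage vmfs extent list
```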
Mounting an NFS Export
Mounting an NFS export is fairly easy. We must follow the same wizard for the creation of a VMFS datastore but select NFS as the type.
Before starting with the mounting of the NFS export, make sure you have the vmkernel port created for NFS communication; this vmkernel port has to be on the same subnet as the host/array.
Go ahead and click on “New datastore” and select NFS:
In the next step, we must fill in the information about the host or array that is providing the NFS share. Fill in the IP and mount point; as you can see, the storage appliance that is providing storage to my environment has the IP 192.168.2.127, and the mount point is /volumes/nfs/nfs. Ask your storage team for the mount point of the share you are about to mount:
Click on “Next” and review the configuration. Once the NFS export is mounted, we can verify it’s working in the storage section of our vSphere Web Client:
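The same mount can be sketched with esxcli; the IP and export path match the example above, while the datastore name nfs-ds is an assumption:

```shell
# Mount the NFS export on this host (the volume name is an example)
esxcli storage nfs add -H 192.168.2.127 -s /volumes/nfs/nfs -v nfs-ds

# Verify the mount
esxcli storage nfs list
```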
I hope this article was useful. Stay tuned for my next articles to prepare for the VCP5-DCV exam.