VCP5 prep: Creating and Configuring VMFS and NFS Datastores – Part 1

Any virtual infrastructure needs storage for different types of files, mainly virtual machine files. Nowadays there are various storage technologies to choose from, such as FC, FCoE, iSCSI, and NFS. In this module, we will take a look at which types of storage can be presented to a VMware vSphere infrastructure so we can create VMs and set up cluster services, and at how to configure that storage in vSphere.

Supported Storage in a vSphere Environment

Let’s start by learning what kind of storage is supported by VMware vSphere 5.x:

  • FC—vSphere 5.x supports 4Gb, 8Gb, and 16Gb FC (end-to-end 16Gb support was added in version 5.5).
  • NFS—vSphere supports only NFS v3, so we must be sure any exports presented to an ESXi host use version 3 of NFS.
  • FCoE—vSphere 5.5 supports both software FCoE and hardware FCoE. “Software FCoE” relies on a network card that offers some I/O offloading capabilities and DCB (data center bridging), while the protocol stack and processing are handled by a software FCoE implementation inside the hypervisor (ESXi).
  • iSCSI—vSphere 5.x supports three different types of iSCSI initiators: software iSCSI, independent hardware iSCSI, and dependent hardware iSCSI.

vSphere also supports DAS (direct-attached storage), although it has some limitations: capabilities that rely on shared storage, such as some cluster services (vMotion, DRS, etc.), cannot be used with DAS alone.

iSCSI

iSCSI (Internet Small Computer System Interface) is an IP-based storage protocol that lets the client (in this case an ESXi host) communicate with the storage array using SCSI commands encapsulated in IP packets. The protocol runs over traditional Ethernet infrastructure, so there is no need for purpose-built infrastructure and cabling, as there is with FC.

There are several components of an iSCSI session. Let’s take a look:

  • Initiator—This is the “client” of an iSCSI session; it requests access to the storage presented by the array, using SCSI commands encapsulated in IP packets.
  • Target—This is the “server,” basically the storage array.
  • Target portal—This is a component of the “target” or storage array that has a TCP/IP network address and a port (typically 3260) defined. iSCSI initiators start sessions with the target using the network address and port of the target portal.

VMware vSphere supports the following types of iSCSI initiators (this is how the hypervisor connects to the target):

  • Independent hardware iSCSI—Physical adapters that offload network and iSCSI processing from the ESXi host. With this type of initiator, the network parameters are configured through the HBA’s BIOS and can be modified later through the vSphere web client or the C# client.
  • Dependent hardware iSCSI—This type of initiator depends on the ESXi host for its networking stack and initiator configuration. The physical adapter has offloading capabilities such as TOE, but it is not a dedicated iSCSI HBA. If we want to use this type of initiator, we need to create a VMkernel port in vSphere and link it to the network adapter.
  • Software iSCSI—This is a software-based implementation of an iSCSI initiator that resides as a component of the VMkernel. It also requires a VMkernel port linked to a network adapter. (A quick way to list the iSCSI adapters present on a host is sketched below.)
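
All three initiator types show up on the host as storage adapters, so a quick esxcli query tells us what a host already has. A minimal sketch (adapter names such as vmhba33 vary from host to host):

    # List all iSCSI adapters on the host (software and hardware);
    # the Description column indicates the initiator type
    esxcli iscsi adapter list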

There is a need to identify the nodes that are part of an iSCSI connection. There are four different components of the “identity” of a node in an iSCSI session:

  • IQN (iSCSI Qualified Name)—This identifier is constructed in this way:

iqn.1998-01.com.vmware:<host-name>-<random-string> (ESXi software initiator)

      • YYYY-MM—The first part after “iqn” is the year and month in which the company (the naming authority) registered its domain or sub-domain.
      • naming-authority—Reverse syntax of the internet domain name of the naming authority; in this case it is “com.vmware,” so we are talking about a VMware naming authority that maintains the domain or sub-domains being used for the addressing.
      • Unique name—For ESXi, this usually translates to the name of the host plus a random string of characters, although it is often convenient to change the unique name to something shorter so it can be managed more easily.
  • EUI (extended unique identifier)—Not commonly used by HBA vendors, this identifier is made up of 16 hexadecimal characters (64 bits). The first 24 bits are assigned by the IEEE and identify the company, while the remaining 40 bits are used as a unique ID (e.g., a serial number).
  • iSCSI alias—This is a more “friendly” way of identifying a node in an iSCSI session. Aliases are manually assigned; an example of an iSCSI alias is “ESXi 1.”
  • IP address—Both initiators and targets need an IP address in order to start a session. iSCSI is commonly kept within a single L2 domain, because routing the traffic across L3 boundaries is not supported by many vendors (including VMware). (A command-line sketch for viewing a host’s iSCSI identifiers follows this list.)
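
If we want to see the identifiers the software initiator is currently using, esxcli can print them per adapter. A minimal sketch, assuming the software iSCSI adapter is vmhba33 (a placeholder name):

    # Show the adapter's identity, including its IQN ("Name") and alias
    esxcli iscsi adapter get --adapter=vmhba33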

Once an iSCSI session is established, ESXi can use two different methods to discover what is being presented by the target (storage array):

  • Dynamic discovery or “send targets”—In this method, the initiator sends a “send targets” request to the storage array, and the storage answers with a list of targets available to that specific initiator.
  • Static discovery—With this method we manually add the information about the target, as shown in the image below (an esxcli sketch of both methods follows this list):
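
Both discovery methods can also be configured from the command line. A minimal sketch, assuming a software iSCSI adapter named vmhba33 and illustrative target address and IQN values:

    # Dynamic discovery: ask the array at this portal for its list of targets
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.100:3260

    # Static discovery: add a known target manually
    esxcli iscsi adapter discovery statictarget add --adapter=vmhba33 --address=192.168.1.100:3260 --name=iqn.2000-01.com.example:target1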

There is a way to implement basic authentication for iSCSI sessions in a VMware environment; it is accomplished through CHAP (Challenge-Handshake Authentication Protocol). CHAP works by defining a “secret,” or password, that initiators must send to the targets (or that both sides exchange, when bidirectional CHAP is used):
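
From the command line, CHAP can be sketched like this (the adapter name, user name, and secret are placeholders; the levels range from prohibited to required):

    # Require unidirectional CHAP on the software iSCSI adapter
    # (authname and secret are illustrative placeholders)
    esxcli iscsi adapter auth chap set --adapter=vmhba33 --direction=uni --level=required --authname=chapuser --secret=ExampleSecret1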

To learn more about the specific levels of CHAP take a look at VMware’s documentation center for vSphere 5.5:

Choosing CHAP Authentication Method

How to Configure Software iSCSI in VMware vSphere?

Now it is time to learn what is needed to set up iSCSI storage for VMware ESXi hosts; we’ll be taking a look at the required steps to accomplish this, using the vSphere web client. It is important to note that we can configure iSCSI using other methods, such as esxcli.

Step 1. Create a VMkernel port—This port is going to be used by the software implementation of iSCSI to access the network. To create the VMkernel port, we must perform the following steps:

Go to ESXi host > Manage > Networking > Virtual Switches and click on “Add host networking”:

Select “VMkernel Network Adapter” as the connection type and click “Next”:

Select the dvPortgroup (distributed vSwitch port group) or the standard vSwitch where the VMkernel port is going to be created (in this case we are going to use a standard vSwitch, vSwitch1) and click “Next”:

Next we must give the VMkernel port a network label, define a VLAN, and choose whether the port will use TCP/IP v4, v6, or both; we must not select any kind of service for this port:

In the last step, we must configure the IP settings for this VMkernel port:

For multipathing purposes in iSCSI, we must be sure that only one vmnic (Ethernet port/adapter) is actively assigned to this VMkernel portgroup and that the other vmnics present on the vSwitch are set as “unused.”
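
The whole step can also be sketched with esxcli, assuming a standard vSwitch named vSwitch1 and hypothetical names for the port group (iSCSI-1), VMkernel interface (vmk1), uplink (vmnic2), and IP settings:

    # Create a port group for iSCSI traffic on the existing standard vSwitch
    esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-1 --vswitch-name=vSwitch1

    # Create the VMkernel interface on that port group with a static IPv4 address
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-1
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.1.10 --netmask=255.255.255.0 --type=static

    # Override the teaming policy so only one uplink is active;
    # uplinks not listed as active or standby are treated as unused
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-1 --active-uplinks=vmnic2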

Step 2. Enable the software iSCSI adapter—In order to add the iSCSI software adapter, we must follow these steps:

Go to ESXi host > Manage > Storage > Storage Adapters, click on “Add new storage adapter” and select “Software iSCSI.” This will add a new vmhba as the software iSCSI adapter:

Once the adapter is added, we need to click on “Network port binding” and then click on “Add” so we can link the VMkernel portgroup that is going to be used for iSCSI communication:

Once the VMkernel port is linked to the iSCSI software adapter, we must configure the target parameters. For demonstration purposes, I will add a target using dynamic discovery; we do this by clicking on “Targets” and then “Add”:

You can see in the image that there is an “Authentication” button; this is where we configure CHAP security, in case it is needed. As a final step, we must perform a “rescan” of the software iSCSI adapter so we can discover the devices that are available to this ESXi host:
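
For reference, the same sequence (enable the adapter, bind the VMkernel port, add a target, rescan) looks roughly like this in esxcli; vmhba33, vmk1, and the target address are placeholders:

    # Enable the software iSCSI adapter (this creates a new vmhba)
    esxcli iscsi software set --enabled=true

    # Bind the iSCSI VMkernel interface to the software adapter
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1

    # Add a dynamic-discovery target, then rescan the adapter for new devices
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.100:3260
    esxcli storage core adapter rescan --adapter=vmhba33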

As a final word about the software iSCSI adapter, if we want to change the IQN (the unique name portion) we must follow these steps:

Go to ESXi host > Manage > Storage > Storage Adapters, select the software iSCSI adapter, click on the “Properties” tab, then click on “Edit”:
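
The IQN can also be changed from the command line; a minimal sketch, with a placeholder adapter name and a shortened IQN:

    # Set a shorter, friendlier IQN on the software iSCSI adapter
    esxcli iscsi adapter set --adapter=vmhba33 --name=iqn.1998-01.com.vmware:esxi01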

Fibre Channel (FC)

Fibre Channel is one of the most widely adopted storage protocols out there. It uses optical cabling to transmit the information, and it is a lossless protocol with very little latency (this depends on the type of fabric architecture). Currently the most common implementations of FC are 4 and 8 Gbps, but newer hardware supports up to 16Gbps. Dedicated hardware is needed to create a fabric (FC network): FC directors, switches, HBAs, etc. FC is a block-based protocol that can carry different kinds of upper-layer protocols, such as SCSI, for data transmission.

The FC protocol is controlled by the T11 committee. You can take a look at the current drafts in this link:

T11 Committee FC drafts

With the release of vSphere 5.5, VMware added end-to-end support of 16Gbps FC, so we can configure a full duplex fabric.

FC Components

There are three major components in a FC fabric: nodes, cabling, and interconnecting devices:

  • Nodes—These are basically the HBAs of the hosts, the storage array ports/processors, and other devices, such as tape libraries.
  • Cabling—Physical fibre-optic cables that connect the different components in a fabric; there are different types, like single-mode and multi-mode fibre cabling. The difference between them is beyond the scope of this article.
  • Interconnecting devices—Here we can find switches, hubs (at the time of writing, this type of technology is rarely used), and directors (high-end switches).

FC Operating Modes (Connectivity)

There are three ways to interconnect the different components of a FC fabric:

  • Point-to-point—In this type of connection, two components are directly connected; clearly this provides limited connectivity. An example of this type of connection is DAS (direct-attached storage).
  • Switched fabric—In this type of connectivity, a switch or network of switches provides connectivity to many nodes, and these nodes can communicate with each other without disrupting other sessions. This is the most common way of interconnecting devices in a fabric.
  • Arbitrated loop—In this type of connectivity, a hub provides connectivity to the nodes, like a token ring network. Devices must contend with each other to perform I/O operations.

Switched fabric connectivity is the most convenient way to interconnect multiple devices, which is why it is the most common way to build FC fabrics.

Types of Ports in a Switched Fabric

There are different kinds of ports in a switched fabric. Let’s take a look at the most relevant from a vSphere perspective:

  • N_Port—This type of port is also known as a node port. N_Ports are end points in a fabric (HBAs, storage ports, etc.) that are connected to a switch.
  • F_Port—Also known as a fabric port, this is where the N_Port is connected (a switch port).

It’s important to note that these are not the only types of ports in a switched fabric, but they are the most relevant from a vSphere perspective. FC uses other kinds of ports, such as E_Ports and G_Ports, but they are beyond the scope of this article.

FC Addressing and Identifiers

FC uses three different “identifiers” to establish the identity of a node or port. The first is the FC address, which is assigned dynamically when a node logs into the fabric; this type of address is not very relevant from a vSphere perspective. Let’s take a look at the other two, the WWN (world wide name) identifiers:

  • WWPN (world wide port name)—This type of identifier is physically assigned to a port, e.g., an HBA port. It cannot be regenerated, as it identifies a specific port of a node.
  • WWNN (world wide node name)—This type of WWN is assigned to a node in the fabric and identifies the node itself, e.g., an EMC SAN.

WWNs are controlled by the IEEE, and any company that wants to generate WWNs must register in order to get a company ID, also known as an organizationally unique identifier (OUI); as an example, EMC has the following six hex digits assigned: 006048. An example of a WWN would be 50060482B82F9654.

How to Access FC Storage?

In order to access FC storage, we must have FC HBAs on the ESXi hosts. These HBAs (fabric nodes) must be part of a “zone” that is allowed to access the LUNs presented by the storage processors of the storage array.
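
The storage team will typically ask for the WWNs of the host’s HBAs in order to build the zones. We can read them from the host itself; in the adapter list, FC HBAs show a UID of the form fc.<WWNN>:<WWPN>. A minimal sketch:

    # List all storage adapters; for FC HBAs the UID contains the WWNN and WWPN
    esxcli storage core adapter list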


Zoning lets us create logical segments of nodes, ensuring that only the nodes that are part of the same zone can communicate with each other. Let’s take a look at the different types of zoning:

  • WWN Zoning—With this type of zoning, the only members of the zone are the WWNs of the HBAs and the storage processors. This is the most flexible way to create zones, because there are no physical components in the zone (like ports), only logical identifiers (WWNs).
  • Port Zoning—This type of zoning uses the physical ports of a switch, so a node gets access to different zones depending on which switch port it is connected to and whether it is allowed on that specific port.
  • Mixed Zoning—This is a combination of port and WWN zoning; with this type of zoning, we tie a port to a node.

Each type of zoning has its benefits, but we will not take a deeper look right now as it is not within the scope of the article.

There is another process, called masking, that is performed at the array level and lets us “hide” a specific set of LUNs from nodes.

Once we have the correct zoning in our fabric, the HBAs of our ESXi hosts should be able to access the storage presented by the storage array.

Stay tuned for the second part of this article, where I’m going to talk about FCoE and NFS in a vSphere environment. We are also going to talk about the VMFS filesystem.


Agustín Malanco

Agustín Malanco (@agmalanco) - VCP-DCV 3/4/5, VCAP4/5-DCA, VCAP4/5-DCD, VCAP5-CID/CIA, VCAP5-DTD, VCP4/5-DT, VCP-Cloud. He has been working with VMware’s technologies for almost 7 years and has worked in various fields, from consulting and support to training. Agustín has his own blog (blog.hispavirt.com) and has been recognized as a vExpert in 2011, 2012, and 2013 for his contributions to the community.