
VROC vs Traditional RAID Controller: Optimizing Boot Devices in VMware vSAN Ready Nodes

The world of data storage is constantly evolving, with technology advancements aiming to maximize performance and efficiency in data center operations. One of the recent innovations in storage technology is Intel’s Virtual RAID on CPU (VROC), which has gained traction among IT professionals and enthusiasts. This article will compare the use of VROC as a boot device to the more traditional RAID controller in a VMware vSAN Ready Node, highlighting the advantages and disadvantages of each approach.

VROC: The New Kid on the Block

Intel VROC is a hybrid software/hardware RAID solution that uses the CPU for RAID processing rather than a dedicated hardware RAID controller. VROC can be configured using NVMe SSDs (as well as SATA SSDs), offering high-performance storage with lower latency than traditional RAID controllers. Let’s dive into some of the advantages of using VROC as a boot device in a vSAN Ready Node; the trade-offs are summarised in the conclusion.

Advantages of VROC

Performance: VROC allows for better performance and reduced latency by eliminating the need for a dedicated RAID controller. This results in faster data processing and retrieval, which is crucial in virtualized environments.

Scalability: With VROC, you can easily expand your storage capacity by adding NVMe SSDs without the need for additional RAID controllers. This enables seamless growth of your vSAN Ready Node as your storage needs increase.

Cost Savings: VROC can reduce the cost of your vSAN Ready Node by eliminating the need for additional RAID controllers. Furthermore, as a software-based solution, VROC can leverage existing hardware resources, resulting in lower capital expenditures.

Traditional RAID Controller: Tried and Tested

A traditional RAID controller is a dedicated hardware component responsible for managing storage arrays and ensuring data redundancy. These controllers have been widely used in data centers for decades, providing a reliable and stable solution for storage management. Here are some of the advantages of using traditional RAID controllers as boot devices in vSAN Ready Nodes.

Advantages of Traditional RAID Controllers

Familiarity: Traditional RAID controllers are well-known and widely understood by IT administrators, making them a comfortable and familiar choice for managing storage in vSAN Ready Nodes.

Hardware Independence: Unlike VROC, which requires a supported Intel platform (and typically a hardware activation key), traditional RAID controllers do not tie you to a specific CPU vendor, allowing for more flexibility in hardware selection.

Conclusion

Choosing between VROC and traditional RAID controllers for boot devices in vSAN Ready Nodes ultimately depends on your organization’s priorities and requirements. VROC offers better performance, scalability, and cost savings but comes with vendor lock-in and increased complexity. On the other hand, traditional RAID controllers provide familiarity and hardware independence but may fall short in terms of performance and cost-efficiency.

It is essential to carefully evaluate the specific needs of your environment before deciding which solution is best suited for your vSAN Ready Node. By considering factors such as performance, scalability, cost, and ease of management, you can make an informed decision that will optimize your VMware vSAN Ready Node for long-term success. As storage technologies continue to evolve, staying abreast of new developments such as VROC can help ensure your organization remains agile and well-prepared to adapt to the ever-changing data center landscape.

vSAN 7.0U1 – A New Way to Add Capacity

As we all know, there are a number of ways of scaling capacity in a vSAN environment: you can add disks to existing hosts and scale the storage independently of compute, or you can add nodes to the cluster and scale both the storage and compute together. But what if you have no free disk slots available and/or you are unable to add more nodes to the existing cluster? Well, vSAN 7.0U1 comes with a new feature called vSAN HCI Mesh, so what does this mean and how does it work?

Let’s take the scenario below: we have two vSAN clusters in the same vCenter. Cluster A is nearing capacity from a storage perspective, but its compute is relatively under-utilised and there are no available disk slots to expand the storage. Cluster B, on the other hand, has a lot of free storage capacity but is more utilised on the compute side of things.

vSAN HCI Mesh allows you to consume storage on a remote vSAN cluster, provided it exists within the same vCenter inventory. There are no special hardware or software requirements (apart from 7.0U1), and the traffic leverages the existing vSAN network configuration.

This cool feature adds an elastic capability to vSAN clusters, especially if you need some temporary additional capacity, for example during application refactoring or a service upgrade where you want to deploy the new services but keep the old ones operational until the transition is made.

VMware has not left out the monitoring capabilities for this use case either: in the UI you can monitor “Remote VM” usage from a capacity perspective as well as within the performance service.

So this clearly allows disaggregation of storage and compute in a vSAN environment and offers flexibility and elasticity in storage consumption. Are there any limitations?

  • A vSAN cluster can only mount up to 5 remote vSAN Datastores
  • The vSAN cluster must be able to access the other vSAN cluster(s) via the vSAN network; a quick connectivity check is shown after this list
  • vSphere and vCenter must be running 7.0U1 or later
  • Enterprise and Enterprise Plus editions of vSAN
  • Enough hosts/configuration on the remote cluster to support the storage policy; for example, if your remote cluster has only four hosts, you cannot use a policy which requires RAID-6 (which needs six hosts)
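
To confirm the network requirement, you can test reachability over the vSAN VMkernel interface from the ESXi shell. A minimal check, assuming vmk1 is your vSAN VMkernel interface and 192.168.100.20 is the vSAN IP of a node in the remote cluster (both values are examples):

# vmk1 = local vSAN VMkernel interface; 192.168.100.20 = example remote vSAN IP
vmkping -I vmk1 192.168.100.20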

So this is a pretty cool feature, and it sort of eliminates the need for the storage-only vSAN nodes that were discussed at many VMworlds in the past.

Creating a vSAN Cluster without a vCenter Server

I have been asked many times about creating a 3-node vSAN cluster without a vCenter server. The main reason for doing this is that you need to place your vCenter server onto the vSAN datastore but have nowhere to host the vCenter server until you have done so. Many customers I have spoken to are not aware that they can do this from the command line very easily. In order to do this you must have installed ESXi 6.0 U2 and enabled SSH access to the hosts. There are a few steps:

  1. Configure the vSAN VMKernel Interface
  2. Create the vSAN Cluster
  3. Add the other nodes to the cluster
  4. Claim the disks

Step 1 – Create the VMKernel interface
In order for vSAN to function you need to create a VMKernel interface on each host. This requires other dependencies, namely a vSwitch and a port group, so this must be performed on all three hosts. Let’s do it in this order: first we create our vSwitch, and since vSwitch0 already exists for the management network, we’ll create vSwitch1:

esxcli network vswitch standard add -v vSwitch1

Once vSwitch1 is created, we need to add a physical uplink to the switch. To help identify which uplinks are available, run the following command:

esxcli network nic list

This returns details of all the physical network cards in the host, for example:

Name PCI Driver Link Speed Duplex MAC Address MTU Description
vmnic0 0000:01:00.0 ntg3 Up 1000Mbps Full 44:a8:42:29:fe:98 1500 Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet
vmnic1 0000:01:00.1 ntg3 Up 1000Mbps Full 44:a8:42:29:fe:99 1500 Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet
vmnic2 0000:02:00.0 ntg3 Down 0Mbps Half 44:a8:42:29:fe:9a 1500 Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet
vmnic3 0000:02:00.1 ntg3 Down 0Mbps Half 44:a8:42:29:fe:9b 1500 Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet
vmnic4 0000:82:00.0 ixgbe Up 10000Mbps Full a0:36:9f:78:94:cc 1500 Intel Corporation Ethernet Controller 10 Gigabit X540-AT2
vmnic5 0000:82:00.1 ixgbe Up 10000Mbps Full a0:36:9f:78:94:ce 1500 Intel Corporation Ethernet Controller 10 Gigabit X540-AT2
vmnic6 0000:04:00.0 ixgbe Up 10000Mbps Full a0:36:9f:78:94:c4 1500 Intel Corporation Ethernet Controller 10 Gigabit X540-AT2
vmnic7 0000:04:00.1 ixgbe Up 10000Mbps Full a0:36:9f:78:94:c6 1500 Intel Corporation Ethernet Controller 10 Gigabit X540-AT2

For my cluster I am going to add vmnic5 to vSwitch1, so I run the following command:

esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic5

Now that we have our uplink connected to vSwitch1, we need to configure a port group for vSAN. I am calling my port group “vSAN”:

esxcfg-vswitch -A vSAN vSwitch1
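
To confirm the uplink and port group are in place before moving on, you can list the vSwitch configuration:

esxcli network vswitch standard list -v vSwitch1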

Now we need to create our VMKernel interface with an IP address (192.168.100.1 for host 1) and subnet mask, and assign it to the “vSAN” port group:

esxcfg-vmknic -a -i 192.168.100.1 -n 255.255.255.0 -p vSAN

We validate our VMKernel Interface by running the following command:

[root@se-emea-vsan01:~] esxcfg-vmknic -l
Interface Port Group/DVPort/Opaque Network IP Family IP Address Netmask Broadcast MAC Address MTU TSO MSS Enabled Type NetStack
vmk0 Management Network IPv4 172.16.101.1 255.255.252.0 172.16.103.255 44:a8:42:29:fe:98 1500 65535 true STATIC defaultTcpipStack
vmk1 vSAN IPv4 192.168.100.1 255.255.255.0 192.168.100.255 00:50:56:6a:5d:06 1500 65535 true STATIC defaultTcpipStack

In order to tag the VMKernel interface for vSAN traffic, we need to run the following command:

esxcli vsan network ip add -i vmk1
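
You can verify that the interface is now tagged for vSAN traffic by listing the vSAN network configuration:

esxcli vsan network list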

Repeat the above steps on the two remaining hosts that will participate in the cluster, giving each host its own vSAN IP address; the full sequence for an additional host is shown below.
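
For example, on the second host the complete sequence would look like this (assuming vmnic5 is also the correct uplink on that host, and using 192.168.100.2 as its vSAN address; adjust both to suit your environment):

# run on host 2; adjust the uplink and IP address for your environment
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic5
esxcfg-vswitch -A vSAN vSwitch1
esxcfg-vmknic -a -i 192.168.100.2 -n 255.255.255.0 -p vSAN
esxcli vsan network ip add -i vmk1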

Step 2 – Creating the cluster
Once we have the VMKernel interfaces configured on all hosts, we need to create a vSAN cluster on the first host. To do this we run the following command:

esxcli vsan cluster new

Once completed, we can get our vSAN cluster UUID by running the following command:

[root@se-emea-vsan01:~] esxcli vsan cluster get
Cluster Information
 Enabled: true
 Current Local Time: 2016-11-21T15:17:57Z
 Local Node UUID: 582a29ea-cbfc-195e-f794-a0369f7894c4
 Local Node Type: NORMAL
 Local Node State: MASTER
 Local Node Health State: HEALTHY
 Sub-Cluster Master UUID: 582a2bba-0fd8-b45a-7460-a0369f749a0c
 Sub-Cluster Backup UUID: 582a29ea-cbfc-195e-f794-a0369f7894c4
 Sub-Cluster UUID: 52bca225-0520-fd68-46c4-5e7edca5dfbd
 Sub-Cluster Membership Entry Revision: 6
 Sub-Cluster Member Count: 1
 Sub-Cluster Member UUIDs: 582a29ea-cbfc-195e-f794-a0369f7894c4
 Sub-Cluster Membership UUID: d2dd2c58-da70-bbb9-9e1a-a0369f749a0c

Step 3 – Adding the other nodes to the cluster
From each of the remaining hosts, run the following command to add them to the newly created cluster, using the Sub-Cluster UUID we retrieved above:

esxcli vsan cluster join -u 52bca225-0520-fd68-46c4-5e7edca5dfbd

You can verify that the nodes have successfully joined the cluster by running the same command we ran earlier, noting that the Sub-Cluster Member Count has increased to 3 and that the other member UUIDs are now shown:

[root@se-emea-vsan01:~] esxcli vsan cluster get
Cluster Information
 Enabled: true
 Current Local Time: 2016-11-21T15:17:57Z
 Local Node UUID: 582a29ea-cbfc-195e-f794-a0369f7894c4
 Local Node Type: NORMAL
 Local Node State: MASTER
 Local Node Health State: HEALTHY
 Sub-Cluster Master UUID: 582a2bba-0fd8-b45a-7460-a0369f749a0c
 Sub-Cluster Backup UUID: 582a29ea-cbfc-195e-f794-a0369f7894c4
 Sub-Cluster UUID: 52bca225-0520-fd68-46c4-5e7edca5dfbd
 Sub-Cluster Membership Entry Revision: 6
 Sub-Cluster Member Count: 3
 Sub-Cluster Member UUIDs: 582a29ea-cbfc-195e-f794-a0369f7894c4, 582a2bf8-4e36-abbf-5318-a0369f7894d4, 582a2c3b-d104-b96d-d089-a0369f78946c
 Sub-Cluster Membership UUID: d2dd2c58-da70-bbb9-9e1a-a0369f749a0c

Step 4 – Claim Disks
Our cluster is now created, and we need to claim the disks in each node for use by vSAN. To do this we first need to identify which disks are to be used as the cache disk and which as capacity disks (and, of course, how many disk groups we want). To show the information for the disks in a host, run the following command:

esxcli storage core device list

This will produce output similar to the following, where we can identify the NAA ID for each device:

naa.500003965c8a48a4
 Display Name: TOSHIBA Serial Attached SCSI Disk (naa.500003965c8a48a4)
 Has Settable Display Name: true
 Size: 381554
 Device Type: Direct-Access
 Multipath Plugin: NMP
 Devfs Path: /vmfs/devices/disks/naa.500003965c8a48a4
 Vendor: TOSHIBA
 Model: PX02SMF040
 Revision: A3AF
 SCSI Level: 6
 Is Pseudo: false
 Status: on
 Is RDM Capable: true
 Is Local: true
 Is Removable: false
 Is SSD: true
 Is VVOL PE: false
 Is Offline: false
 Is Perennially Reserved: false
 Queue Full Sample Size: 0
 Queue Full Threshold: 0
 Thin Provisioning Status: yes
 Attached Filters:
 VAAI Status: unknown
 Other UIDs: vml.0200000000500003965c8a48a450583032534d
 Is Shared Clusterwide: false
 Is Local SAS Device: true
 Is SAS: true
 Is USB: false
 Is Boot USB Device: false
 Is Boot Device: false
 Device Max Queue Depth: 64
 No of outstanding IOs with competing worlds: 32
 Drive Type: physical
 RAID Level: NA
 Number of Physical Drives: 1
 Protection Enabled: false
 PI Activated: false
 PI Type: 0
 PI Protection Mask: NO PROTECTION
 Supported Guard Types: NO GUARD SUPPORT
 DIX Enabled: false
 DIX Guard Type: NO GUARD SUPPORT
 Emulated DIX/DIF Enabled: false
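
The full listing can run to many screens on a host with a lot of disks. Alternatively, the vdq utility available in the ESXi shell gives a much more compact view of which disks are eligible for vSAN:

vdq -q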

In my setup I want to create two disk groups per host, each consisting of four capacity devices plus a cache device, so to create one disk group I run the following command:

esxcli vsan storage add -s <naa for cache disk> -d <naa for capacity disk 1> -d <naa for capacity disk 2> -d <naa for capacity disk 3> -d <naa for capacity disk 4>
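
For example, using the SSD shown above as the cache device and four hypothetical capacity device IDs (substitute the NAA IDs from your own hosts):

# -s = cache SSD (from the output above); the -d capacity IDs below are placeholders
esxcli vsan storage add -s naa.500003965c8a48a4 -d naa.500003965c8a48a5 -d naa.500003965c8a48a6 -d naa.500003965c8a48a7 -d naa.500003965c8a48a8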

Once you have performed the above on each of your hosts, your vSAN cluster is deployed with storage. You can now deploy your vCenter appliance onto the vSAN datastore, where you can then manage your vSAN license and storage policies, switch on vSAN services such as iSCSI, the health service and the performance service, and start to deploy virtual machines.