Tag Archives: Storage Policy

vSAN 7.0U1 – A New Way to Add Capacity

As we all know, there are a number of ways of scaling capacity in a vSAN environment: you can add disks to existing hosts and scale the storage independently of compute, or you can add nodes to the cluster and scale both storage and compute together. But what if you do not have any free disk slots available, and/or you are unable to add more nodes to the existing cluster? Well, vSAN 7.0U1 comes with a new feature called vSAN HCI Mesh. So what does this mean and how does it work?

Let’s take the scenario below: we have two vSAN clusters in the same vCenter. Cluster A is nearing capacity from a storage perspective, but its compute is relatively under-utilised, and there are no available disk slots to expand the storage. Cluster B, on the other hand, has a lot of free storage capacity but is more utilised on the compute side of things:

vSAN HCI Mesh allows you to consume storage on a remote vSAN cluster, provided it exists within the same vCenter inventory. There are no special hardware or software requirements (apart from 7.0U1), and the traffic leverages the existing vSAN network configuration.

This cool feature adds an elastic capability to vSAN clusters, which is especially useful if you need some additional temporary capacity for application refactoring or a service upgrade, where you want to deploy the new services but keep the old ones operational until the transition is made.

VMware has not left out the monitoring capabilities for this use case either: in the UI you can monitor the usage of “Remote VMs” from a capacity perspective as well as within the performance service.

So this clearly allows disaggregation of storage and compute in a vSAN environment and offers flexibility and elasticity of storage consumption. Are there any limitations?

  • A vSAN cluster can only mount up to 5 remote vSAN Datastores
  • The vSAN Cluster must be able to access the other vSAN cluster(s) via the vSAN Network
  • vSphere and vCenter must be running 7.0U1 or later
  • Enterprise and Enterprise Plus editions of vSAN
  • Enough hosts / configuration to support the storage policy; for example, if your remote cluster has only four hosts, you cannot use a policy which requires RAID 6 (a quick sketch of the host counts follows this list)
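
As a rough illustration of that last point, here is a minimal Python sketch of the standard vSAN placement rules; the helper name and policy labels are mine for illustration, not part of any VMware tooling.

```python
# Standard vSAN placement rules: RAID 1 needs 2n+1 hosts for FTT=n,
# RAID 5 (FTT=1 erasure coding) needs 4 hosts, RAID 6 (FTT=2) needs 6.
MIN_HOSTS = {
    "RAID 1 (FTT=1)": 3,
    "RAID 1 (FTT=2)": 5,
    "RAID 1 (FTT=3)": 7,
    "RAID 5 (FTT=1)": 4,
    "RAID 6 (FTT=2)": 6,
}

def usable_policies(remote_cluster_hosts: int) -> list[str]:
    """Return the storage policies a remote cluster of this size can satisfy."""
    return [p for p, n in MIN_HOSTS.items() if remote_cluster_hosts >= n]

# A four-host remote cluster can satisfy RAID 1 (FTT=1) and RAID 5, but not RAID 6
print(usable_policies(4))
```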

So this is a pretty cool feature, and it sort of eliminates the need for the storage-only vSAN nodes which were discussed in the past at many VMworlds.

QLC NVMe – could this signal the end of SAS and SATA?

I met with a team from Intel recently and discussed their new additions to the vSAN Compatibility Guide, mainly around their QLC NVMe drives. I have spoken to many customers about Full NVMe configurations on many occasions, and usually there was a slightly higher price to pay for such configurations. The QLC NVMe drives, however, could be a turning point for future-proofing your HCI platform, because they are cheaper than your SAS/SATA equivalent!

That being said, I have heard many times that the days of SATA/SAS based drives are numbered, and with these QLC NVMe drives the end could come much sooner rather than later.

Right now the 7.68TB D5-P4320 has been certified, and I have been informed by Intel that the 15.3TB model is currently going through certification; that is a game changer for delivering high amounts of capacity at a reasonable cost. If I took the 4-node Full NVMe cluster I have access to and replaced all the current NVMe devices with the 7.68TB QLC NVMe devices, I would have an effective usable capacity of 166TB, and double that with the 15.3TB drives. This is based on a RAID 5 storage policy only, and also takes into account the 10% difference between device capacity and actual capacity.
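
For those who want to check the arithmetic, here is a quick Python sketch of that capacity figure; the drive count per host (eight, as used in the endurance maths later) reflects my cluster's layout, and the 10% formatting overhead and RAID 5's 3+1 efficiency are the assumptions stated above.

```python
HOSTS = 4
DRIVES_PER_HOST = 8      # capacity devices per host in my cluster
DEVICE_TB = 7.68         # Intel D5-P4320 capacity

raw_tb = HOSTS * DRIVES_PER_HOST * DEVICE_TB    # 245.76 TB raw
formatted_tb = raw_tb * 0.90                    # ~10% device vs actual capacity
usable_tb = formatted_tb * 3 / 4                # RAID 5 is 3+1, 75% efficient

print(f"usable: {usable_tb:.0f} TB")                             # -> 166 TB
print(f"with 15.3TB drives: {usable_tb * 15.3 / 7.68:.0f} TB")   # roughly double
```

With that maths checked, let's take a closer look at these new QLC NVMe drives from Intel: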

From the ARK portal we can determine the following information:

  • Format: U.2 2.5in
  • Sequential Read (up to): 3200 MB/s
  • Sequential Write (up to): 1000 MB/s
  • Random Read (100% span): 427,000 IOPS
  • Random Write (100% span): 36,000 IOPS
  • Latency – Read: 138 µs
  • Latency – Write: 30 µs
  • Interface: PCIe NVMe 3.1 x4

Now, if you remember my blog around Full NVMe performance, combining Intel Optane with their NVMe drives delivers far superior performance characteristics versus traditional SAS/SATA. In addition, these new QLC NVMe drives also reduce the cost of capacity, but just how much of a difference is it?

So I checked out the prices here in the UK, from the same supplier; here's the link to the NVMe QLC device and here's the link to a SAS equivalent.

For the benefit of this exercise I compared the lowest-cost SAS 12G 7.68TB drive on the vSAN Compatibility Guide, since Intel do not manufacture SAS-based SSDs and vendors seem to favour SAS-based SSDs over SATA.

Prices correct as of 11th August 2019:

                         Samsung 7.68TB SAS 12G    Intel P4320 7.68TB QLC NVMe
  Capacity               7.68TB                    7.68TB
  Interface              SAS                       NVMe
  Total Cost of Drive    £3093.60                  £1609.20
  Cost per GB            £0.40                     £0.20
  DWPD                   1                         0.2

As you can clearly see, the cost per GB is significantly lower at £0.20 per GB (this falls to around £0.18 per GB on the larger 15.3TB device). There is one thing to note, however: the DWPD of the QLC NVMe device is much lower than that of the SAS device, but in a vSAN environment should this matter too much? The simple answer here is no. If we look at the maths: with 8 of the QLC devices in each host of my 4-node cluster I have a usable capacity of 166TB, and at 0.2 DWPD I would have to be writing 33.2TB of data per day to hit that limit. So the lower DWPD in a vSAN environment is not significant unless you are constantly writing fresh data in excess of that figure; the calculation is sketched below.
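
Here is a quick Python sketch of both calculations; the device prices are taken from the table above, and applying DWPD to usable capacity follows the simplification in this post rather than a strict per-device accounting.

```python
DEVICE_GB = 7680     # 7.68TB expressed in GB
USABLE_TB = 166      # 4 hosts x 8 drives x 7.68TB, RAID 5, ~10% overhead
DWPD = 0.2           # Intel D5-P4320 endurance rating

# Cost per GB (prices as of 11th August 2019; the table rounds these figures)
for name, price in [("Samsung SAS 12G", 3093.60), ("Intel P4320 QLC", 1609.20)]:
    print(f"{name}: £{price / DEVICE_GB:.2f}/GB")

# Daily write budget before exceeding 0.2 DWPD across the usable capacity.
# Note: vendors rate DWPD per raw device, so this is a simplification.
print(f"write budget: {USABLE_TB * DWPD:.1f} TB/day")   # -> 33.2 TB/day
```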

I am hoping that I can get some of these QLC NVMe drives from Intel to gather some performance data, in order to complete the write-up with some performance characteristics; based on my previous Full NVMe performance testing, I would not expect them to fall below those earlier results.