Why vSAN is the storage of choice for VMware Cloud Foundation

Recently VMware announced that Cloud Foundation would support external storage connectivity. When I first heard this, I thought to myself: are VMware out of their minds? I have discussed this with the customers I meet and talk to more or less daily, and with many more at VMworld, but the stance and direction from VMware has not changed: if you want a truly Software Defined Data Center then vSAN is the storage of choice.

So why have VMware decided to allow external storage in Cloud Foundation?

Many customers who are about to start, or have already started, their journey to a Software Defined Data Center and/or Hybrid Cloud still have existing assets they wish to continue to sweat out. Their traditional storage may only be a couple of years old, and VMware are not expecting customers to simply throw out infrastructure that was recently purchased or is still being depreciated; that would be a tall ask, right?

Customers also still have workloads that can’t simply be migrated to the full SDDC. The migration takes time to plan, the workload may need to be re-architected, or there may be a specific requirement that means a traditional storage array has to be used until those obstacles have been overcome. VMware recognises this, hence the support for external storage in Cloud Foundation.

There are also specific use cases where Hyper-Converged powered by vSAN isn’t an ideal fit, cold storage or archive storage being one of them, so supporting an array that can provide a supplemental storage architecture to meet these requirements is also a plus point.

Will I lose any functionality by leveraging traditional storage with Cloud Foundation? The simple answer is yes, for the following reasons:

  • No lifecycle management for external storage through SDDC Manager, which means patching and updating the storage is still a manual task.
  • No single pane of glass for managing external storage out of the box without installing third-party plugins, which in my experience have a tendency to break other things.
  • No automation of stretched cluster deployment on external storage; all the replication etc. has to be configured manually.
  • No day-2 operations such as capacity reporting, health checks and performance monitoring without installing third-party software for the external storage.
  • No automatic deployment of the storage during a Workload Domain deployment; all the external storage has to be prepared beforehand.
  • Loss of the true “Software Defined Storage” aspect and granular object control; external storage support for vVols is not there right now either.

Also remember that with Cloud Foundation, the Management Workload Domain has to run on vSAN; you cannot use external storage for this.

So if you want a truly Software Defined Data Center with fully automated deployment and all the nice built-in features for day-2 operations, then vSAN is the first choice of storage. If you have existing storage you wish to sweat out whilst you migrate to a full SDDC, that’s supported too, and yes, a vSAN cluster can co-exist with external storage, which makes migrations so much easier.

The other way to look at this is as part of a Hybrid Cloud or Multi-Cloud strategy. Cloud Foundation is all about providing consistent infrastructure between your private and public clouds, so if your public cloud is running Cloud Foundation with vSAN, the logical choice is to use vSAN as the storage for your Cloud Foundation private cloud, giving your workloads that consistent infrastructure no matter where they run.

QLC NVMe – could this signal the end of SAS and SATA?

I met with a team from Intel recently and discussed their new additions to the vSAN Compatibility Guide, mainly around their QLC NVMe drives. I have spoken to many customers about Full NVMe configurations on many occasions, and usually there was a slightly higher price to pay for such configurations, but the QLC NVMe drives could be a turning point for future-proofing your HCI platform because they are cheaper than their SAS/SATA equivalents!

That being said, I have heard many times that the days of SATA/SAS-based drives are numbered, and with these QLC NVMe drives that day could come much sooner rather than later.

Right now the 7.68TB D5-P4320 has been certified, and I have been informed by Intel that the 15.3TB model is currently going through certification; that’s a game changer for delivering large amounts of capacity at a reasonable cost. If I took the 4-node Full NVMe cluster I have access to and replaced all the current NVMe devices with the 7.68TB QLC NVMe devices, I would have an effective usable capacity of 166TB, and double that with the 15.3TB drives. This is based on a RAID5 storage policy only and also takes into account the 10% difference between device capacity and actual capacity. So let’s take a look a bit more closely at these new QLC NVMe drives from Intel:

From the ARK portal we can determine the following information:

Format: U.2 2.5-inch
Sequential Read (up to): 3200 MB/s
Sequential Write (up to): 1000 MB/s
Random Read (100% span): 427,000 IOPS
Random Write (100% span): 36,000 IOPS
Latency – Read: 138 µs
Latency – Write: 30 µs
Interface: PCIe NVMe 3.1 x4
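The 166TB usable-capacity figure mentioned earlier can be sanity-checked with a rough sketch. This assumes 4 nodes with 8 capacity drives each, the ~1.33x space overhead of a RAID5 (FTT=1, erasure coding) policy, and the ~10% gap between device capacity and actual capacity noted above:

```python
# Rough usable-capacity estimate for a 4-node cluster with 8 capacity
# drives per host. Assumptions: RAID5 policy (1.33x space overhead)
# and ~10% difference between device and actual capacity.
nodes, drives_per_node = 4, 8

def usable_tb(device_tb):
    raw = nodes * drives_per_node * device_tb
    formatted = raw * 0.90          # ~10% device vs actual capacity
    return formatted / (4 / 3)      # RAID5: 1.33x capacity overhead

print(round(usable_tb(7.68)))   # ~166 TB with the 7.68TB D5-P4320
print(round(usable_tb(15.3)))   # roughly double with the 15.3TB model
```

This is only back-of-envelope arithmetic; it ignores slack space, swap objects and other vSAN overheads, so real-world usable capacity will be somewhat lower.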

Now if you remember my blog around Full NVMe performance, combining Intel Optane with Intel NVMe drives delivers far superior performance versus traditional SAS/SATA. In addition, these new QLC NVMe drives also reduce the cost of capacity, but just how much of a difference is it?

So I checked out the prices here in the UK, from the same supplier, here’s the link to the NVMe QLC Device and here’s the link to a SAS Equivalent.

For the benefit of this exercise I compared the lowest-cost SAS 12G 7.68TB drive on the vSAN Compatibility Guide, since Intel do not manufacture SAS-based SSDs and vendors seem to favour SAS-based SSDs over SATA.

Correct as of 11th August 2019:

                         Samsung 7.68TB SAS 12G    Intel P4320 7.68TB QLC NVMe
Capacity                 7.68TB                    7.68TB
Interface                SAS                       NVMe
Total Cost of Drive      £3093.60                  £1609.20
Cost per GB              £0.40                     £0.20
DWPD                     1                         0.2
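The per-GB figures fall straight out of the listed prices; a quick sketch, treating 7.68TB as 7680GB for simplicity:

```python
# Price-per-GB comparison from the two listed drive costs.
capacity_gb = 7.68 * 1000          # 7.68TB expressed as 7680GB
sas_cost, qlc_cost = 3093.60, 1609.20

sas_per_gb = sas_cost / capacity_gb
qlc_per_gb = qlc_cost / capacity_gb
print(round(sas_per_gb, 1), round(qlc_per_gb, 1))   # roughly 0.4 vs 0.2 £/GB
print(round(sas_cost / qlc_cost, 1))                # SAS costs ~1.9x more
```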

As you can clearly see, the cost per GB is significantly lower at £0.20 per GB (this falls to around £0.18 per GB on the larger 15.3TB device). There is one thing to note, however: the DWPD of the QLC NVMe device is much lower than that of the SAS device, but in a vSAN environment should this matter too much? The simple answer here is no. If we look at the maths: with 8 of the QLC devices in each host in my 4-node cluster, I have a usable capacity of 166TB, and at 0.2 DWPD I would have to be writing 33.2TB of data per day to hit that limit. So the lower DWPD in a vSAN environment is not significant unless you are constantly writing fresh data in excess of that figure.
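The endurance arithmetic above is simple enough to spell out (using the usable-capacity figure, as in the paragraph):

```python
# DWPD sanity check: at 0.2 drive writes per day across ~166TB of
# usable capacity, the daily write budget before exceeding the
# rated endurance is:
usable_tb = 166
dwpd = 0.2
daily_budget_tb = round(usable_tb * dwpd, 1)
print(daily_budget_tb)   # 33.2 TB of fresh writes per day
```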

I am hoping that I can get some of these QLC NVMe drives from Intel so I can gather performance data to complete the write-up and give some performance characteristics, but based on my previous Full NVMe performance testing I would not expect them to perform worse than those earlier tests.

Full NVMe or not Full NVMe, that is the question

As you have seen, my recent posts have been around Intel Optane and the performance gains that can be delivered by implementing the technology into a vSAN environment. I have been asked many times about what benefits a full NVMe solution would bring and what such a solution would look like, but before we go into that, let’s talk about NVMe, what exactly is NVMe?

Non-Volatile Memory Express (NVMe) is not a drive type but an interface and protocol that looks set to replace SAS/SATA. It encompasses a PCIe controller, and the whole purpose of NVMe is to exploit the parallelism that flash media provides, which in turn reduces I/O overhead and improves performance. SAS and SATA were designed for slower hard disks, where the delay between the CPU request and the data transfer was much higher; as SSDs become faster, the need for a faster protocol becomes evident, and this is where NVMe comes into play.

So in a vSAN environment, what does a full NVMe solution look like? Because vSAN is currently a two-tier architecture (cache and capacity), a full NVMe solution means both tiers have NVMe-capable drives. This can be done either with standard NVMe drives in both cache and capacity, or with a technology like Intel Optane NVMe as the cache and standard NVMe as capacity. So from an architecture perspective it is pretty straightforward, but how does performance compare? For this I persuaded my contacts at Intel to provide me with some Full NVMe kit in order to run some benchmark tests, and to provide a like-for-like comparison, I ran the same benchmarks on an Optane+SATA configuration.

Cluster Specification:
Number of Nodes: 4
Network: 2x 10gbit in LACP configuration
Disk groups per node: 2
Cache Tier both clusters: 2x Intel Optane 375GB P4800X PCIe Add In Card
Capacity Tier Optane/SATA: 8x 3.84TB SATA S4510 2.5″
Capacity Tier Full NVMe: 8x 2.0TB NVMe P4510 2.5″ U.2

Test Plan:
Block Size: 4K, 8K, 16K, 32K, 64K, 128K
I/O Pattern: Random
Read/Write Ratio: 0/100, 30/70, 70/30, 100/0
Number of VMs: 120
Number of VMDKs per VM: 1
Size of VMDK: 50GB
Storage Policy: FTT=1, RAID1
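The plan above sweeps every block size against every read/write ratio; a quick enumeration of the resulting combinations (assuming a full cross product, i.e. one benchmark run per pairing):

```python
# Enumerate the benchmark matrix: every block size x every R/W mix.
from itertools import product

block_sizes = ["4K", "8K", "16K", "32K", "64K", "128K"]
rw_ratios = ["0/100", "30/70", "70/30", "100/0"]   # read/write %

tests = list(product(block_sizes, rw_ratios))
print(len(tests))            # 24 distinct benchmark runs
print(tests[0], tests[-1])   # ('4K', '0/100') ('128K', '100/0')
```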

Let’s look at the results:

[Performance charts, and the tables with the raw numbers, were shown here as images.]

So what is clear here is that Optane serves really well in the cache tier in both solutions; however, in the Full NVMe solution read performance is significantly improved as well. In the 128K, 100% read test the 2x 10G links were being pushed to their limits, but not only were we able to push up throughput and IOPS, we also drove down latency, in some cases reducing it by over 50%.

So why would you choose a full NVMe solution? The simple answer: if you have latency-sensitive applications, then clusters dedicated to those applications would be adequately served from an IOPS, throughput and latency perspective with Full NVMe.

Vendors have also recognised this. For example, Dell EMC have just launched their Intel Optane-powered Full NVMe vSAN Ready Node, based on the R740xd platform, consisting of drives similar to what I have used in the tests here (the Optane 375GB and P4510 U.2 NVMe). You can see the vSAN Ready Node details here.

So clearly NVMe has major performance benefits over traditional SAS/SATA devices, could this be the end of SAS/SATA in the not so distant future?