
Why a NAND-based SSD may not be the right choice for the vSAN cache tier

As we all know, storage media has evolved very quickly over the past few years: the decline of the spinning disk and the move to flash-based storage devices, but also the shift from SAS/SATA drives to NVMe drives in order to address the performance limitations of older protocols that were designed for spinning disks rather than for SSDs.

A question I get asked regularly is what type of SSD is best for the vSAN cache tier. There are vSAN ready nodes out there that use NAND-based SSDs for both the cache tier and the capacity tier, but there are also other technologies, such as Intel Optane™ SSDs, being used for the cache tier, so let's talk about the two. For the purpose of this comparison I am going to use the most common 3D NAND NVMe drive found in vSAN ready node configurations, the 1.6TB Intel P4610, and the 375GB Intel Optane™ P4800X; both are NVMe-based devices.

Let’s compare the two devices:

                          1.6TB Intel P4610          375GB Intel Optane™ P4800X
Capacity                  1.6TB                      375GB
Sequential Read (up to)   3200 MB/sec                2400 MB/sec
Sequential Write (up to)  2080 MB/sec                2000 MB/sec
Random Read (100% span)   643,000 IOPS               550,000 IOPS
Random Write (100% span)  199,000 IOPS               500,000 IOPS
Latency (Read)            77 µs                      10 µs
Latency (Write)           18 µs                      10 µs
Endurance Rating          12.25 PBW (~3.5 DWPD)      Up to 60 DWPD (~41 PBW)
Data obtained from ark.intel.com

As you can see from the table above, there are some major differences between the two SSDs, most notably in random write performance, which is critical for the vSAN cache tier because all incoming writes are random in nature and are absorbed by the cache tier; the NAND-based SSD simply cannot handle random writes as well as the Optane™ SSD. The biggest impact on a vSAN cache tier, however, is endurance, measured in Drive Writes Per Day (DWPD). Looking at the specifications in detail, the P4610 can handle around 3.5 DWPD, which equates to around 5.6TB of data written daily, whereas the 375GB Optane™ SSD can handle up to 60 DWPD, which equates to around 22.5TB of data written daily, and remember that the Optane™ SSD is also less than a quarter of the capacity of the P4610. So for the vSAN cache tier the Optane™ SSD wins hands down from an endurance perspective, as well as in its ability to handle random writes a lot quicker. So why such a difference?
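To make that endurance arithmetic concrete, here is a minimal Python sketch of the DWPD calculations above; the five-year warranty period used for the PBW conversion is an assumption on my part.

```python
# A minimal sketch of the DWPD arithmetic above. The five-year warranty
# period used for the PBW conversion is an assumption.
def daily_writes_tb(capacity_tb: float, dwpd: float) -> float:
    """Terabytes that can be written per day at the rated DWPD."""
    return capacity_tb * dwpd

def lifetime_pbw(capacity_tb: float, dwpd: float, warranty_years: int = 5) -> float:
    """Approximate lifetime endurance in petabytes written."""
    return capacity_tb * dwpd * warranty_years * 365 / 1000

print(daily_writes_tb(1.6, 3.5))    # 1.6TB P4610  -> ~5.6 TB/day
print(daily_writes_tb(0.375, 60))   # 375GB P4800X -> ~22.5 TB/day
print(lifetime_pbw(1.6, 3.5))       # ~10 PBW (the rounded 3.5 DWPD understates the rated 12.25 PBW)
print(lifetime_pbw(0.375, 60))      # ~41 PBW, matching the table above
```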

Well, if you look at NAND-based SSDs, there is usually an element of DRAM that acts as a buffer in front of the NAND media, typically around 1GB of DRAM for every 1TB of media, so incoming writes hit the DRAM buffer first. This can give a positive boost for short, small block size write bursts, but it cannot be sustained over a longer period of time. In an Optane™ SSD there is no such DRAM buffer, so data is written directly to the media. The VxRAIL team at Dell EMC have done some extensive testing around this and clearly demonstrated that a NAND-based SSD cannot sustain the same level of write performance continuously, whereas the Optane™ SSD maintains the same level of write performance consistently; below are the results of their performance testing:

https://www.esg-global.com/validation/esg-technical-validation-dell-emc-vxrail-with-intel-xeon-scalable-processors-and-intel-optane-ssds

The way NAND-based SSDs and Optane™ SSDs perform write operations is fundamentally different. NAND media has to be read and written in pages, but it can only be erased in whole blocks. Page updates are typically written to a new, unused page, so as new data is written the old pages become stale, and on an SSD these stale pages can build up fairly quickly, which means that at some point a significant number of blocks are largely obsolete and have to be garbage collected. Garbage collection relocates any remaining valid pages and erases the block so it can receive new data, and the process starts all over again.
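To illustrate that page/block asymmetry, here is a minimal, deliberately simplified Python sketch of out-of-place page writes, stale pages building up, and whole-block garbage collection; the block size, page counts and workload are arbitrary illustrative values, not the geometry of any real drive.

```python
# Toy model of out-of-place NAND writes: pages are written to free slots, old
# versions go stale, and space is only reclaimed by erasing an entire block
# (relocating whatever live pages it still holds). Sizes are illustrative only.
PAGES_PER_BLOCK = 4

class Block:
    def __init__(self):
        self.pages = []                      # list of (logical_address, is_stale)

class ToyFTL:
    def __init__(self, num_blocks=3):
        self.blocks = [Block() for _ in range(num_blocks)]
        self.mapping = {}                    # logical address -> (block, page index)
        self.gc_count = 0

    def write(self, lba):
        if lba in self.mapping:              # can't overwrite in place: old copy goes stale
            blk, idx = self.mapping[lba]
            blk.pages[idx] = (lba, True)
        blk = self._block_with_space()
        blk.pages.append((lba, False))
        self.mapping[lba] = (blk, len(blk.pages) - 1)

    def _block_with_space(self):
        for blk in self.blocks:
            if len(blk.pages) < PAGES_PER_BLOCK:
                return blk
        # All blocks full: garbage collect the block with the most stale pages.
        victim = max(self.blocks, key=lambda b: sum(stale for _, stale in b.pages))
        live = [(lba, False) for lba, stale in victim.pages if not stale]
        victim.pages = live                  # erase the block, rewrite surviving pages
        for i, (lba, _) in enumerate(victim.pages):
            self.mapping[lba] = (victim, i)
        self.gc_count += 1
        return victim

ftl = ToyFTL()
for i in range(30):
    ftl.write(i % 5)                         # keep updating the same 5 logical addresses
print(f"{ftl.gc_count} block erases needed for 30 small updates")
```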

Optane™ SSDs are transistor-less, which essentially means that each cell can be changed between 0 and 1 independently of the other cells on the device. Optane™ SSDs are therefore bit addressable, as opposed to having to write in pages, and no garbage collection is required. That obviously has a positive impact on performance as well as endurance, which is why Optane™ SSDs have such high endurance capabilities.

So what does all this mean from an application perspective?
Well, the VxRAIL team at Dell EMC also did some performance testing using HammerDB and showed significant performance gains when using Intel Optane™ SSDs versus traditional NAND in the cache tier: as much as a 61% gain in performance in a complex OLTP workload.

As we all know, latency is critical in any type of workload. What I have seen in performance testing is that Intel Optane™ SSDs consistently provide lower latency, and a much more tightly controlled standard deviation of latency, than the P4610. Even though the performance of both devices was similar in some smaller block size tests, in larger block size tests the Optane™ SSD again delivered lower latency and a tightly controlled standard deviation of latency, while also providing much higher performance than the P4610. You also have to remember that the P4610 was only using a 37% span, because vSAN currently has a limit of a 600GB write buffer per disk group, whereas the Optane™ SSD was using a 100% span, so the P4610 actually had a bit of an unfair advantage here.
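For reference, a quick sketch of where those span figures come from, assuming only the 600GB per disk group write buffer limit described above:

```python
# Where the ~37% span figure comes from: only 600GB of a cache device is
# used as write buffer per disk group (per the vSAN behaviour described above).
WRITE_BUFFER_GB = 600

for name, capacity_gb in [("1.6TB P4610", 1600), ("375GB Optane P4800X", 375)]:
    span = min(WRITE_BUFFER_GB, capacity_gb) / capacity_gb * 100
    print(f"{name}: {span:.1f}% of the device used")   # 37.5% vs 100.0%
```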

Conclusion
What is clear from a vSAN perspective is that endurance plays a critical role in the vSAN cache tier. In the very early days of vSAN there was no choice other than SAS or SATA based NAND devices, with DWPD ratings ranging between roughly 10 and 25 for an 800GB drive, but as technology evolution pushes the boundaries of performance and endurance, technology like Intel Optane™ SSDs clearly has an edge, offering up to 60 DWPD on a smaller capacity of 375GB.

Smaller cache device…are you serious?

In the testing I have done on all-NVMe systems, where Intel Optane™ SSDs are used in the vSAN cache tier and more read-intensive NVMe drives like the Intel P4510 are used in the capacity tier, a 375GB Optane™ SSD is more than sufficient. In most workloads a 750GB Optane™ SSD did not improve performance, and even with 375GB I was only able to saturate the write buffer to around 60% (based on vSAN 6.7 Update 3).

So whilst NAND based devices are fully supported as a vSAN cache device, they may not be the right choice when it comes down to consistent performance and endurance required for a modern infrastructure.

How Intel Optane and NVMe help with node consolidation

As the core density of CPUs increases, it opens up the opportunity to consolidate the number of nodes required in any given cluster. In a vSAN cluster, however, node consolidation has a negative effect on available IOPS: each node provides a specific amount of IOPS, so lowering the number of hosts in the cluster removes the IOPS capability of the nodes you consolidate away. Take the following example:

Number of VMs : 200
vCPU Per VM : 4
Virtual Memory per VM : 32GB
Storage per VM : 600GB
vCPU to Core Ratio : 4 to 1

Now for the purpose of this sizing exercise I am going to use the vSAN Sizing tool and apply some cluster settings as per below:

So in the above scenario the number of cores per CPU is 18, and I want to ensure that this is a two disk group configuration. If we then input the workload details as per below:

You will see when we click on recommendation that it shows a required node count of 8 (not taking into account any N+1 capability as we left that as 0 for the purpose of this sizing)

And we can see the disk config below:

However, if we increase the number of CPU cores to 20 by clicking on the “+” in the sizing output, we can see that the number of nodes required drops to 7.

And if we increase the number of cores again, this time to 22, we get a further reduction in the number of nodes to 6.

The sizing tool will dynamically increase or decrease the number of disks required per host, as well as the RAM required per node, as you can see here:

But one thing we have not factored in here is the decrease in IOPS capability that comes from removing two nodes. If, say, each node was capable of 80K IOPS, reducing the node count by two means you have just lost 160K IOPS of capability, so what can we do to mitigate that?
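To put numbers on that, here is a minimal Python sketch of the aggregate IOPS arithmetic, using the node counts from the sizing example and the assumed 80K IOPS per node figure from above:

```python
# Rough aggregate-IOPS arithmetic for the consolidation example above.
# The 80K IOPS per node figure is the assumed per-node capability from the text.
IOPS_PER_NODE = 80_000

def aggregate_iops(node_count: int) -> int:
    return node_count * IOPS_PER_NODE

before, after = 8, 6          # node counts from the sizing example (18 vs 22 cores per CPU)
lost = aggregate_iops(before) - aggregate_iops(after)
print(f"{before} nodes: {aggregate_iops(before):,} IOPS")
print(f"{after} nodes: {aggregate_iops(after):,} IOPS")
print(f"Consolidating by two nodes removes {lost:,} IOPS of capability")
```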

Well instead of using SAS/SATA SSDs in your vSAN design, you could opt to use Intel Optane for Cache, and NAND based NVMe drives for capacity.
For write operations, Intel Optane greatly improves write performance, as I have written about before, but read performance is also greatly accelerated because the capacity devices are NVMe. Reducing your node count by two in this case and utilising this kind of technology means you still get similar levels of performance, and the best part is that the overall solution will cost you less too, so your TCO comes down, which is good for your finance department, right?

One question I get asked frequently is what size Optane device is sufficient?

Well, in all of my testing I very rarely saturated the write buffer, even with 375GB Optane drives as cache devices. The reason for this is that vSAN starts de-staging from the cache tier to the capacity tier when the write buffer becomes around 30% full, and because the capacity tier is NVMe based, the de-staging happens a lot quicker, especially since vSAN 6.7 U3, where the de-stage limits were removed.
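As a rough back-of-the-envelope sketch, assuming the ~30% de-stage threshold described above and the whole 375GB device being usable as write buffer:

```python
# Approximate point at which vSAN starts de-staging for a 375GB cache device,
# assuming the ~30% threshold mentioned above and the full device used as buffer.
cache_gb = 375
destage_threshold = 0.30
print(f"De-staging begins at roughly {cache_gb * destage_threshold:.0f}GB of buffered writes")
# -> roughly 112GB; with fast NVMe capacity drives the buffer drains quickly
#    enough that it rarely climbs far beyond this point.
```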

So when would a 750GB Optane be useful?

High write intensive workloads such as video surveillance and databases, or configurations where your capacity disks are much slower. Optane can still be used in vSAN configurations where the capacity tier is SAS/SATA, which of course is not as fast as most NVMe devices, so the write buffer can fill up more.

So just to recap: you can save money on your vSAN deployments by consolidating hosts with higher core count CPUs, as well as by leveraging newer technology such as Intel Optane in the cache tier and NVMe in the capacity tier, saving money whilst maintaining the same level of performance or better. What's not to like?

Why vSAN is the storage of choice for VMware Cloud Foundation

Recently VMware announced that Cloud Foundation would support external storage connectivity. When I first heard this I thought to myself, are VMware out of their minds? I meet and talk to customers more or less on a daily basis, and I also spoke to a lot of customers at VMworld about this, but the stance or direction from VMware has not changed: if you want a truly Software Defined Data Center then vSAN is the storage of choice.

So why have VMware decided to allow external storage in Cloud Foundation?

Many customers who are either about to start or have already started their journey to a Software Defined Data Center and/or hybrid cloud are still using existing assets that they wish to continue to sweat out. It may well be that their traditional storage is only a couple of years old, and VMware are not expecting customers to simply throw out infrastructure that was either recently purchased or still has life left in it; that would be a tall ask, right?

Customers also still have workloads that can't simply be migrated to the full SDDC; it takes time to plan the migration, maybe a re-architecture of the workload needs to be performed, or maybe there's a specific need for a traditional storage array until those obstacles have been overcome. VMware recognises this, hence the support for external storage in Cloud Foundation.

There are also specific use cases where hyper-converged infrastructure powered by vSAN isn't an ideal fit, cold storage or archive storage being one of them, so supporting an array that can provide a supplemental storage architecture to meet these requirements is also a plus point.

Will I lose any functionality by leveraging traditional storage with Cloud Foundation? The simple answer is yes, for the following reasons:

  • No lifecycle management for external storage through SDDC Manager which means patching and updating of the storage is still a manual task.
  • No single pane of glass management out of the box for external storage without installing third party plugins, which in my experience have a tendency to break other things.
  • No automation of stretched cluster deployment on external storage, all the replication etc has to be configured manually.
  • Day-2 operations such as capacity reporting, health checks and performance monitoring are lost unless you install third party software for the external storage.
  • No auto deployment of the storage during a Workload Domain deployment; all the external storage has to be prepared beforehand.
  • You lose the true “Software Defined Storage” aspect and granular object control; external storage support for VVOLS is not there right now either.

Also remember that with Cloud Foundation the Management Workload Domain has to run on vSAN; you cannot use external storage for this.

So if you want a truly Software Defined Data Center with all the automation of deployment and all the nice built-in features for day-2 operations, then vSAN is the first choice of storage. If you have existing storage you wish to sweat out whilst you migrate to a full SDDC, then that's supported too, and yes, you can have a vSAN cluster co-exist with external storage, which makes migrations so much easier.

The other aspect to consider is a hybrid cloud or multi-cloud strategy. Cloud Foundation is all about providing a consistent infrastructure between your private and public clouds, so if your public cloud is using Cloud Foundation with vSAN, then the logical choice is to have vSAN as the storage for your Cloud Foundation private cloud, giving you that consistent infrastructure for your workloads no matter where they run.