Tag Archives: NVMe

Full NVMe or not Full NVMe, that is the question

As you have seen, my recent posts have been around Intel Optane and the performance gains that can be delivered by implementing the technology in a vSAN environment. I have been asked many times what benefits a full NVMe solution would bring and what such a solution would look like, but before we go into that, let’s talk about NVMe: what exactly is NVMe?

Non-Volatile Memory Express (NVMe) is not a drive type, but an interface and protocol that looks set to replace the SAS/SATA interface. It runs over PCIe, and its whole purpose is to exploit the parallelism that flash media provides, which reduces I/O overhead and thus improves performance. SAS and SATA were designed for slower hard disks, where the delay between the CPU request and the data transfer was much higher; as SSDs become faster, the need for a faster protocol becomes evident, and this is where NVMe comes into play.

So in a vSAN environment, what does a full NVMe solution look like? Because vSAN is currently a two-tier architecture (cache and capacity), a full NVMe solution means that both tiers have NVMe-capable drives. This can be done either with standard NVMe drives in both the cache and capacity tiers, or with a technology like Intel Optane NVMe in the cache tier and standard NVMe in the capacity tier. From an architecture perspective it is pretty straightforward, but how does performance compare? To find out, I persuaded my contacts at Intel to provide me with some full NVMe kit so I could run some benchmark tests, and to provide a like-for-like comparison I ran the same benchmark tests on an Optane+SATA configuration.

Cluster Specification:

  • Number of nodes: 4
  • Network: 2x 10Gbit in LACP configuration
  • Disk groups per node: 2
  • Cache tier (both clusters): 2x Intel Optane 375GB P4800X PCIe Add-In Card
  • Capacity tier (Optane/SATA): 8x 3.84TB SATA S4510 2.5″
  • Capacity tier (full NVMe): 8x 2.0TB NVMe P4510 2.5″ U.2

Test Plan:

  • Block sizes: 4K, 8K, 16K, 32K, 64K, 128K
  • I/O pattern: Random
  • Read/write ratios: 0/100, 30/70, 70/30, 100/0
  • Number of VMs: 120
  • Number of VMDKs per VM: 1
  • Size of VMDK: 50GB
  • Storage policy: FTT=1, RAID1
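
To make the coverage explicit, here is a minimal sketch (plain Python rather than HCIBench syntax) that enumerates the combinations implied by the test plan above; only the block sizes, ratios and VM/policy values come from the list, the rest is illustrative.

```python
from itertools import product

# Every block size is paired with every read/write ratio, giving 6 x 4 = 24 test runs.
block_sizes = ["4K", "8K", "16K", "32K", "64K", "128K"]
read_pcts = [0, 30, 70, 100]  # read percentage of the fully random workload

for bs, read_pct in product(block_sizes, read_pcts):
    # Each combination maps to one benchmark run; the wording below is
    # illustrative only and not an HCIBench parameter file format.
    print(f"{bs} random, {read_pct}% read / {100 - read_pct}% write, "
          f"120 VMs x 1x 50GB VMDK, RAID1 FTT=1")
```

Each of those 24 combinations was run against both the Optane+SATA cluster and the full NVMe cluster.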

Let’s look at the results:

And if you want the numbers:

What is clear here is that Optane serves the cache tier really well in both solutions; however, in the full NVMe solution read performance is also significantly improved. In the 128K, 100% read test the 2x 10G links were being pushed to their limits, and not only were we able to push up throughput and IOPS, we also drove down latency, in some cases reducing it by over 50%.

So why would you choose a full NVMe solution? The simple answer is that if you have latency-sensitive applications, a cluster dedicated to those applications would be well served from an IOPS, throughput and latency perspective by full NVMe.

Vendors have also recognised this. For example, Dell EMC have just launched their Intel Optane powered full NVMe vSAN Ready Node, based on the R740xd platform, which uses similar drives to those in my tests here, namely the Optane 375GB and P4510 U.2 NVMe drives; you can see the vSAN Ready Node details here.

So clearly NVMe has major performance benefits over traditional SAS/SATA devices; could this be the end of SAS/SATA in the not-so-distant future?

Optane Performance

Many times over the past few months I have been asked about the benefits of using Intel Optane NVMe in a vSAN environment. Although there was marketing material from Intel boasting a good performance boost, I decided (purely out of curiosity) to do some performance benchmarking and compare Optane as the cache device versus SAS as the cache device. The benchmark used exactly the same servers and networking in order to provide a level playing field; the only thing that changed was the cache devices used in the disk groups.

Server Specification:

  • 6x Dell PowerEdge R730xd
  • Intel Xeon CPU E5-2630 v3 @ 2.40GHz
  • 128GB RAM
  • 2x Dell PERC H730 Controllers
  • 2x Intel Dual Port 10Gb ethernet adapters (Configured with LACP)

Disk group config for the SAS test:

  • 3x disk groups
  • 3x 400GB SAS SSD per disk group (capacity)
  • 1x 400GB SAS SSD per disk group (cache)

Disk group config for the Optane test:

  • 2x disk groups
  • 3x 400GB SAS SSD per disk group (capacity)
  • 1x 750GB Optane NVMe P4800X per disk group (cache)

Whilst you could say that the configurations are not identical, vSAN limits the write buffer to 600GB per disk group, so both configurations end up with the same amount of write buffer: 3x 400GB in the SAS config and 2x 600GB (of the 750GB devices) in the Optane config, 1200GB either way. If anything, the SAS config has more backend disks, which would serve as an advantage.
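
To spell that out, here is a quick sketch of the buffer arithmetic, assuming only the 600GB per disk group write buffer cap mentioned above (the variable names are my own):

```python
# Both layouts land on the same total write buffer despite different cache
# device sizes; figures are taken from the disk group configs above.
VSAN_WB_CAP_GB = 600  # usable write buffer cap per disk group

sas_config    = {"disk_groups": 3, "cache_device_gb": 400}  # 3x 400GB SAS cache devices
optane_config = {"disk_groups": 2, "cache_device_gb": 750}  # 2x 750GB Optane P4800X

def usable_write_buffer_gb(cfg):
    per_disk_group = min(cfg["cache_device_gb"], VSAN_WB_CAP_GB)
    return cfg["disk_groups"] * per_disk_group

print(usable_write_buffer_gb(sas_config))     # 1200 GB
print(usable_write_buffer_gb(optane_config))  # 1200 GB
```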

For the purpose of the benchmark, we used HCIBench to automate the Oracle Vdbench workload testing. Each test was based on the following parameters, and the workload was designed to max out the system, hence the high number of VMDKs used here (250); a quick tally of the resulting load follows the list.

  • 50 Virtual Machines
  • 5 VMDKs per virtual machine
  • 2 threads per VMDK
  • 20% working set
  • 4k, 8k, 16k, 32k, 64k and 128k block size
  • 0%, 30%, 70%, 100% write workload
  • 900 second test time for each test

So what were the results?

4K Blocksize:

8K Blocksize:

16K Blocksize:

32K Blocksize:

64K Blocksize:

128K Blocksize:

As you can see, Optane really did boost performance, even though the server platform wasn’t the ideal platform for the Optane devices (Dell said those cards will not be certified in the 13G platform). Despite the workload being designed to max out the system, in some cases latency was reduced to almost a third and throughput was increased by up to 3x.

Conclusion: Optane really does live up to expectations, and it isn’t just marketing. I have yet to test a full NVMe system to see how far it can really be pushed, but I hope the numbers above go some way to convincing you why you should consider Optane as the cache tier in vSAN.

Sizing for your workloads

When sizing a vSAN environment there are many considerations to take into account, and with the launch of the new vSAN sizing tool I thought I would take the time to write up the questions I commonly ask people in order to understand what they want to run on vSAN, as well as the scope of requirements that meets that workload.

Capacity
Obviously capacity is going to be our baseline for any sizing activity; no matter what we achieve with the other requirements, we have to meet a usable capacity. Remember that we should always work from usable capacity for any sizing: a raw capacity figure does not take into account the failure tolerance method, erasure coding or dedupe/compression, something we will cover a bit later in this article.

Capacity should also include the required Swap File space for each of the VMs that the environment is being scoped for.

IOPS
I have been involved in many discussions where the performance requirements are totally unknown; many times I have been told “We want the fastest performance possible” without being told what the current IOPS requirement is. To put it into context, what is the point in buying a 200mph sports car when the requirement is to drive at 70mph max?

The IOPS requirement plays a key part in determining what level of vSAN Ready Node specification is required. For example, if the total requirement is 300,000 IOPS from a 10-node cluster, is there much point spending more money on an All-Flash configuration that delivers 150,000 IOPS per node? Simple answer…no! You could opt for a lower vSAN All-Flash Ready Node config that meets the requirement much more closely and still offers room for expansion in the future.
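
To make that comparison concrete, here is a small sketch based on the 300,000 IOPS / 10-node example above; the two per-node capability figures are invented placeholders rather than real ready node specifications.

```python
# Compare the per-node share of the IOPS requirement against candidate node classes.
required_iops = 300_000
nodes = 10
per_node_required = required_iops / nodes  # 30,000 IOPS per node

candidates = {
    "high-end all-flash node": 150_000,  # placeholder per-node IOPS capability
    "mid-range all-flash node": 60_000,  # placeholder per-node IOPS capability
}
for name, per_node_capability in candidates.items():
    headroom = per_node_capability / per_node_required
    print(f"{name}: {headroom:.1f}x the per-node requirement")
```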

Workload Type
This is a pretty important requirement. For example, if your workload is more write intensive, this changes the cache requirements and may call for a more write-intensive flash technology such as NVMe. If you have different workload types going onto the same cluster, it is worthwhile categorizing those workloads into four categories:

  • 70/30 Read/Write
  • 80/20 Read/Write
  • 90/10 Read/Write
  • 50/50 Read/Write

Having the VMs in categories will allow you to specify the workload types in the sizing tool (in the advanced options).

vCPU to Physical Core count
This is something that gets overlooked, not from a requirement perspective, but because people are so used to sizing based on a “VMs per host” model, which with increasing CPU core counts no longer fits. Even the new sizing tool bases its calculations on a vCPU to physical core ratio, which makes things a lot easier. Most customers I talk to who are refreshing servers with 12 or 14 core processors can lower the number of servers required by increasing the core count on the new servers, allowing more vCPUs to run on a single host.
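
As a worked example of the ratio-based approach (every number below is an assumption of mine, purely for illustration):

```python
import math

# Sizing on vCPU-to-core ratio rather than VMs per host.
total_vcpus = 2000
vcpu_per_core = 4          # target consolidation ratio
cores_per_socket = 24      # a newer high-core-count part vs. older 12/14-core CPUs
sockets_per_host = 2

cores_needed = math.ceil(total_vcpus / vcpu_per_core)                           # 500 cores
hosts_needed = math.ceil(cores_needed / (cores_per_socket * sockets_per_host))  # 11 hosts
print(cores_needed, hosts_needed)  # before adding any N+1 headroom
```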

List of questions for requirements for each workload type

  • Average VMDK Size per VM
  • Average number of VMDKs per VM
  • Average number of vCPU per VM
  • Average vRAM per VM
  • Average IOPS Requirement per VM
  • Number of VMs
  • vCPU to Physical Core Ratio
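
For what it’s worth, here is one minimal way such answers could be captured per workload type before they go into the sizing tool; the class, field names and sample numbers are my own assumptions, only the questions themselves come from the list above.

```python
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    name: str
    vm_count: int
    avg_vmdk_size_gb: float
    avg_vmdks_per_vm: float
    avg_vcpu_per_vm: float
    avg_vram_gb_per_vm: float
    avg_iops_per_vm: float
    vcpu_per_core_ratio: float

    def usable_capacity_tb(self) -> float:
        # Usable capacity before FTT overhead; swap file space is added separately.
        return self.vm_count * self.avg_vmdks_per_vm * self.avg_vmdk_size_gb / 1024

    def total_iops(self) -> float:
        return self.vm_count * self.avg_iops_per_vm

oltp = WorkloadProfile("OLTP", 200, 100, 2, 4, 16, 500, 4)
print(oltp.usable_capacity_tb(), oltp.total_iops())  # ~39TB usable, 100,000 IOPS
```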

RAW Capacity versus Usable Capacity, how much do I actually need?
The new sizing tool takes all your requirements into account, even the RAID levels, dedupe/compression ratios and so on, and returns a raw capacity requirement based on the data you enter. If you are like me and prefer to do it quick and dirty, below is a table showing how to work it out based on a requirement of 100TB usable (including swap file space). For a standard cluster with no stretched capabilities it looks like this (a short calculation sketch follows the table):

FTT Level | FTT Method | Min number of hosts | Multiplication Factor | RAW Capacity Based on 100TB Usable
FTT=0     | None       | N/A                 | 1x                    | 100TB
FTT=1     | Mirror     | 3                   | 2x                    | 200TB
FTT=2     | Mirror     | 5                   | 3x                    | 300TB
FTT=3     | Mirror     | 7                   | 4x                    | 400TB
FTT=1     | RAID5      | 4                   | 1.33x                 | 133TB
FTT=2     | RAID6      | 6                   | 1.5x                  | 150TB
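
The same arithmetic as a small sketch, using the multiplication factors from the table (FTT=0 simply needs no factor):

```python
# Raw capacity = usable capacity (including swap space) x FTT overhead factor.
FACTORS = {
    ("FTT=1", "Mirror"): 2.0,
    ("FTT=2", "Mirror"): 3.0,
    ("FTT=3", "Mirror"): 4.0,
    ("FTT=1", "RAID5"): 1.33,
    ("FTT=2", "RAID6"): 1.5,
}

def raw_capacity_tb(usable_tb, ftt, method):
    return usable_tb * FACTORS[(ftt, method)]

print(round(raw_capacity_tb(100, "FTT=1", "RAID5")))  # 133 TB, as in the table
```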

Now, in vSAN 6.6 VMware introduced localized protection (Secondary FTT) and the ability to include or exclude specific objects from cross-site protection in a stretched cluster (Primary FTT). Below is a table showing the raw capacity requirements based on the two FTT levels (again, the arithmetic is sketched after the table):

Primary FTT Level | Secondary FTT Level | Secondary FTT Method | Min Number of hosts per site | RAW Capacity Based on 100TB Usable
PFTT=1            | SFTT=0              | RAID0                | 1                            | 200TB
PFTT=1            | SFTT=1              | RAID1                | 3                            | 400TB
PFTT=1            | SFTT=1              | RAID5                | 4                            | 266TB
PFTT=1            | SFTT=2              | RAID6                | 6                            | 300TB
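
And a short continuation of the same sketch for the stretched cluster case, assuming PFTT=1 means a full copy of the data in each of the two sites, which is how the table above works out:

```python
# Per-site overhead factor for each secondary FTT level and method.
SFTT_FACTORS = {
    ("SFTT=0", "RAID0"): 1.0,
    ("SFTT=1", "RAID1"): 2.0,
    ("SFTT=1", "RAID5"): 1.33,
    ("SFTT=2", "RAID6"): 1.5,
}

def stretched_raw_capacity_tb(usable_tb, sftt_level, sftt_method, sites=2):
    # PFTT=1 mirrors the data across both sites; SFTT adds the local overhead.
    return usable_tb * sites * SFTT_FACTORS[(sftt_level, sftt_method)]

print(round(stretched_raw_capacity_tb(100, "SFTT=1", "RAID5")))  # 266 TB, as in the table
```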

Mixed FTT levels and FTT Methods
Because vSAN is a truly software-defined storage platform, you can have a mixture of VMs/objects with varying protection levels and FTT methods. For example, for read-intensive workloads you may choose RAID 5 in the storage policy, and for more write-intensive workloads a RAID 1 policy; they can all co-exist on the same vSAN cluster/datastore perfectly well, and the new sizing tool allows you to specify different protection levels and methods for each workload type.