
What’s happening with Intel Optane?

I have done a lot of testing on Optane SSDs in the past, but in July of 2022 Intel announced their intention to wind down the Optane business. Since that announcement I have had many questions surrounding Optane and where it leaves customers today.

Firstly, I am going to address the messaging from back in July. On the Intel earnings call it was announced that Intel was taking a write-down of over half a billion dollars on the Optane business. This led to quite a storm of confusion, and I was asked by many, “Does this mean I cannot buy Optane any more?”

On the contrary, Optane is still a product and will continue to be one until at least the end of 2025, and even if you buy it on the last day it is available, you will still get a five-year warranty.

I have never really spoken about the other side of the Optane house on this blog before, mostly because it was not directly relevant to vSAN. There are two sides to Optane: the SSDs, which of course you already know, and the persistent memory side of the Optane technology.

Optane Persistent Memory (PMEM) is primarily used in VMware environments as a memory tiering solution. Over the past few years DRAM has become expensive and has struggled to scale to larger module sizes. Memory tiering allows customers to overcome both challenges: cost and capacity. PMEM, for example, is available in 128GB, 256GB and 512GB modules at a fraction of the cost of DRAM modules of the same size.

Memory tiering is very much like the Original Storage Architecture in vSAN: you have an expensive cache tier and a less expensive capacity tier, allowing you to deliver a higher memory capacity with a much improved TCO/ROI. Below are the typical configurations prior to vSphere 7.0 U3.

On the horizon we have a new architecture called Compute Express Link (CXL), and CXL 2.0 will deliver a plethora of memory tiering devices. However, CXL 2.0 is a few years away, so the only memory tiering solution out there for the masses is Intel Optane. This is how it looks today and how it may look with CXL 2.0:

I recently presented at the VMUG in Warsaw, where I had a slide stating that Ford is discontinuing the Fiesta in June 2023. Does that mean you should not go and buy one of these cars today? The simple answer is that just because something is going away in the future does not mean it fails to meet the needs of today. It is the same with Optane technology; arguably it will be around for longer than the Ford Fiesta, and it meets the need to reduce costs today as a bridge to the memory tiering architectures of the future built on CXL 2.0.

I like to challenge the status quo, so I challenge you to look at your vSphere, vSAN or VCF environments and examine two key metrics: “Consumed Memory” and “Active Memory”. If you divide Consumed by Active and the number you get is higher than 4, then memory tiering is a perfect fit for your environment. Not only can you save a lot of your memory cost, but the more affordable technology also allows you to push up your CPU core count.
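As a rough illustration of that check, here is a minimal sketch in Python. The host figures are made up purely for the example; in practice you would pull the Consumed and Active memory values from the vCenter performance charts or a tool such as Live Optics.

```python
# Quick memory-tiering suitability check using the Consumed/Active ratio
# described above. The host figures below are illustrative only; pull the
# real values from vCenter performance charts (Consumed and Active memory).

hosts = [
    {"name": "esx01", "consumed_gb": 920, "active_gb": 150},
    {"name": "esx02", "consumed_gb": 860, "active_gb": 240},
]

for host in hosts:
    ratio = host["consumed_gb"] / host["active_gb"]
    # A ratio above 4 suggests most consumed memory is cold and could live
    # on a cheaper tier such as Optane PMEM behind a DRAM cache.
    verdict = "good memory tiering candidate" if ratio > 4 else "keep all-DRAM"
    print(f"{host['name']}: consumed/active = {ratio:.1f} -> {verdict}")
```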

Provided your “Active” memory sits within the DRAM cache, there should be little to no performance impact; both Intel and VMware have done extensive testing on this.

Proof of Concepts
Nobody likes a PoC; they take up far too much of your valuable time. I have worked with many customers who have simply dropped a memory tiering host into their existing all-DRAM cluster and migrated real workloads to it. That means no synthetic workloads, and the workloads you migrate to evaluate can simply be migrated back afterwards.

Conclusion
Optane will be around for a few years yet, and even though it is going to go away eventually, the benefits of the technology are here today, in preparation for the architectures of the future based on CXL 2.0. Software designed to work with memory tiering will not change; it is the hardware and electronics that will change, so your investment in software is protected.

Optane technology is available from all the usual vendors; Dell, HPE, Cisco, Lenovo, Fujitsu and Supermicro are just a few. Sometimes you may have to ask them for it, but as they say, “If you do not ask, you do not receive.”

Do I need a Bigger Write Buffer?

Ever since VMware published this article on cache sizing guidelines for all-flash, I still get asked two questions. The first is about the amount of cache required per node, and the second is about when vSAN will have a larger write buffer.

The amount of cache required per node in all-flash is not dependent on the amount of usable space like it was in hybrid configurations. It is based purely on the endurance of the cache SSDs, which typically fall into four categories:

  • Up to 2 drive writes per day
  • 3 drive writes per day
  • 10 drive writes per day
  • 30 drive writes per day

With the arrival of Intel’s P5800X, which has an endurance capability of 100 drive writes per day, I would expect a fifth category to appear soon too.

If we look at the number of DWPD a drive is capable of, we can see whether it would be a good fit for the cache tier. For example, a device with 0.4-2 DWPD is likely to be certified for the vSAN capacity tier and not the cache tier.

Since the cache tier is where 100% of the writes land, this is where you need the higher endurance devices. The more write-heavy your environment, the more closely you need to look at endurance, as it is the biggest factor in the amount of cache you need.
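For reference, DWPD is normally derived from a drive’s rated endurance (TBW, terabytes written) spread over its warranty period. The sketch below shows that calculation; the drive names and TBW figures are illustrative assumptions, not taken from any vendor datasheet.

```python
# DWPD (drive writes per day) is usually derived from the endurance rating
# (TBW, terabytes written) spread over the warranty period:
#     DWPD = TBW / (capacity_TB * warranty_years * 365)
# The example drives below are illustrative, not vendor specifications.

def dwpd(tbw_tb: float, capacity_tb: float, warranty_years: int = 5) -> float:
    return tbw_tb / (capacity_tb * warranty_years * 365)

drives = [
    ("read-intensive 3.84TB", 7000, 3.84),   # ~1 DWPD  -> capacity tier
    ("mixed-use 1.6TB", 8760, 1.6),          # ~3 DWPD  -> cache tier (needs more capacity)
    ("write-intensive 400GB", 21900, 0.4),   # ~30 DWPD -> cache tier
]

for name, tbw, cap in drives:
    print(f"{name}: {dwpd(tbw, cap):.1f} DWPD")
```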

The 3 DWPD category is normally labelled “Mixed-Use” by the vendors and is the most economically priced cache device for vSAN, but because of the lower endurance you actually need more cache. I have looked at a lot of Live Optics reports over the past few months to gather information on the average percentage of writes in a customer environment, and the number that came out was 37%, higher than the 30% normally envisaged.

So, based on 3 DWPD and >30% random writes, the VMware article states you need 3.6TB of cache per node for an AF-8 configuration, which would result in a likely configuration of 3x 1.6TB devices:

The next category is 10 DWPD; these devices are usually classed as “Write Intensive” by the vendors. According to the VMware table you would need 1.2TB of cache per node, again based on an AF-8 configuration with >30% random writes:

Then we come to the final category of 30 DWPD. These devices are usually categorised as “Write Intensive Express Flash” and are typically Intel Optane SSDs such as the P4800X; for the same workload, the VMware recommendation is 400GB of cache per node:

As you can see, the amount of cache you require is based on the endurance of the devices when it comes to vSAN all-flash.
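To make those per-node figures easier to work with, here is a small sketch that restates the cache requirements quoted above and rounds them up to whole devices. The mapping simply repeats the AF-8, >30% random write numbers from the VMware article; the device sizes are just examples.

```python
import math

# Per-node all-flash cache guidance quoted above (AF-8 config, >30% random
# writes), keyed by the endurance category of the cache device.
CACHE_PER_NODE_GB = {
    3: 3600,    # "Mixed-Use"                     -> 3.6TB per node
    10: 1200,   # "Write Intensive"               -> 1.2TB per node
    30: 400,    # "Write Intensive Express Flash" -> 400GB per node (e.g. P4800X)
}

def devices_needed(dwpd_category: int, device_size_gb: int) -> int:
    """Round the per-node cache requirement up to whole devices."""
    return math.ceil(CACHE_PER_NODE_GB[dwpd_category] / device_size_gb)

# Example: 3 DWPD mixed-use 1.6TB devices -> 3 per node, matching the
# 3x 1.6TB configuration mentioned above.
print(devices_needed(3, 1600))   # 3
print(devices_needed(10, 1200))  # 1
print(devices_needed(30, 400))   # 1
```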

To address the second question, about vSAN ever having a larger write buffer: this has been talked about for a long time, but in my opinion you do not need a larger write buffer if you are using high endurance devices. With the new Intel P5800X offering 100 DWPD, I expect the amount of cache required per node to be lower still, so I would not expect a big emphasis on the write buffer from a vSAN perspective.

As SSDs become faster and higher endurance, the need for larger write buffers diminishes, especially in full NVMe configurations where the storage sits directly on the PCIe bus rather than behind a disk controller. In my experience with Intel Optane SSDs, the 375GB (P4800X) and 400GB (P5800X) devices serve very well even in write intensive environments.