
What’s happening with Intel Optane?

I have done a lot of testing on Optane SSDs in the past, but in July of 2022 Intel announced their intention to wind down the Optane business. Since that announcement I have had many questions surrounding Optane and where it leaves customers today.

Firstly, I am going to address the messaging from back in July: on the Intel earnings call it was announced that Optane had been written off, to the tune of over half a billion dollars. This led to quite a storm of confusion, and I was asked by many, “Does this mean I cannot buy Optane any more?”

On the contrary, Optane is still a product and will continue to be until at least the end of 2025, and even if you buy it on the last day it is available, you will still get a five-year warranty.

I have never really spoken about the other side of the Optane house on this blog before, mostly because it wasn’t directly relevant to vSAN. There are, however, two sides to Optane: the SSD, which you will of course know, and the persistent memory side of the technology.

Optane Persistent Memory (PMEM) is primarily used with VMware as a memory tiering solution. Over the past few years DRAM has become expensive and has struggled to scale to larger capacities. Memory tiering allows customers to overcome both challenges, cost and access to large-capacity memory modules: PMEM, for example, is available in 128GB, 256GB and 512GB modules at a fraction of the cost of the same-size DRAM modules.

Memory tiering is very much like the Original Storage Architecture in vSAN: you have an expensive cache tier and a less expensive capacity tier, allowing you to deliver a higher memory capacity with a much-improved TCO/ROI.
[Figure: typical DRAM + PMEM memory tiering configurations prior to vSphere 7.0U3]
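
To make the cache/capacity analogy concrete, here is a minimal sketch of the arithmetic, assuming Optane PMEM running in Memory Mode, where the DRAM acts purely as a cache and only the PMEM capacity is visible to the hypervisor. The module sizes below are illustrative assumptions, not a sizing recommendation.

```python
# Minimal sketch of the memory tiering arithmetic described above.
# Assumes Optane PMEM in Memory Mode: DRAM is a hidden cache tier and
# only the PMEM capacity is presented to the hypervisor.

def tiered_host_memory(dram_gb: int, pmem_gb: int) -> dict:
    """Return the visible capacity and cache ratio for one tiered host."""
    return {
        "visible_memory_gb": pmem_gb,               # DRAM is hidden as cache
        "dram_cache_gb": dram_gb,
        "cache_ratio": f"1:{pmem_gb // dram_gb}",   # DRAM : PMEM
    }

# Illustrative example: 256 GB of DRAM cache in front of 4 x 256 GB PMEM modules.
print(tiered_host_memory(dram_gb=256, pmem_gb=1024))
# {'visible_memory_gb': 1024, 'dram_cache_gb': 256, 'cache_ratio': '1:4'}
```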

On the horizon we have a new architecture called Compute Express Link (CXL), and CXL 2.0 will deliver a plethora of memory tiering devices. However, CXL 2.0 is a few years away, so the only memory tiering solution out there for the masses today is Intel Optane.
[Figure: memory tiering with Optane PMEM today, and how it may look with CXL 2.0 devices]

I recently presented at the VMUG in Warsaw, where I had a slide stating that Ford is discontinuing the Fiesta in June 2023. Does this mean you should not go and buy one of these cars today? The simple answer is that just because something is going away in the future does not mean it cannot meet the needs of today. It is the same with Optane technology; arguably it will be around for longer than the Ford Fiesta, and it meets the need to reduce costs today as a bridge to the memory tiering architectures of the future with CXL 2.0.

I like to challenge the status quo, so I challenge you to look at your vSphere, vSAN or VCF environments and examine two key metrics. The first is “Consumed Memory” and the second is “Active Memory”. If you divide Consumed by Active and the number you get is higher than 4, then memory tiering is a perfect fit for your environment; not only can you save a lot on memory cost, the savings also allow you to push up your CPU core count, because the technology is more affordable.

Providing your “Active” memory sits within the DRAM cache, there should be little to no performance impact; both Intel and VMware have done extensive testing on this.
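
For those who want to script the check, below is a minimal sketch of that rule of thumb. It assumes you have already pulled the Consumed and Active figures per host from vCenter (for example from the performance charts); the helper function and the numbers in the example are purely illustrative, while the 4x threshold and the DRAM cache condition mirror the guidance above.

```python
# Minimal sketch of the memory tiering fit check described above.
# Inputs are per-host Consumed and Active memory figures exported from
# vCenter; all values are in GB and purely illustrative.

def tiering_fit(consumed_gb: float, active_gb: float,
                dram_cache_gb: float, threshold: float = 4.0) -> bool:
    """True when the Consumed:Active ratio clears the threshold and the
    active working set would fit inside the planned DRAM cache."""
    if active_gb <= 0:
        return False
    return (consumed_gb / active_gb) >= threshold and active_gb <= dram_cache_gb

# Example: 1536 GB consumed, 300 GB active, 512 GB DRAM cache planned.
print(tiering_fit(consumed_gb=1536, active_gb=300, dram_cache_gb=512))  # True
```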

Proofs of Concept
Nobody likes a PoC; they take up far too much of your valuable time. I have worked with many customers who have simply dropped a memory tiering host into their existing all-DRAM cluster and migrated real workloads onto it. That means no synthetic workloads, and the workloads you migrate for the evaluation can simply be migrated back afterwards.

Conclusion
Optane is around for a few years yet, and even though it will eventually go away, the benefits of the technology are here today, in preparation for the architectures of the future based on CXL 2.0. Software designed to work with memory tiering will not change; it is the hardware and electronics underneath that will change, so your investment in the software is protected.

Optane technology is available from all the usual vendors; Dell, HPE, Cisco, Lenovo, Fujitsu and Supermicro are just a few. Sometimes you may have to ask them for it, but as they say, “If you do not ask, you do not receive”.

Why HCI Matters in the Datacenter and Beyond

Technology is changing and evolving at an ever-increasing pace, and whether you are a consumer of electronics or the CEO of a large organisation with a large IT infrastructure, the changes affect us all in different ways. CPUs and flash storage are a good example: we are now in an era of constantly increasing CPU core densities, and flash storage is becoming bigger and faster. These technology transformations are changing not only the way we operate as human beings in our own personal IT bubbles at home, but also the way organisations operate.

As organisations large and small take on business transformation, a key element of that transformation is their IT. Over the last 15 years IT was largely IT-centric, focused on traditional applications and the wide adoption of the internet; the next 15 years pose new challenges as IT becomes more business-centric, driven by cloud applications and the Internet of Everything.

A key enabler of the whole IT transformation is the Software Defined Data Center. Many of you will have heard me talk about the Software Defined Data Center not as an object, but as an operating system that runs your IT infrastructure. If you are asked what three things are required to run an operating system, you will find yourself answering compute, storage and networking for connectivity, and these are essentially the three key elements that make up the Software Defined Data Center.

Hyper-Converged Infrastructure allows you to deliver the capabilities that underpin the whole Software Defined Data Center on a standard x86 architecture with a building-block approach. It also brings the storage closer to the CPU and memory, which is highly beneficial in a virtualised environment, and it is VM-centric rather than storage-centric.

So why is HCI being adopted by the masses?

There are a number of reasons for this. We have already outlined the fact that having the storage closer to the compute delivers a much more efficient platform, but beyond that there is a hardware evolution driving the changes in infrastructure, rather like an infrastructure revolution.

Higher CPU core densities mean you can run much denser workloads. In conjunction with this, RAM has become commoditised, affordable and available in larger capacities. On the storage side, flash has evolved to deliver high-capacity, high-performing devices that only a few years ago would have taken a whole refrigerator-sized array to produce, but can now be delivered by a device you can hold in the palm of your hand. At the same time, traditional storage has been unable to keep up with the demands of applications and IT, and this resulted in a new approach to storage and infrastructure: HCI.


What is required from your storage platform?

I have met with many customers at various meetings and events, and depending on who you talk to in the organisation, you will get a different answer to that question:

  • Application Owner – Performance and Scalability
    They need to deliver an application that performs well and scales, so the storage has to be able to offer both.
  • Infrastructure Owner – Simplicity and Reliability
    They need the platform to be simple to deploy and simple to manage, but also reliable; they don’t want to be getting calls in the middle of the night from the Application Owner!
  • CFO / Finance Team – Lower Cost and Operational Efficiency
    There’s always somebody looking at the numbers, and it’s usually this side of the organisation; reducing TCO, CAPEX and OPEX and making IT more cost effective is the biggest driver here.

Everyone is aiming for that sweet spot where all three circles converge. The only problem is that with traditional infrastructure you can never satisfy all three of the above requirements; one usually has to be sacrificed, and it is usually the Finance Team or CFO that has to back down in order to deliver the requirements of the Application Owner and the Infrastructure Owner. This is where HCI is different: HCI brings everyone to that central convergence and meets all of the requirements, so now everyone is happy. Let’s take a closer look at how HCI powered by vSAN meets these requirements.


vSAN HCI delivers an architecture that not only delivers on performance, but also scales simply by adding more nodes or more storage, with performance scaling linearly as you grow. This means that as your IT or business applications scale and demand more capacity or performance, it is easily delivered in whatever increments meet the requirements at that point in time.
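
As a rough illustration of that linear scaling, here is a simplified sketch assuming RAID-1 mirroring, where vSAN stores FTT+1 copies of each object. It deliberately ignores slack space, deduplication/compression and witness overhead, so treat it as back-of-the-envelope maths rather than a sizing tool.

```python
# Simplified sketch of how vSAN usable capacity scales as nodes are added.
# Assumes RAID-1 mirroring (each object stored FTT+1 times) and ignores
# slack space, dedupe/compression and witness overhead.

def usable_capacity_tb(nodes: int, raw_tb_per_node: float, ftt: int = 1) -> float:
    """Approximate usable capacity for a RAID-1 (mirrored) vSAN cluster."""
    return nodes * raw_tb_per_node / (ftt + 1)

# Growing the cluster one node at a time scales capacity linearly:
for n in (4, 5, 6):
    print(f"{n} nodes -> {usable_capacity_tb(n, raw_tb_per_node=20.0):.1f} TB usable")
# 4 nodes -> 40.0 TB usable, 5 nodes -> 50.0, 6 nodes -> 60.0
```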


vSAN HCI allows the infrastructure team to deploy and manage environments through a simple management plane in a single interface. No separate management tools are required, which means no extensive retraining of staff. Reliability and resiliency are built in, with the ability to protect from the disk level all the way up to the site level.


We’ve already talked about how HCI offers a building-block approach, which means environments can be built to meet your requirements now and grown as and when required. And because the management plane is much simpler, operational efficiencies come into play as well, offering a more streamlined approach to IT.

At this point we have met all of the criteria set by the three key stakeholders, but the benefits of HCI don’t just stop there; there are other positive impacts that HCI brings to your organisation:


vSAN HCI offers a much wider choice of hardware and hardware vendors, along with a range of different deployment options. This gives organisations a lot more flexibility in how they adopt HCI, as well as putting choices of newer hardware technology at their fingertips, including:

  • vSAN Ready Nodes from all major server OEM vendors to suit all performance and capacity requirements
  • A turnkey appliance solution from Dell EMC, VxRail
  • VMware Cloud Foundation, which incorporates the full SDDC stack

For deployment options, vSAN HCI offers the following:

  • Standard clusters up to 64 Nodes
  • Remote Office / Branch Office (ROBO) Solutions for customers with multiple sites
  • Stretched Cluster Solutions
  • Disaster Recovery Solutions
  • Rack Level Solutions
  • Same Site “Server Room” configurations


vSAN HCI allows organisations to become more agile through faster deployments and faster procurement, giving more control back to the business, which in a competitive world is a key enabler of success.

As you can see, no matter what size your IT infrastructure is, HCI brings a wealth of benefits. From large-scale data center deployments to multi-site ROBO deployments, there is a perfect fit for HCI.