
What’s happening with Intel Optane?

I have done a lot of testing on Optane SSDs in the past, but in July of 2022 Intel announced their intention to wind down the Optane business. Since that announcement I have had many questions surrounding Optane and where it leaves customers today.

Firstly, I am going to address the messaging from back in July: on the Intel earnings call it was announced that Optane had been written off to the tune of over half a billion dollars. This led to quite a storm of confusion, and I was asked by many, "Does this mean I cannot buy Optane any more?"

On the contrary, Optane is still a product and will continue to be a product until at least the end of 2025, and even if you buy it on the last day it is available, you will still get a 5-year warranty.

I have never really spoken about the other side of the Optane house on this blog before, mostly because it was not directly relevant to vSAN. There are, however, two sides to Optane: the SSD you already know about, and the persistent memory side of the Optane technology.

Optane Persistent Memory (PMEM) is primarily used with VMware as a memory tiering solution. Over the past few years DRAM has become expensive and has struggled to scale to large module sizes. Memory tiering allows customers to overcome both challenges, cost and capacity: PMEM, for example, is available in 128GB, 256GB and 512GB modules at a fraction of the cost of DRAM modules of the same size.

Memory tiering is very much like the Original Storage Architecture in vSAN: you have an expensive cache tier and a less expensive capacity tier, allowing you to deliver a higher memory capacity with a much improved TCO/ROI. Below are the typical configurations prior to vSphere 7.0 U3.

On the horizon we have a new architecture called Compute Express Link (CXL), and CXL 2.0 will deliver a plethora of memory tiering devices. However, CXL 2.0 is a few years away, so the only memory tiering solution out there for the masses is Intel Optane. This is how it looks today and how it may look with CXL 2.0:

I recently presented at the VMUG in Warsaw, where I had a slide stating that Ford is discontinuing the Fiesta in June 2023. Does this mean you should not go and buy one of these cars today? The simple answer: just because it is going away in the future does not mean it fails to meet the needs of today. It is the same with Optane technology; arguably it will be around for longer than the Ford Fiesta, and it meets the need to reduce costs today as a bridge to the memory tiering architectures of the future built on CXL 2.0.

I like to challenge the status quo, so I challenge you to look at your vSphere, vSAN or VCF environments and examine two key metrics: the first is "Consumed Memory" and the second is "Active Memory". If you divide Consumed by Active and the number you get is higher than 4, then memory tiering is a perfect fit for your environment; not only can you save a lot of your memory cost, it also allows you to push up your CPU core count because the memory is a more affordable technology.

Provided your "Active" memory sits within the DRAM cache, there should be very little to no performance impact; both Intel and VMware have done extensive testing on this.
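
If you want a quick way to eyeball that ratio across a whole vCenter, the sketch below sums the per-VM quickstats with pyVmomi. Treat it as a minimal approximation only: hostMemoryUsage and guestMemoryUsage are vCenter's own consumed/active estimates at the VM level, and the hostname, credentials and cluster names are placeholders, so your vROPS or host-level numbers may differ slightly.

    # Rough consumed/active memory ratio per cluster using pyVmomi (placeholder
    # hostname and credentials; the >4 threshold is the rule of thumb from above).
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; use proper certificates in production
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)

    for cluster in view.view:
        consumed_mb = active_mb = 0
        for host in cluster.host:
            for vm in host.vm:
                qs = vm.summary.quickStats
                consumed_mb += qs.hostMemoryUsage or 0   # memory consumed on the host (MB)
                active_mb += qs.guestMemoryUsage or 0    # "active" guest memory estimate (MB)
        if active_mb:
            ratio = consumed_mb / active_mb
            verdict = "memory tiering candidate" if ratio > 4 else "probably stay all-DRAM"
            print(f"{cluster.name}: consumed={consumed_mb} MB, active={active_mb} MB, "
                  f"ratio={ratio:.1f} -> {verdict}")

    view.Destroy()
    Disconnect(si)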

Proof of Concepts
Nobody likes a PoC; they take up far too much of your valuable time. I have worked with many customers who have simply dropped a memory tiering host into their existing all-DRAM cluster and migrated real workloads to it. This means no synthetic workloads, and the workloads you migrate for the evaluation can simply be migrated back.

Conclusion
Optane will be around for a few years yet, and even though it is going away eventually, the benefits of the technology are here today, in preparation for the architectures of the future based on CXL 2.0. Software designed to work with memory tiering will not change; it is the hardware and electronics that will change, so the investment in software is protected.

Optane technology is available from all the usual vendors: Dell, HPE, Cisco, Lenovo, Fujitsu and Supermicro are just a few. Sometimes you may have to ask them for it, but as they say, "If you do not ask, you do not receive".

Day-2 Operations – vSphere built-in vROPS dashboards

At VMworld I ran a few sessions on Day-2 Operations in which I also covered the new built-in vROPS dashboards introduced with the fully baked HTML5 client in vSAN/vSphere 6.7. Many people were not aware of the dashboards, and even fewer were aware that the vSAN-specific dashboards continue to work even after the 90-day trial period has expired. Not only that, but VMware has done a great job of automating the deployment of vROPS if you do not already have an appliance deployed. So let's take a look at these new dashboards in a bit more detail.

Firstly, the three vSphere operations dashboards. The default one that loads is an overview dashboard of all your clusters:

Then there is the Cluster Level View where you pick a specific cluster:

And finally there is the Alerts view. Great timing, I have a physical disk failure in the vSAN cluster:

Now, the above three dashboards will no longer be available after the 90-day trial period has expired, and the link to the vRealize Operations appliance will not be functional either. However, after the trial period expires the following three vSAN dashboards will still be fully functional and available, so let's look at those in a bit more detail:

The vSAN Overview dashboard, like the vSphere overview dashboard, displays information at a holistic level for all of the vSAN clusters within this particular vCenter Server. You will see that the dashboard shows, for example, how many clusters are running dedupe/compression or how many of the clusters are stretched clusters. The dashboard also shows whether you need to investigate any current alerts (yes, I cleaned up the failed disk before grabbing this screenshot).

In the next dashboard we choose a specific cluster to look at in more detail:
In this dashboard we see information pertaining to that specific cluster. You can see that I have 6 critical alerts, which we will take a look at next, but there are also some key metrics here that are pretty important from a day-2 operations standpoint:

  • Remaining Capacity
  • Component Limit
  • IOPS, Throughput and Latency statistics
  • Read versus Write latency

The last dashboard is alerts:
Here we can see the current alerts that have been triggered for each cluster and may need to be addressed. The critical alerts previously highlighted in the cluster view were all related to lost network redundancy while I was troubleshooting packet loss on the physical switch.

So as you can see, there's a really good amount of detail in the vSphere UI relating to vROPS, making day-2 operations a lot easier to perform.

Day-2 Operations – Performance Monitoring in the vSAN UI

Performance reporting or performance monitoring is a must in any storage environment today. I remember many times when I was in VMware Support, facing a customer storage performance issue, that the metrics were simply not there to capture the event; most storage performance tools required enabling, which obviously meant the issue had to be occurring at the time of the performance metrics grab. vSAN in the early days was no different: vSAN Observer, whilst a detailed tool that provided a lot of information, was not a historical tool; it was enabled to troubleshoot a performance issue that was happening at that particular time.

In the later releases of vSAN, the UI came equipped with more performance metrics than you could shake a stick at, which from a performance troubleshooting and monitoring perspective is the dog's danglies. But what does this mean from an everyday perspective? Before we take a look at the UI, there are three levels at which vSAN performance metrics can be displayed:

  • Cluster level – These are the performance metrics aggregated for the whole cluster, giving you a high-level view of how your cluster is performing as a whole.
  • Host level – This allows you to look at how vSAN is performing on a host-by-host basis and lets you drill down further into things like disk groups, physical disks, network controllers and VMkernel interfaces.
  • VM level – This focuses on a specific virtual machine and the objects associated with it.

So what information do we have exactly at a given level? Well, let's first of all take a look at the cluster-level performance information. There are three options under the performance tab for vSAN:

  • vSAN – Virtual Machine Consumption
  • vSAN – Backend
  • vSAN – iSCSI

I have met many customers who immediately notice that there is a big difference between the Virtual Machine Consumption and the Backend graphs, so before we go any further let's talk about what each of these specific areas means.

Virtual Machine Consumption
These graphs represent the values that objects residing on the vSAN datastore are seeing. Remember, everything that exists on vSAN is an object, so the consumers counted in these graphs are virtual machines, stats objects and so on.

Backend
These graphs represent the activity on the backend disks associated with vSAN, both cache and capacity.

Both sets of graphs cover the statistics for:

  • IOPS
  • Throughput
  • Latency
  • Congestion
  • Outstanding I/O

iSCSI
The iSCSI performance graphs contain all of the graphs above with the exception of Congestion. These graphs relate to each iSCSI target/LUN created, and each one is selected in turn to review its associated performance graphs.
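
If you prefer pulling these cluster-level counters programmatically rather than reading them in the UI, the sketch below uses the vSAN Management SDK for Python. The vsanapiutils helper, the 'vsan-performance-manager' key, the 'cluster-domclient' entity type (the Backend equivalent is 'cluster-domcompmgr') and the label names are all taken from memory of the public SDK samples, so treat them as assumptions to verify against the SDK documentation for your release.

    # Hedged sketch: query the frontend ("Virtual Machine Consumption") counters for
    # the last hour via the vSAN performance service. Cluster name, hostname and
    # credentials are placeholders.
    import ssl
    from datetime import datetime, timedelta

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim
    import vsanapiutils  # ships with the vSAN Management SDK for Python

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "vSAN-Cluster")  # assumed cluster name
    vsan_uuid = cluster.configurationEx.vsanConfigInfo.defaultConfig.uuid

    # vSAN managed objects are served from the /vsanHealth endpoint on vCenter
    vc_mos = vsanapiutils.GetVsanVcMos(si._stub, context=ctx)
    perf_mgr = vc_mos['vsan-performance-manager']

    end = datetime.utcnow()
    spec = vim.cluster.VsanPerfQuerySpec(
        entityRefId='cluster-domclient:' + vsan_uuid,
        labels=['iopsRead', 'iopsWrite', 'latencyAvgRead', 'latencyAvgWrite'],
        startTime=end - timedelta(hours=1),
        endTime=end)

    for entity in perf_mgr.VsanPerfQueryPerf(querySpecs=[spec], cluster=cluster):
        for series in entity.value:
            # series.values is a comma-separated string of samples at 5-minute granularity
            print(series.metricId.label, series.values)

    view.Destroy()
    Disconnect(si)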

If we move our focus to the host level, we have a number of options in addition to the three we also see at the cluster level, and there are some additional metrics at the host level for Virtual Machine Consumption and Backend. In the Virtual Machine Consumption graph we gain Local Client Cache Hit IOPS and Local Client Cache Hit Rate.

And under Backend we also have some additional graphs for Resync IOPS, Resync Throughput and Resync Latency.

The resync metrics are extremely important if vSAN is recovering from a failure of some sort and performing a resync of degraded components. They are also important if you are performing a proactive rebalance, a policy change or a full data migration during a host or disk/disk group evacuation.

The other options listed under host vSAN performance are:

  • Disk Group – Shows the performance graphs for the disk groups; I will cover this below as it is one of the most interesting sets of metrics, with a lot of detail.
  • Disk – Shows the physical disks in the host, reporting on IOPS, Throughput and Latency.
  • Physical Adapters – Shows the network stats for each vmnic associated with vSAN; the stats include Packet Loss Rate, which is useful for troubleshooting physical networking issues.
  • VMkernel Adapters – Shows the statistics for each VMkernel interface configured for vSAN; this also includes a Packet Loss Rate, which you can use to troubleshoot the software network stack.
  • VMkernel Adapters Aggregation – This is an aggregation of all VMkernel interfaces being used for vSAN on the host
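
To see exactly which of these levels (cluster, host, disk group, physical disk, physical and VMkernel adapters and so on) the performance service can report on in your particular build, the same performance manager from the previous sketch can enumerate them. The method name below is taken from the public vSAN API reference and, as before, should be treated as an assumption to verify:

    # Continues from the previous sketch (reuses perf_mgr). Lists the entity types the
    # vSAN performance service supports, e.g. cluster-domclient, host-domclient,
    # disk-group, so you know what can go into entityRefId.
    for entity_type in perf_mgr.VsanPerfGetSupportedEntityTypes():
        print(entity_type.name)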

Now let's go back to the Disk Group performance graphs. As I said earlier, this is a very interesting group of metrics to explore, so what do we have in this group? The first section is all about Frontend (Guest) IOPS, Throughput and Latency:

The frontend statistics are, as you may have guessed already, related to vSAN object I/O generated by guests running within the vSAN cluster. If we scroll down a little further we can see statistics relating to Overhead IO, Read Cache Hit Rate (for hybrid) and Evictions:

Further down we have statistics relating to the vSAN write buffer and de-stage rate, clearly showing how much of the write buffer is free and how quickly data is being de-staged from cache to capacity. We also have resync metrics under disk groups; however, these differ slightly from the cluster-wide Backend statistics. In the disk group graphs we actually have values that represent the various types of resync operation, and the graph differentiates between:

  • Policy Changes
  • Repairs
  • Rebalance

So you can easily distinguish which resync operations are happening from the statistics within the disk group graphs.

Collection Interval
The vSAN performance service collects a sample every five minutes, and each value is an average over that five-minute period. If your cluster is hardly doing anything (like my cluster for these screenshots), this can throw out some of the latency numbers; in my own cluster I have noticed that it shows higher latency when doing practically nothing than it does when I start putting load on the cluster. I have spoken to many customers about this and it is nothing to worry about: it just means that during the collection sample a few "large" I/O operations returned a higher latency, and because of the low number of samples this skews the average. So no cause for alarm on that one.
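
A toy calculation makes the skew obvious. The numbers below are made up, but they show how a single slow I/O dominates a five-minute average when the cluster is nearly idle, yet disappears into the noise when the cluster is busy.

    # Why an idle cluster can report "worse" average latency than a busy one
    idle_sample = [0.4, 0.5, 30.0]                  # ms: 3 I/Os in the window, one slow outlier
    busy_sample = [0.4, 0.5, 30.0] + [0.6] * 997    # ms: same outlier among 1,000 I/Os

    print(sum(idle_sample) / len(idle_sample))      # ~10.3 ms average while doing almost nothing
    print(sum(busy_sample) / len(busy_sample))      # ~0.63 ms average under real load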

Up next will be Day-2 Operations – Performance Monitoring with vROPS; I just have to write it first 🙂