VROC vs Traditional RAID Controller: Optimizing Boot Devices in VMware vSAN Ready Nodes

The world of data storage is constantly evolving, with technology advancements aiming to maximize performance and efficiency in data center operations. One of the recent innovations in storage technology is Intel’s Virtual RAID on CPU (VROC), which has gained traction among IT professionals and enthusiasts. This article will compare the use of VROC as a boot device to the more traditional RAID controller in a VMware vSAN Ready Node, highlighting the advantages and disadvantages of each approach.

VROC: The New Kid on the Block

Intel VROC is a hybrid software/hardware RAID solution that uses the CPU for RAID processing rather than a dedicated hardware RAID controller. VROC can be configured using NVMe SSDs (as well as SATA SSDs), offering high-performance storage with lower latency than traditional RAID controllers. Let’s dive into some of the advantages of using VROC as a boot device in a vSAN Ready Node.

Advantages of VROC

Performance: VROC delivers better performance and reduced latency by removing the dedicated RAID controller from the I/O path; the NVMe SSDs connect directly to the CPU’s PCIe lanes. This results in faster data processing and retrieval, which is crucial in virtualized environments.

Scalability: With VROC, you can easily expand your storage capacity by adding NVMe SSDs without the need for additional RAID controllers. This enables seamless growth of your vSAN Ready Node as your storage needs increase.

Cost Savings: VROC can reduce the cost of your vSAN Ready Node by eliminating the need for additional RAID controllers. Furthermore, as a software-based solution, VROC can leverage existing hardware resources, resulting in lower capital expenditures.

Traditional RAID Controller: Tried and Tested

A traditional RAID controller is a dedicated hardware component responsible for managing storage arrays and ensuring data redundancy. These controllers have been widely used in data centers for decades, providing a reliable and stable solution for storage management. Here are some advantages of using traditional RAID controllers as boot devices in vSAN Ready Nodes.

Advantages of Traditional RAID Controllers

Familiarity: Traditional RAID controllers are well-known and widely understood by IT administrators, making them a comfortable and familiar choice for managing storage in vSAN Ready Nodes.

Hardware Independence: Unlike VROC, which requires supported Intel CPUs and platforms, traditional RAID controllers do not tie you to a specific CPU vendor, allowing for more flexibility in hardware selection.


Choosing the Right Approach

Choosing between VROC and traditional RAID controllers for boot devices in vSAN Ready Nodes ultimately depends on your organization’s priorities and requirements. VROC offers better performance, scalability, and cost savings, but comes with vendor lock-in and increased complexity. Traditional RAID controllers, on the other hand, provide familiarity and hardware independence, but may fall short in terms of performance and cost-efficiency.

It is essential to carefully evaluate the specific needs of your environment before deciding which solution is best suited for your vSAN Ready Node. By considering factors such as performance, scalability, cost, and ease of management, you can make an informed decision that will optimize your VMware vSAN Ready Node for long-term success.

As storage technologies continue to evolve, staying abreast of new developments such as VROC can help ensure your organization remains agile and well-prepared to adapt to the ever-changing data center landscape. Ultimately, the choice between VROC and traditional RAID controllers should be guided by a thorough understanding of your specific storage needs, allowing you to maximize performance and efficiency in your virtualized environment.

What’s happening with Intel Optane?

I have done a lot of testing on Optane SSDs in the past, but in July 2022 Intel announced its intention to wind down the Optane business. Since that announcement I have had many questions about Optane and where it leaves customers today.

First, let me address the messaging from back in July: on the Intel earnings call it was announced that Intel was taking a write-off of over half a billion dollars on the Optane business. This led to quite a storm of confusion, and I was asked by many, “Does this mean I cannot buy Optane any more?”

On the contrary: Optane is still a product and will continue to be one until at least the end of 2025, and even if you buy it on the last day it is available, you will still get a five-year warranty.

I have never really spoken about the other side of the Optane house on this blog before, mostly because it was not directly relevant to vSAN. There are, however, two sides to Optane: the SSDs you already know, and the persistent memory side of the technology.

Optane Persistent Memory (PMEM) is primarily used in VMware environments as a memory tiering solution. Over the past few years DRAM has become expensive, and it does not scale well to large capacities. Memory tiering allows customers to overcome both challenges: cost and the availability of large-capacity memory modules. PMEM, for example, is available in 128 GB, 256 GB and 512 GB modules at a fraction of the cost of DRAM modules of the same size.
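To make the economics concrete, here is a back-of-the-envelope sketch. The per-GB prices below are invented purely for illustration (they are not real quotes), but the arithmetic shows why a small DRAM tier in front of a large PMEM tier costs far less than an all-DRAM configuration of the same capacity:

```shell
# Illustrative per-GB prices only -- NOT real quotes
dram_per_gb=10
pmem_per_gb=4

# Option A: 1024 GB of DRAM only
all_dram=$(( 1024 * dram_per_gb ))

# Option B: 256 GB DRAM cache tier + 1024 GB PMEM capacity tier
tiered=$(( 256 * dram_per_gb + 1024 * pmem_per_gb ))

echo "All-DRAM cost: $all_dram  Tiered cost: $tiered"
```

Even with a generous DRAM cache in front, the tiered configuration comes out far cheaper while presenting more total memory to the host.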

Memory tiering works very much like the Original Storage Architecture (OSA) in vSAN: you have an expensive cache tier and a less expensive capacity tier, allowing you to deliver a higher memory capacity with a much improved TCO/ROI. Below are the typical configurations prior to vSphere 7.0 U3.

On the horizon we have a new interconnect standard called Compute Express Link (CXL), and CXL 2.0 will deliver a plethora of memory tiering devices. However, CXL 2.0 is a few years away, so today the only memory tiering solution available to the masses is Intel Optane. This is how it looks today and how it may look with CXL 2.0:

I recently presented at the VMUG in Warsaw, where I had a slide stating that Ford is discontinuing the Fiesta in June 2023. Does this mean you should not go and buy one of these cars today? The simple answer is that just because something is going away in the future does not mean it fails to meet the needs of today. It is the same with Optane: arguably it will be around for longer than the Ford Fiesta, and it meets the need to reduce costs today as a bridge to the memory tiering architectures of the future based on CXL 2.0.

I like to challenge the status quo, so I challenge you to look at your vSphere, vSAN or VCF environments and examine two key metrics: “Consumed Memory” and “Active Memory”. If you divide Consumed by Active and the result is higher than 4, then memory tiering is a perfect fit for your environment. Not only can you save a lot on memory cost, but because the technology is more affordable you can also push up your CPU core count.
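The check above is simple enough to script. A minimal sketch, assuming you have already pulled cluster-level “Consumed Memory” and “Active Memory” values from your performance charts (the figures below are hypothetical):

```shell
# Hypothetical cluster-level figures, in KB, read from the performance charts
consumed_kb=268435456   # "Consumed Memory"
active_kb=50331648      # "Active Memory"

# Consumed / Active: a ratio above 4 suggests memory tiering is a good fit
ratio=$(awk -v c="$consumed_kb" -v a="$active_kb" 'BEGIN { printf "%.1f", c / a }')
echo "Consumed/Active ratio: $ratio"

if awk -v r="$ratio" 'BEGIN { exit !(r > 4) }'; then
  echo "Memory tiering is worth evaluating for this cluster"
fi
```

With these sample figures the ratio works out to 5.3, comfortably above the threshold of 4.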

Provided your “Active” memory sits within the DRAM cache, there should be little to no performance impact; both Intel and VMware have done extensive testing on this.

Proof of Concepts
Nobody likes a PoC; they take up far too much of your valuable time. I have worked with many customers who have simply dropped a memory tiering host into their existing all-DRAM cluster and migrated real workloads onto it. This means no synthetic workloads, and the workloads you migrate for evaluation can simply be migrated back.

Optane will be around for a few years yet, and even though it will eventually go away, the benefits of the technology are here today, in preparation for the future architectures based on CXL 2.0. Software designed to work with memory tiering will not change; it is the hardware and electronics underneath that will change, so your investment in software is protected.

Optane technology is available from all the usual vendors: Dell, HPE, Cisco, Lenovo, Fujitsu and Supermicro, to name just a few. Sometimes you may have to ask them for it, but as they say, “If you do not ask, you do not receive”.

Enabling RDMA for vSAN with the Intel E810 Adapter

The Intel E810 network adapter is now fully certified for RDMA support in vSAN, so I thought I would try it out and see what performance improvement I would get by enabling it. However, I found that just installing the drivers is not enough to enable RDMA on the adapter itself.

At the time of writing this article, the driver versions that have been certified are as follows:

  • icen version
  • irdman version
  • E810 firmware 2.40

After installing the above drivers, I did not see any RDMA adapters listed in the vSphere UI:

So it would appear that the driver module has to be told to switch on RDMA. To do this, run the following two commands:

esxcli system module parameters set -m icen -p "RDMA=1,1"
esxcli system module parameters set -m irdman -p "ROCE=1,1"

The first command enables RDMA at the driver level, and the second sets the RDMA protocol version at the RDMA driver level; in both cases the “1,1” applies the setting to both ports. After a reboot of the host, you should now see an option in the UI for RDMA adapters:
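To sanity-check the change from the ESXi shell after the reboot, you can confirm the module parameters were persisted and that ESXi now lists an RDMA device. A quick sketch (device names such as vmrdma0 will vary per host):

```shell
# Confirm the parameters were persisted on the two modules
esxcli system module parameters list -m icen | grep RDMA
esxcli system module parameters list -m irdman | grep ROCE

# List the RDMA-capable devices ESXi has discovered (e.g. vmrdma0)
esxcli rdma device list
```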

Going into the vSAN Services section under Network, you can now enable RDMA for your vSAN cluster:

In the networking section it should now show that RDMA Support is Enabled:

Now that RDMA is enabled, there should be a performance boost thanks to the offload capabilities that RDMA offers. I will post some results as soon as my test cycles have completed.

It's all about VMware vSAN