VROC vs Traditional RAID Controller: Optimizing Boot Devices in VMware vSAN Ready Nodes

The world of data storage is constantly evolving, with technology advancements aiming to maximize performance and efficiency in data center operations. One of the recent innovations in storage technology is Intel’s Virtual RAID on CPU (VROC), which has gained traction among IT professionals and enthusiasts. This article will compare the use of VROC as a boot device to the more traditional RAID controller in a VMware vSAN Ready Node, highlighting the advantages and disadvantages of each approach.

VROC: The New Kid on the Block

Intel VROC is a hybrid software/hardware-based RAID solution that utilizes the CPU for RAID processing rather than a dedicated hardware RAID controller. VROC can be configured using NVMe SSDs (as well as SATA SSDs), offering high-performance storage with lower latency compared to traditional RAID controllers. Let’s dive into some of the advantages and disadvantages of using VROC as a boot device in a vSAN Ready Node.

Advantages of VROC

Performance: VROC allows for better performance and reduced latency by eliminating the need for a dedicated RAID controller. This results in faster data processing and retrieval, which is crucial in virtualized environments.

Scalability: With VROC, you can easily expand your storage capacity by adding NVMe SSDs without the need for additional RAID controllers. This enables seamless growth of your vSAN Ready Node as your storage needs increase.

Cost Savings: VROC can reduce the cost of your vSAN Ready Node by eliminating the need for additional RAID controllers. Furthermore, as a software-based solution, VROC can leverage existing hardware resources, resulting in lower capital expenditures.

Traditional RAID Controller: Tried and Tested

A traditional RAID controller is a dedicated hardware component responsible for managing storage arrays and ensuring data redundancy. These controllers have been widely used in data centers for decades, providing a reliable and stable solution for storage management. Here are some advantages and disadvantages of using traditional RAID controllers as boot devices in vSAN Ready Nodes.

Advantages of Traditional RAID Controllers

Familiarity: Traditional RAID controllers are well-known and widely understood by IT administrators, making them a comfortable and familiar choice for managing storage in vSAN Ready Nodes.

Hardware Independence: Unlike VROC, traditional RAID controllers do not tie you to specific hardware vendors, allowing for more flexibility in hardware selection.

Conclusion

Choosing between VROC and traditional RAID controllers for boot devices in vSAN Ready Nodes ultimately depends on your organization’s priorities and requirements. VROC offers better performance, scalability, and cost savings but comes with vendor lock-in and increased complexity. On the other hand, traditional RAID controllers provide familiarity and hardware independence but may fall short in terms of performance and cost-efficiency.

It is essential to carefully evaluate the specific needs of your environment before deciding which solution best suits your vSAN Ready Node. By weighing factors such as performance, scalability, cost, and ease of management, you can make an informed decision that optimizes your VMware vSAN Ready Node for the long term. As storage technologies continue to evolve, staying abreast of developments such as VROC will help your organization remain agile and well-prepared for the ever-changing data center landscape.

vSAN 7.0U1 – A New Way to Add Capacity

As we all know, there are a number of ways of scaling capacity in a vSAN environment: you can add disks to existing hosts and scale storage independently of compute, or you can add nodes to the cluster and scale both storage and compute together. But what if you do not have any free disk slots available and/or are unable to add more nodes to the existing cluster? vSAN 7.0U1 introduces a new feature called vSAN HCI Mesh. So what does this mean, and how does it work?

Let’s take the scenario below: we have two vSAN clusters in the same vCenter. Cluster A is nearing capacity from a storage perspective, but its compute is relatively under-utilised, and there are no available disk slots to expand the storage. Cluster B, on the other hand, has a lot of free storage capacity but is more utilised on the compute side of things:
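To put some illustrative numbers on that scenario (the figures below are entirely hypothetical, chosen only to show the imbalance HCI Mesh addresses), a quick sketch:

```python
# Hypothetical capacity figures for the two clusters (not real data)
cluster_a = {"storage_total_tb": 100, "storage_used_tb": 92, "cpu_util_pct": 35}
cluster_b = {"storage_total_tb": 100, "storage_used_tb": 40, "cpu_util_pct": 75}

def free_tb(cluster):
    """Remaining datastore capacity in TB."""
    return cluster["storage_total_tb"] - cluster["storage_used_tb"]

# Cluster A is nearly full locally...
print(f"Cluster A free: {free_tb(cluster_a)} TB")  # 8 TB left
# ...but with HCI Mesh, new VM objects from Cluster A's compute
# could be placed on Cluster B's under-used datastore instead
print(f"Cluster B free: {free_tb(cluster_b)} TB")  # 60 TB available remotely
```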

vSAN HCI Mesh allows you to consume storage on a remote vSAN cluster, provided it exists within the same vCenter inventory. There are no special hardware or software requirements (apart from 7.0U1), and the traffic leverages the existing vSAN network configuration.

This cool feature adds an elastic capability to vSAN clusters, which is especially useful if you need some additional temporary capacity for application refactoring or a service upgrade, where you want to deploy the new services but keep the old ones operational until the transition is made.

VMware has not left out the monitoring capabilities for this use case either: in the UI you can monitor the usage of “Remote VMs” from a capacity perspective, as well as within the performance service.

So this clearly allows disaggregation of storage and compute in a vSAN environment and offers flexibility and elasticity of storage consumption. Are there any limitations?

  • A vSAN cluster can only mount up to 5 remote vSAN datastores
  • The vSAN cluster must be able to reach the other vSAN cluster(s) via the vSAN network
  • vSphere and vCenter must be running 7.0U1 or later
  • Enterprise or Enterprise Plus editions of vSAN are required
  • You need enough hosts / configuration to support the storage policy; for example, if your remote cluster has only four hosts, you cannot use a policy which requires RAID-6
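That last point can be made concrete with a small sketch. The host counts below follow VMware's published vSAN policy minimums (RAID-1 with FTT=n needs 2n+1 hosts; RAID-5 needs 4; RAID-6 needs 6); the function and table names are my own, not a VMware API:

```python
# Minimum host counts for common vSAN storage policies
MIN_HOSTS = {
    ("RAID-1", 1): 3,   # 2 mirror copies + 1 witness
    ("RAID-1", 2): 5,
    ("RAID-1", 3): 7,
    ("RAID-5", 1): 4,   # 3 data + 1 parity components
    ("RAID-6", 2): 6,   # 4 data + 2 parity components
}

def policy_supported(raid_level, ftt, remote_cluster_hosts):
    """Can a remote cluster of this size satisfy the policy for objects placed on it?"""
    return remote_cluster_hosts >= MIN_HOSTS[(raid_level, ftt)]

print(policy_supported("RAID-6", 2, 4))  # False: four hosts cannot hold 4 data + 2 parity
print(policy_supported("RAID-5", 1, 4))  # True
```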

So this is a pretty cool feature, and it sort of eliminates the need for the storage-only vSAN nodes that were discussed in the past at many VMworlds.

Why vSAN is the storage of choice for VMware Cloud Foundation

Recently VMware announced that Cloud Foundation would support external storage connectivity. When I first heard this, I thought to myself: has VMware lost its mind? I meet and talk to customers more or less daily, and I also spoke to a lot of customers at VMworld about this, but the stance or direction from VMware has not changed: if you want a truly Software-Defined Data Center, then vSAN is the storage of choice.

So why has VMware decided to allow external storage in Cloud Foundation?

Many customers who are either about to start or have already started their journey to a Software-Defined Data Center and/or Hybrid Cloud are still using existing assets that they wish to continue to sweat out. It may well be that their traditional storage is only a couple of years old, and VMware does not expect customers to simply throw out infrastructure that was recently purchased or is still being depreciated; that would be a tall ask, right?

Customers also still have workloads that can’t simply be migrated to the full SDDC. These migrations take time to plan: maybe the workload needs to be re-architected, or maybe there is a specific need for a traditional storage array until those obstacles have been overcome. VMware recognises this, hence the support for external storage in Cloud Foundation.

There are also specific use cases where Hyper-Converged Infrastructure powered by vSAN isn’t an ideal fit; cold storage or archive storage is one of these. Supporting an array that can provide supplemental storage to meet these requirements is therefore also a plus point.

Will I lose any functionality by leveraging traditional storage with Cloud Foundation? The simple answer is yes, for the following reasons:

  • No lifecycle management for external storage through SDDC Manager, which means patching and updating of the storage is still a manual task.
  • No single pane of glass management out of the box for external storage without installing third-party plugins, which in my experience have a tendency to break other things.
  • No automation of stretched cluster deployment on external storage; all the replication etc. has to be configured manually.
  • Day-2 operations such as capacity reporting, health checks, and performance monitoring are lost for the external storage unless third-party software is installed.
  • No automatic deployment of storage during a Workload Domain deployment; all the external storage has to be prepared beforehand.
  • You lose the true “Software-Defined Storage” aspect and granular object control; external storage support for vVols is not there right now either.

Also remember that with Cloud Foundation, the Management Workload Domain has to run on vSAN; you cannot use external storage for this.

So if you want a truly Software-Defined Data Center with all the automation of deployment and all the nice built-in features for day-2 operations, then vSAN is the first choice of storage. If you have existing storage you wish to sweat out whilst you migrate to a full SDDC, then that’s supported too. And yes, a vSAN cluster can co-exist with external storage, which makes migrations so much easier.

The other way to look at this is through a Hybrid Cloud or Multi-Cloud strategy. Cloud Foundation is all about providing consistent infrastructure between your private and public clouds, so if your public cloud is running Cloud Foundation with vSAN, then the logical choice is to use vSAN as the storage for your Cloud Foundation private cloud, giving your workloads that consistent infrastructure no matter where they run.