
VMware and Broadcom acquisition: What it means for you and how you should optimize

Duan Van Der Westhuizen
SVP, Hybrid Cloud

An unvarnished overview of the situation and its implications

Back in May 2022, Broadcom signaled its intention to acquire VMware. The deal finally closed in November 2023 and promised a “dramatic simplification” of the VMware portfolio. A lot has happened since then, and Ensono has been working hard to reduce the impact on our clients and to restructure our portfolio to offer enterprise clients a differentiated experience optimized for the new VMware bundled solution built on VMware Cloud Foundation.

It’s no secret that there is a lot of anxiety in the market, which is typical for large change events in such a prolific technology area. A recent Gartner whitepaper found that price is by far the top concern for companies as the new model rolls out. That rollout is now complete: as of April 1, 2024, the changes are in place and the switch to the new model is done.

Some of the most impactful changes to the new model include:

  1. The perpetual license model is now end of sale – Enterprises signing new VMware contracts, including renewals at the end of current contracts, will move to the new subscription-based licensing model. This affects companies that counted on capitalizing software costs, but it may be welcomed by those who prefer an OpEx model.
  2. The cost drivers in the licensing model have changed – The move from the previous vRAM-based model to the core-based model changes the way you look at your environment. Now, every server you deploy directly increases the number of cores in your environment, which increases your license cost. Spare capacity needs to be scrutinized much more thoroughly. This isn’t a good-versus-bad situation, just a different way to look at your cost drivers.
  3. Bundles are the new normal – VMware simplified its portfolio from 168 products, bundles, and editions down to two main bundles with available add-ons. VMware Cloud Foundation (VCF) is the leading bundle; it includes the core vSphere platform components and many software-defined features such as NSX for networking, vSAN for storage, Aria Operations for operating your VMware environment, and deployment tools such as SDDC Manager. The second bundle, vSphere Foundation, is targeted at the smaller end of the market and is not available to larger clients or CSPs (like Ensono). It includes only the base-level features such as vSphere, vCenter, and ESXi.
  4. Shifting to commitment-based terms based on core utilization – The new bundles are available in different term lengths: on-demand, 3-year, and 5-year. The commitment is based on the number of cores you are consuming, and re-commits at higher core counts are available coterminous with your contract term. As with most commit-based models, longer terms yield better discounts.
  5. A refocus of the partner ecosystem – The new Broadcom Advantage program establishes a tiered structure, with the highest-tier partners like Ensono in the Pinnacle tier (approximately 100 partners fit in this category globally). Below that sit the Premier tier, the Select tier, and finally the Registered tier.
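To make the shift from vRAM- to core-based licensing concrete, here is a minimal sketch of how licensed core counts and commit costs are derived under the new model. The 16-core-per-CPU minimum reflects the licensing terms discussed below; the per-core price and the term discount rates are hypothetical placeholders for illustration, not Broadcom list prices:

```python
# Sketch: estimating licensed cores and subscription cost under the
# core-based model. Prices and discounts are hypothetical placeholders.

def licensed_cores(cpus_per_host: int, cores_per_cpu: int, hosts: int) -> int:
    """Each CPU is licensed for a minimum of 16 physical cores."""
    per_cpu = max(cores_per_cpu, 16)
    return cpus_per_host * per_cpu * hosts

# Hypothetical term discounts (illustrative only, not actual rates).
TERM_DISCOUNT = {"on-demand": 0.0, "3-year": 0.15, "5-year": 0.25}

def annual_cost(cores: int, list_price_per_core: float, term: str) -> float:
    """Annual commit cost at a hypothetical per-core list price."""
    return cores * list_price_per_core * (1 - TERM_DISCOUNT[term])

# A 10-host cluster of 2-CPU, 16-core servers licenses 320 cores.
cores = licensed_cores(cpus_per_host=2, cores_per_cpu=16, hosts=10)
print(cores)                                 # 320
print(annual_cost(cores, 100.0, "3-year"))   # 27200.0
```

Note that a 2-CPU host with only 12 cores per CPU still licenses 32 cores because of the minimum, which is why sub-16-core servers effectively overpay.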

Now that we have straightened out some of the details, how should you be thinking about all of this information? Ensono believes that leveraging the VMware Cloud Foundation stack in an optimal way gives you a holistic approach to deploying a software-defined data center and results in a more agile, modern, and automated way to run your IT estate. Once you have virtualized areas like storage (on vSAN) and networking (on NSX) and layered in automation platforms (like Ansible), you are left with a cloud-like experience that drives further efficiency in deployments, operations, and cost management.

What about Optimization?

This is probably the most important question you should be asking yourself: “What should I be doing to optimize my costs and improve my experience?”

I can tell you what Ensono did, as we have been working hard over the last 2-3 months on restructuring our portfolio and reducing our private cloud infrastructure by almost 10,000 cores to drive down our overall license costs.

A few options exist, and should all be considered when embarking on an optimization journey:

  1. Servers or hosts with fewer than 16 cores – The new licensing model applies a minimum of 16 physical cores per CPU. Typical servers today come with 2 CPUs of 16 cores each, resulting in 32 cores that require licenses. If you are currently using servers with fewer than 16 cores per CPU, you are overpaying on licensing and getting less performance for that cost. Certain workloads, like databases, may require a specific core count, potentially fewer than 16; in these cases it is worth evaluating whether running those database servers as physical servers rather than VMs gives you the best bang for your buck.
  2. Switch your thinking from vRAM to cores – This is probably obvious by now, but if your mindset before was simply to deploy more servers to meet the memory needs of your applications, you now need to change your approach. For a typical 32-core server, you now pay license fees for every one of those cores.
  3. Reduce your idle DR sites – The days of spare capacity sitting idle are gone. Unless you actively place your hosts in maintenance mode, you will continue to pay for those cores. At Ensono we have always had a major focus on automation; with these new constraints, we use automation to bring servers up for a DR test or failover and to return them to maintenance mode once the test or event has completed.
  4. Get rid of spare hosts you don’t need – Again, this seems obvious. If your previous model was n+3 and you only need n+1, take a hard look at those extra spare hosts and remove them or re-allocate them to run active workloads.
  5. Changing server models to increase RAM density – We reviewed our estate and realized we had an opportunity to deploy servers with faster cores and larger amounts of RAM. Historically, RAM was the limiting factor for VM provisioning, but it was also the VMware pricing metric, so you wanted it large, but not too large. Now, with the shift to cores, more RAM per host means more VMs per host, which requires re-architecting where VMs are placed. Since memory is no longer the cost driver, it makes sense to use fewer servers (meaning fewer cores to license) while increasing the memory capacity of each. The workloads we run for our clients are historically memory-intensive, so we can now run more VMs by using this additional memory capacity while paying less for the smaller number of cores (and servers) needed to run them. Optimizing servers with faster cores and more RAM yields higher hosting density and a more cost-effective solution, as the VMware license cost stays the same.
  6. Making use of vSAN – The VCF bundle includes 1 TiB of vSAN allocation per core, meaning a typical server with 2 CPUs × 16 cores gives you 32 TiB of vSAN before you have to pay for additional storage. At Ensono, we found that a large percentage of clients fit within this 32 TiB allocation for many of their deployed workloads, meaning a switch to vSAN can save significant costs on unnecessary attached storage arrays.
  7. Balance your vCPU-to-core allocation – Oversubscription is a capability virtualization has provided for many years: you can assign more virtual CPUs than you have physical cores. For example, test/dev workloads might run at 8 vCPUs per core (8:1), while production workloads, where performance is paramount, might run at 2:1. This matters when you mix test/dev with production on a single cluster: you may be running the whole cluster at 2:1 when your test/dev workloads would be fine at 8:1. We actively use this in our client designs to place workloads in the most optimal clusters and reduce the overall number of cores needed to run the VMs.
  8. Evaluate the sprawl of software-defined networking – In the previous model, advanced NSX features (such as firewalls, firewalls with Advanced Threat Protection, and gateway firewalls) could be enabled within a cluster in a granular way. That is no longer an option: in the new design, any NSX add-ons you use are enabled (and licensed) across all the cores in the cluster. At Ensono we leverage NSX widely within our network stack, so we have made significant changes to our architecture to optimize for this new paradigm.
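Several of the levers above (the 16-core minimum, the included vSAN allocation, and vCPU oversubscription) reduce to simple arithmetic. A minimal back-of-envelope sketch, using the 1 TiB-per-core vSAN allowance described in item 6 and illustrative oversubscription ratios (the ratios are examples, not recommendations):

```python
# Sketch: back-of-envelope cluster sizing under the core-based model.
# The 16-core minimum and 1 TiB of vSAN per core follow the licensing
# terms described above; the oversubscription ratios are illustrative.

def host_licensed_cores(cpus: int, cores_per_cpu: int) -> int:
    """Licensed cores for one host (16-core minimum per CPU)."""
    return cpus * max(cores_per_cpu, 16)

def included_vsan_tib(licensed: int) -> int:
    """VCF includes 1 TiB of vSAN capacity per licensed core."""
    return licensed * 1

def vcpu_capacity(physical_cores: int, ratio: float) -> int:
    """vCPUs a set of cores can host at a given oversubscription ratio."""
    return int(physical_cores * ratio)

host = host_licensed_cores(cpus=2, cores_per_cpu=16)  # 32 licensed cores
print(included_vsan_tib(host))    # 32 TiB before paid storage add-ons
# Production at 2:1 versus test/dev at 8:1 on the same 32 cores:
print(vcpu_capacity(host, 2.0))   # 64 vCPUs
print(vcpu_capacity(host, 8.0))   # 256 vCPUs
```

Separating a test/dev cluster at 8:1 from a production cluster at 2:1 can cut the cores needed for the same test/dev vCPU demand by a factor of four, which is the placement optimization described in item 7.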

I know this all sounds a bit overwhelming, but you don’t have to go it alone. Ensono can help: if you would like to discuss your current situation with our architects, we would love to assess your environment and offer options for improving your usage of the platform and reducing your costs.
