Many enterprise organisations with SAP applications are planning to, or are in the process of, migrating these onto public cloud – and they have plenty of reasons to do so. On a purely functional level, support for SAP ERP or SAP ECC software running on common relational databases like IBM Db2, Microsoft SQL Server, and Oracle is due to end in 2030. At that point, all companies running SAP ERP software will need to migrate to SAP S/4HANA, the newest SAP ERP suite.
Businesses are also motivated by the strategic and business benefits they can expect to gain from public cloud. Cloud offers flexibility, scalability and productivity, as well as the ability to minimise operating costs, innovate rapidly, and increase agility.
Microsoft Azure, which provides platform capabilities for a wide range of SAP applications, has proven a particularly popular option. But once organisations make the move, what can they expect to be different at a technical level?
For Highly Available (HA) installations it may be helpful to think about the changes in terms of Distributed NetWeaver’s four key components:

SAP Central Services (ASCS/SCS), with the message and enqueue services
Shared file systems
The database
The SAP application servers

Let’s explore each, individually, below:
On Azure, the clustering setup is in some ways simpler, involving fewer components. However, the main difference compared with on-premises is that on Azure you can’t use all of the cluster functionality. The failover mechanism doesn’t automatically take over the virtual IP address, so you’ll need a load balancer pointing to the active cluster node: a Standard load balancer (required for a deployment across two Availability Zones) or a Basic load balancer (for an Availability Set deployment).
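As a rough sketch, the load balancer in front of the cluster can be created with the Azure CLI. All resource names, the probe port, and the network details below are placeholders for illustration, not taken from any specific deployment:

```shell
# Sketch only: an internal Standard load balancer for the clustered
# central services. Names (rg-sap, lb-ascs, vnet-sap, ...) are placeholders.
az network lb create \
  --resource-group rg-sap \
  --name lb-ascs \
  --sku Standard \
  --vnet-name vnet-sap \
  --subnet snet-sap \
  --frontend-ip-name fe-ascs \
  --backend-pool-name be-ascs

# Health probe on a port the cluster software keeps open only on the
# active node, so traffic follows a failover
az network lb probe create \
  --resource-group rg-sap \
  --lb-name lb-ascs \
  --name probe-ascs \
  --protocol Tcp \
  --port 62500

# HA-ports rule (protocol All, ports 0/0) with floating IP, so all SAP
# ports are forwarded to whichever node currently passes the probe
az network lb rule create \
  --resource-group rg-sap \
  --lb-name lb-ascs \
  --name rule-ascs \
  --protocol All \
  --frontend-port 0 \
  --backend-port 0 \
  --frontend-ip-name fe-ascs \
  --backend-pool-name be-ascs \
  --probe-name probe-ascs \
  --floating-ip true
```

The health probe is what replaces the on-premises cluster’s automatic IP takeover: the cluster opens the probe port on the active node, and the load balancer steers traffic accordingly.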
Message and enqueue services are broadly similar.
On-premises businesses will be familiar with normal shares in clusters that move between nodes when they fail over. However, in Azure it’s currently not possible (although this may change in the coming months) to share a disk between two servers: you either have to move your share somewhere else entirely, or use local disks that are kept in sync by data replication, so that changes are applied to both nodes simultaneously.
On Windows, an application like SIOS DataKeeper or Scale-Out File Server (SOFS) can achieve this with real-time, block-level replication that mirrors disks between servers. In a Linux environment, an NFS cluster can perform a similar function to support high-availability services.
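On the Linux side, the NFS cluster ultimately just exports a directory that the SAP hosts mount. A minimal sketch of that export, with placeholder paths, addresses and options:

```shell
# Sketch only: exporting a directory for SAP from a Linux NFS cluster node.
# The path, network range and virtual IP below are placeholders; in a real
# cluster, the export and virtual IP would be managed as cluster resources.
cat <<'EOF' >> /etc/exports
/export/sapmnt  10.0.0.0/24(rw,no_root_squash,sync)
EOF

# Re-read the exports table so the new entry takes effect
exportfs -ra

# The SAP hosts would then mount the share via the cluster's virtual IP:
# mount -t nfs 10.0.0.10:/export/sapmnt /sapmnt
```

The replication layer underneath (keeping the exported disk in sync between the two NFS nodes) is what stands in for the shared cluster disk you would have used on-premises.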
Different options have become available more recently, such as Azure NetApp Files. This can be used as a shared disk subsystem in various scenarios, using a URL which points to the root of the Azure NetApp Files namespace. Though easy to set up, it tends to be expensive for these purposes, since you have to provision at least 4TB. Other options, in time, could be Azure Files (in private preview) or Azure shared disks – though these haven’t been released for SAP as yet.
On-premises, it’s possible to have one shared disk which would be clustered and would fail over between the two nodes. But on Azure, as mentioned earlier, it’s not possible to share disks between two nodes. This means you have to use the database’s native replication technology to replicate data automatically. In Azure, this involves setting up two instances of your database: one defined as the primary, feeding your SAP system, and the other, connected synchronously, acting as the secondary. In the event of a failure on the primary database, the system fails over to the secondary.
The appropriate database native replication tool will depend on what database type you’re using. Common examples include:
HANA System Replication (HSR)
Oracle Data Guard
MS SQL Always On Availability Groups
Sybase Replication Server
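Taking HANA System Replication as an example, setting up the primary–secondary pair is a matter of a few administration commands. The hostnames, site names and instance number below are placeholders; the commands are run as the `<sid>adm` user:

```shell
# Sketch only: enabling HANA System Replication (HSR) between two nodes.
# SITE_A/SITE_B, hana-node-a and instance 00 are placeholder values.

# On the primary node, enable replication:
hdbnsutil -sr_enable --name=SITE_A

# On the secondary node, stop HANA, register against the primary
# (synchronous mode, as described above), then start it again:
HDB stop
hdbnsutil -sr_register --remoteHost=hana-node-a --remoteInstance=00 \
  --replicationMode=sync --operationMode=logreplay --name=SITE_B
HDB start

# Check the replication state on either node:
hdbnsutil -sr_state
```

A cluster manager (such as Pacemaker) would then monitor this pair and trigger the failover to the secondary automatically.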
To use an on-premises server most efficiently, businesses favour high utilisation of each system. On Azure, the optimal approach tends to be the exact opposite: scaling out. The more application servers a business has, the smaller the impact of any one server failing. Ultimately, scaling out results in a more resilient infrastructure.
Azure’s linear pricing model means that you won’t be penalised for taking this approach. Financially, there’s simply no incentive to pack too many workloads onto one VM.
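The resilience argument is easy to put in numbers. A small illustration (the fleet sizes are arbitrary, assumed figures) of the share of application-server capacity lost when a single server fails:

```shell
# Rough illustration: percentage of capacity lost when one application
# server in a fleet of N fails. The fleet sizes below are just examples.
capacity_lost_pct() {
  # integer percent of total capacity provided by one server out of $1
  echo $(( 100 / $1 ))
}

capacity_lost_pct 2   # two large VMs: one failure removes 50% of capacity
capacity_lost_pct 5   # five smaller VMs: one failure removes only 20%
```

Because Azure pricing scales roughly linearly with VM size, five small application servers cost about the same as two large ones of equivalent total capacity – but tolerate a failure far more gracefully.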