Moving demanding enterprise applications to the cloud often exposes complex challenges. Whether you're working around performance and latency limitations or building your own tools to protect critical data, there are serious gaps to contend with. Do you have the time to close these gaps yourself? What will it cost to operate going forward?

Take SAP HANA in the cloud as an example. Backing up your database can be resource-intensive and difficult to fit inside your backup window. I address this challenge with a script that integrates HANA backups with the Azure NetApp Files (ANF) storage service, which is certified for production HANA databases and offers fast, built-in snapshots.

The recently published sample code coordinates taking a fast storage snapshot with your database application. You can take the Apache 2.0-licensed, open-source Python code and integrate it into your cloud automation: use it as written for an SAP HANA deployment, or replace the HANA-specific functions with quiesce and unquiesce calls for your own application.
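As a rough sketch of that integration pattern, the idea is to quiesce the application, take the storage snapshot, and always unquiesce afterward, even if the snapshot fails. The function names below are illustrative placeholders, not the script's actual API:

```python
def quiesce(app):
    """Illustrative placeholder: put the application into a consistent,
    backup-safe state (for example, flush buffers and suspend writes)."""
    app["quiesced"] = True

def unquiesce(app):
    """Illustrative placeholder: resume normal application I/O."""
    app["quiesced"] = False

def snapshot_with_quiesce(app, take_snapshot):
    """Quiesce, take the storage snapshot, then always unquiesce,
    even if the snapshot call raises."""
    quiesce(app)
    try:
        return take_snapshot()
    finally:
        unquiesce(app)
```

The try/finally is the important part: the application must not be left quiesced if the snapshot step errors out.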

http://github.com/NetApp/ntaphana

Snapshot creation time in ANF doesn't grow with data size, so snapshots are always near-instantaneous. ntaphana_azure.py lets you open an SAP HANA snapshot backup, take a storage snapshot, and then close the backup, adding it to the SAP HANA backup catalog, all with one command:

vm-p01:/mnt/software/script # ./ntaphana_azure.py --hana-backup --backup-name testHanaBackup --verbose
Loading configuration from 'config.json'
Preparing to create snapshot of: P01-data-mnt00001-clone, P01-log-mnt00001, P01-shared
Found volume 'ANF-West-Europe/ANF-WE-01/P01-data-mnt00001-clone'
Found volume 'ANF-West-Europe/ANF-WE-01/P01-log-mnt00001'
Found volume 'ANF-West-Europe/ANF-WE-01/P01-shared'
calling: su - p01adm -c hdbsql -U SYSTEM BACKUP DATA FOR FULL SYSTEM CREATE SNAPSHOT COMMENT "'testHanaBackup'"
0 rows affected (overall time 952.826 msec; server time 951.245 msec)

Created snapshot 'testHanaBackup' in 10.040314 seconds
calling: su - p01adm -c hdbsql -U SYSTEM SELECT BACKUP_ID FROM M_BACKUP_CATALOG WHERE ENTRY_TYPE_NAME = "'data snapshot'" AND STATE_NAME = "'prepared'"
BACKUP_ID
1596001564919
1 row selected (overall time 16.858 msec; server time 228 usec)


calling: su - p01adm -c hdbsql -U SYSTEM BACKUP DATA FOR FULL SYSTEM CLOSE SNAPSHOT BACKUP_ID 1596001564919 SUCCESSFUL "'testHanaBackup'"
0 rows affected (overall time 673.768 msec; server time 671.959 msec)
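As the verbose output shows, the script wraps the storage snapshot in three hdbsql steps: prepare a HANA snapshot backup, read its BACKUP_ID from the catalog, and close the backup as successful. The helpers below only rebuild those SQL strings for illustration; the real script executes them via hdbsql as the administrative user:

```python
def hana_snapshot_sql(comment):
    """Step 1 (before the storage snapshot): prepare a HANA snapshot backup."""
    return f"BACKUP DATA FOR FULL SYSTEM CREATE SNAPSHOT COMMENT '{comment}'"

def hana_backup_id_sql():
    """Step 2: find the BACKUP_ID of the prepared snapshot in the catalog."""
    return ("SELECT BACKUP_ID FROM M_BACKUP_CATALOG "
            "WHERE ENTRY_TYPE_NAME = 'data snapshot' "
            "AND STATE_NAME = 'prepared'")

def hana_close_sql(backup_id, comment):
    """Step 3 (after the storage snapshot): mark the backup successful."""
    return (f"BACKUP DATA FOR FULL SYSTEM CLOSE SNAPSHOT "
            f"BACKUP_ID {backup_id} SUCCESSFUL '{comment}'")
```

Closing the backup is what records it in the SAP HANA backup catalog, so standard tooling can see and manage it like any other backup.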

Once you have a snapshot, you can provision a new volume from it:

vm-p01:/mnt/software/script # ./ntaphana_azure.py --clone --cloud-volume P01-data-mnt00001 --snapshot testHanaBackup --volume-name testHanaClone --verbose
Loading configuration from 'config.json'
Found volume 'ANF-West-Europe/ANF-WE-01/P01-data-mnt00001'
Waiting for clone 'ANF-West-Europe/ANF-WE-01/testHanaClone'
Created clone 'testHanaClone' in 14.531367 seconds

In this example, the data is about 20 GB, but because a clone references the snapshot's existing blocks rather than copying them, there is no lengthy copy operation to wait for.

Other features, such as restoring from a snapshot, listing snapshots, and deleting a snapshot, are also included. Get started today!

About Jim Holl

Jim is a Silicon Valley native and Principal Engineer at NetApp, where he strives to turn everything into a public cloud service.
