Understanding volume migration on OpenStack: Intercluster volume migration

Welcome to the final part of this three-part series on Cinder volume migration. So far, we have explored the basics of migrating volumes and the migration of a volume between backends that reside on the same cluster. In this post, we extend the concept to moving volumes between backends that are on different clusters. If you haven’t already, do read Part 1 and Part 2 for the background on Cinder volume migration covered so far.

The system under consideration for this blog post is an OpenStack Queens deployment, with 2 ONTAP clusters serving as backends. The cinder.conf used for this deployment defines the backends described below.
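The original file is not reproduced here; the following is a minimal sketch of what such a configuration could look like. The backend, SVM, and FlexVol names match this deployment, while the management LIF addresses, credentials, and NFS shares file paths are placeholder assumptions.

    [DEFAULT]
    enabled_backends = ontap-nfs,ontap-iscsi,c2-nfs

    [ontap-nfs]
    # NFS backend on cluster1 (openstack SVM, cinder FlexVol)
    volume_backend_name = ontap-nfs
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = nfs
    netapp_vserver = openstack
    # placeholder management LIF and credentials
    netapp_server_hostname = 192.168.0.101
    netapp_login = admin
    netapp_password = secret
    nfs_shares_config = /etc/cinder/nfs_shares_c1

    [ontap-iscsi]
    # iSCSI backend on cluster1 (openstack_iscsi SVM)
    volume_backend_name = ontap-iscsi
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = iscsi
    netapp_vserver = openstack_iscsi
    # placeholder management LIF and credentials
    netapp_server_hostname = 192.168.0.101
    netapp_login = admin
    netapp_password = secret

    [c2-nfs]
    # NFS backend on cluster2 (c2_openstack SVM, cinder FlexVol)
    volume_backend_name = c2-nfs
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = nfs
    netapp_vserver = c2_openstack
    # placeholder management LIF and credentials
    netapp_server_hostname = 192.168.0.201
    netapp_login = admin
    netapp_password = secret
    nfs_shares_config = /etc/cinder/nfs_shares_c2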

There are 3 backends defined:

  • ontap-nfs: NFS backend defined on cluster1, using the cinder FlexVol on the openstack SVM.
  • ontap-iscsi: iSCSI backend defined on cluster1, using the openstack_iscsi SVM.
  • c2-nfs: NFS backend defined on cluster2, using the cinder FlexVol on the c2_openstack SVM.

In addition, the following Cinder volume types have also been created:
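The type definitions themselves are not reproduced here; as a sketch, such types are typically created with the volume_backend_name extra spec pointing at the corresponding backend. The type names below are the ones used in this post; the commands are illustrative.

    $ openstack volume type create --property volume_backend_name=ontap-nfs ontap-nfs
    $ openstack volume type create --property volume_backend_name=ontap-iscsi ontap-iscsi
    $ openstack volume type create --property volume_backend_name=c2-nfs c2-nfs
    $ openstack volume type list --long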

As you can see, there are 3 volume types created, each of which creates its volumes on a specific backend.

Let us now create a volume of the ontap-nfs volume type, named v1 and of size 1G.
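With the standard OpenStack client, the command would look something like this:

    $ openstack volume create --type ontap-nfs --size 1 v1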

Once the volume has been created and is in the ‘available’ status, let’s take a closer look at v1’s details.
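The full output is omitted here; from an admin context, the fields of interest look roughly like this (the host value is the one discussed below):

    $ openstack volume show v1 -c os-vol-host-attr:host -c size -c status
    +-----------------------+--------------------------------------------+
    | Field                 | Value                                      |
    +-----------------------+--------------------------------------------+
    | os-vol-host-attr:host | cluster1@ontap-nfs#192.168.0.131:/cinder_1 |
    | size                  | 1                                          |
    | status                | available                                  |
    +-----------------------+--------------------------------------------+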

The volume v1 maps to the backend pool associated with the ontap-nfs backend [cluster1@ontap-nfs#192.168.0.131:/cinder_1].

In Part 2 of this series, we saw the migration of a volume from the cinder_1 FlexVol on cluster1 to the cinder_2 FlexVol on the same cluster. Now, we are migrating a volume to a backend that is on a different cluster. We have already seen that the c2-nfs backend is present on another ONTAP cluster (named cluster2). The objective is to migrate the v1 volume from a backend on cluster1 to a backend on cluster2. Since the c2-nfs backend is associated with a different volume type (creatively named c2-nfs), a retype of the v1 volume must be initiated. Essentially, this instructs Cinder to create a volume of type c2-nfs and copy the data from the source volume.

The volume migration is initiated by issuing the command:
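This maps to the cinder client’s retype operation (the equivalent with the openstack client is ‘openstack volume set --retype-policy on-demand --type c2-nfs v1’):

    $ cinder retype --migration-policy on-demand v1 c2-nfs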

The command is composed of the following arguments:

  • migration-policy: this flag can be set to one of two values: on-demand, if the retype should trigger a migration across backends, or never (the default), if the retype should not result in a migration.
  • v1: the name of the source volume.
  • c2-nfs: the desired Cinder volume type that the source volume is to be converted into.

The sequence of steps triggered by this command looks something like this:

  • A new volume is created on the destination backend pool, with the same name ‘v1’, volume type ‘c2-nfs’, and a different UUID.
  • Cinder copies the data from the source volume to the newly created volume.
  • Once the data has been copied, Cinder marks the source volume for deletion.
  • After the source volume is deleted, the UUID mappings are updated so that the newly created volume retains the source volume’s UUID.
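While these steps are in flight, the progress can be watched from an admin shell; assuming the cinder client is installed, a simple approach is to poll the migration status and host fields, which typically move through values such as ‘migrating’ and ‘completing’ before settling on ‘success’:

    $ watch -n 5 "cinder show v1 | grep -E 'migration_status|os-vol-host-attr:host'"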

To confirm the migration has been successfully completed, let’s look at v1’s details.
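Only the relevant fields are shown below; the destination pool path is a placeholder built from cluster2’s NFS data LIF and the cinder FlexVol on the c2_openstack SVM:

    $ cinder show v1
    ...
    | migration_status       | success                               |
    | os-vol-host-attr:host  | cluster2@c2-nfs#192.168.0.231:/cinder |
    | status                 | available                             |
    | volume_type            | c2-nfs                                |
    ...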

It can be seen from above that the ‘host’ attribute for ‘v1’ now reflects the backend present on cluster2. The ‘migration_status’ indicates that the migration was successful.

It is interesting to observe that the volume does not necessarily have to be in the “available” state before attempting a migration across clusters. This video shows the migration of a Cinder volume that is attached to a compute instance, with an active workload writing data to the volume throughout the migration process. The volume is moved from an Element cluster to an ONTAP cluster.

NetApp will be at OpenStack Summit Berlin and we would love for you to come talk to us and attend our speaker session! You can always reach us at our Slack Channel and visit netapp.io to stay up-to-date with our offerings.

Bala RameshBabu
