Understanding volume migration on OpenStack: Intracluster volume migration

In part two of this series, we will look at an example scenario where a volume is migrated between backends that lie on the same cluster. If you haven’t been through part 1 yet, I recommend reading it first for a general overview of Cinder volume migration. This post examines the configuration needed to perform an intracluster migration of Cinder volumes and the procedure to follow.

I have used an OpenStack Queens deployment with an ONTAP cluster functioning as the storage backend. Here are my backend stanzas in cinder.conf:
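
A representative sketch of the two stanzas is shown here; the cluster management address, port, and credentials are placeholders, while the backend names, SVMs, and FlexVol exports match the setup described in this post.

    [DEFAULT]
    enabled_backends = ontap-nfs,ontap-iscsi

    [ontap-nfs]
    # NFS backend on Cluster 1, provisioning on FlexVols exported by the 'openstack' SVM
    volume_backend_name = ontap-nfs
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = nfs
    # Placeholder management LIF, port, and credentials
    netapp_server_hostname = 192.168.0.101
    netapp_server_port = 80
    netapp_login = admin
    netapp_password = <password>
    netapp_vserver = openstack
    # File listing the NFS exports, e.g. 192.168.0.131:/cinder_1 and 192.168.0.131:/cinder_2
    nfs_shares_config = /etc/cinder/nfs_shares

    [ontap-iscsi]
    # iSCSI backend on Cluster 1, using the 'openstack_iscsi' SVM
    volume_backend_name = ontap-iscsi
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = iscsi
    netapp_server_hostname = 192.168.0.101
    netapp_server_port = 80
    netapp_login = admin
    netapp_password = <password>
    netapp_vserver = openstack_iscsi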

There are 2 backends defined:

  • ontap-nfs: NFS backend defined on Cluster 1, using the cinder FlexVol on the openstack SVM.
  • ontap-iscsi: iSCSI backend defined on Cluster 1, using the openstack_iscsi SVM.

The list of pools available for volume creation is provided below.
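
For this configuration, cinder get-pools reports output along the following lines; the iSCSI pool name is shown as a placeholder, since the NetApp iSCSI driver exposes one pool per FlexVol on the SVM.

    $ cinder get-pools
    +----------+--------------------------------------------+
    | Property | Value                                      |
    +----------+--------------------------------------------+
    | name     | cluster1@ontap-nfs#192.168.0.131:/cinder_1 |
    +----------+--------------------------------------------+
    +----------+--------------------------------------------+
    | Property | Value                                      |
    +----------+--------------------------------------------+
    | name     | cluster1@ontap-nfs#192.168.0.131:/cinder_2 |
    +----------+--------------------------------------------+
    +----------+--------------------------------------------+
    | Property | Value                                      |
    +----------+--------------------------------------------+
    | name     | cluster1@ontap-iscsi#<iscsi_flexvol>       |
    +----------+--------------------------------------------+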

In addition, the following Cinder volume types have also been created:
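
A sketch of how two such types are typically created and tied to their respective backends through the volume_backend_name extra spec is shown below; the ontap-iscsi type name and the IDs are placeholders for illustration.

    # Illustrative commands; the ontap-iscsi type name is assumed
    $ cinder type-create ontap-nfs
    $ cinder type-key ontap-nfs set volume_backend_name=ontap-nfs
    $ cinder type-create ontap-iscsi
    $ cinder type-key ontap-iscsi set volume_backend_name=ontap-iscsi
    $ cinder extra-specs-list
    +--------+-------------+------------------------------------------+
    | ID     | Name        | extra_specs                              |
    +--------+-------------+------------------------------------------+
    | <uuid> | ontap-nfs   | {'volume_backend_name': 'ontap-nfs'}     |
    | <uuid> | ontap-iscsi | {'volume_backend_name': 'ontap-iscsi'}   |
    +--------+-------------+------------------------------------------+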

As you can see, there are 2 volume types created, each of which creates its volumes on a specific backend.

Let us now create a volume of the ontap-nfs volume type, named v1 and of size 1G.
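
With the cinder client, this boils down to a one-line command along these lines:

    $ cinder create --volume-type ontap-nfs --name v1 1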

Once the volume has been created and is in the ‘available’ status, let’s take a closer look at v1’s details.
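
Trimmed down to the fields relevant to this discussion, and with the UUID shown as a placeholder, the output of cinder show v1 looks something like this:

    $ cinder show v1
    +-----------------------+--------------------------------------------+
    | Property              | Value                                      |
    +-----------------------+--------------------------------------------+
    | id                    | <uuid>                                     |
    | name                  | v1                                         |
    | size                  | 1                                          |
    | status                | available                                  |
    | volume_type           | ontap-nfs                                  |
    | os-vol-host-attr:host | cluster1@ontap-nfs#192.168.0.131:/cinder_1 |
    | migration_status      | None                                       |
    +-----------------------+--------------------------------------------+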

It can be seen that the volume v1 maps to the backend pool associated with the ontap-nfs backend [cluster1@ontap-nfs#192.168.0.131:/cinder_1].

Let us assume that, well after the volume was created, the OpenStack admin desires a higher degree of performance for the volume, which the disks that back the cinder_2 pool can offer. Problem? Not really. This is where Cinder migration comes into the picture. The existing ‘v1’ volume can be migrated to the cluster1@ontap-nfs#192.168.0.131:/cinder_2 host.

The volume migration is initiated by issuing the cinder migrate command.
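
A minimal form of the invocation, reconstructed from the arguments broken down below, would be:

    $ cinder migrate v1 cluster1@ontap-nfs#192.168.0.131:/cinder_2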

Let’s break this down and understand the arguments that are used:

  • v1: the name of the volume in question
  • cluster1@ontap-nfs#192.168.0.131:/cinder_2: the destination backend pool

The sequence of steps triggered by this command looks something like this:

  • A new volume is created on the destination backend pool, with the same name ‘v1’, the same volume type ‘ontap-nfs’, and a different UUID.
  • Cinder copies the data from the source volume to the newly created volume.
  • Once the data has been copied, Cinder marks the source volume for deletion.
  • After the source volume is deleted, the UUID mappings are updated so that the newly created volume retains the source volume’s UUID.

To confirm the migration has been successfully completed, let’s take a look at v1’s details.
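
Again trimmed to the relevant fields, and with the UUID shown as a placeholder, cinder show v1 now looks something like this:

    $ cinder show v1
    +-----------------------+--------------------------------------------+
    | Property              | Value                                      |
    +-----------------------+--------------------------------------------+
    | id                    | <uuid>                                     |
    | name                  | v1                                         |
    | status                | available                                  |
    | volume_type           | ontap-nfs                                  |
    | os-vol-host-attr:host | cluster1@ontap-nfs#192.168.0.131:/cinder_2 |
    | migration_status      | success                                    |
    +-----------------------+--------------------------------------------+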

It can be seen from above that the ‘host’ attribute for ‘v1’ now reflects the cinder_2 backend pool. The ‘migration_status’ indicates that the migration was successful.

But what if the volume is attached? And what if the requirement is to migrate an attached volume from an NFS backend to an iSCSI backend? That is also possible! This video demonstrates the migration of a volume that was created on an NFS backend, attached to a compute instance, and then migrated to an iSCSI backend. Since this requires a conversion of the Cinder volume’s type, the migration was initiated with a cinder retype, as seen in the video.
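
For reference, a retype that triggers a cross-backend migration is issued with the on-demand migration policy; assuming the iSCSI-backed volume type is named ontap-iscsi, the command would look something like this:

    # Assumed type name 'ontap-iscsi'; on-demand allows data movement across backends
    $ cinder retype --migration-policy on-demand v1 ontap-iscsi

The on-demand policy is what permits the retype to move the volume to a different backend when the new type cannot be satisfied in place.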

 

Stay tuned for our upcoming post on intercluster migration, where migration across clusters will be discussed in detail.

Visiting OpenStack Summit Berlin? Come talk to us! Learn more about what NetApp has to offer, and as always, we would love to hear from you. Follow NetApp’s continuing work in the Open Source ecosystem at netapp.io.

 

Part 1: Understanding Volume Migration on OpenStack

Bala RameshBabu
