Using the NetApp Docker Volume Plugin with Docker Compose

Docker Compose allows users to define an application consisting of multiple services, each of which runs as an independent container, using a single definition file. The file describes the services' inter-dependencies, as well as details like their network and storage requirements, in a straightforward manner that's easy to maintain. Taking it a step further, the Compose definition is also the basis for applications deployed to Swarm clusters.

Compose includes the ability to attach volumes to any service with persistent storage requirements. With Compose file version 2, we can create new Docker volumes or re-use existing ones. Working in conjunction with the NetApp Docker Volume Plugin (nDVP), we can back those volumes with NetApp storage.

Composing Applications on a Single Docker Host

Let’s start with the Compose definition from the example WordPress quickstart:
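The quickstart definition looks roughly like this (the image tags, credentials, and the ./mysql_data path below are illustrative, not prescriptive):

```yaml
version: '2'

services:
  db:
    image: mysql:5.7
    volumes:
      # Local bind mount: a subdirectory of the working directory
      - ./mysql_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: examplepass
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      # Expose WordPress on port 8080 of the Docker host
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
```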

Notice that we create two services: a database and the WordPress installation. The database uses a local volume, in this case a subdirectory of the working directory where the application is started.

Start the WordPress application by issuing the docker-compose up command from the same directory as your docker-compose.yml file. You should see output similar to the following:
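For example:

```
# Run from the directory containing docker-compose.yml;
# add -d to run the containers in the background
docker-compose up
```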

[Screenshot ndvp_compose_1: output of docker-compose up]

When the container is started, Docker will create the directory if it doesn't exist and map it into the database container.

At this point you can browse to port 8080 (the port mapped in our docker-compose.yml file) on the Docker host to see the WordPress setup screen. If you like, go through the WordPress configuration to verify that it is a working deployment. Any changes you make will persist to disk on the Docker host, in the directory mapped into the container.

Now think about that storage for a minute. You can stop and restart the container from the same directory on the same host and expect to pick up that storage again, but if anything were to happen to that host, the data would be lost.

That may be acceptable in some cases, but for the most part you want that data to be available from any host, even during development and testing. And that means reliable external storage.

Adding Persistent Storage From NetApp

At this point you will need to have the nDVP configured on your Docker host. Getting set up takes only a couple of minutes; you can find instructions on GitHub. Once the plugin has been configured, we can begin to create and consume NetApp volumes. In this example we will be using the ONTAP driver with NFS; however, the E-Series and SolidFire drivers work equally well.
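For reference, an ONTAP NFS configuration for the plugin (typically placed at /etc/netappdvp/config.json) looks something like the sketch below; the LIF addresses, SVM name, credentials, and aggregate are placeholders that must match your environment:

```json
{
    "version": 1,
    "storageDriverName": "ontap-nas",
    "managementLIF": "10.0.0.1",
    "dataLIF": "10.0.0.2",
    "svm": "svm_nfs",
    "username": "vsadmin",
    "password": "secret",
    "aggregate": "aggr1"
}
```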

To create and use the NetApp volume, we need to modify the Compose file in two places: change the volume that the database container uses, and add a top-level volumes specification at the end.
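The relevant changes might look like this (the volume name dbdata is illustrative; the rest of the service definitions are unchanged):

```yaml
version: '2'

services:
  db:
    image: mysql:5.7
    volumes:
      # Named volume backed by the nDVP instead of a local bind mount
      - dbdata:/var/lib/mysql

volumes:
  dbdata:
    # The driver name must match the alias the plugin was registered with
    driver: netapp
```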

Now, when we start the application, Docker Compose will create the volume using the driver named netapp and attach it at the designated location. Notice that the volume name is prefixed with the project name, which is set by passing the -p projectName option to docker-compose; if no value is provided, Compose uses the folder name.
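For example, with an explicit project name (wpdemo here is an arbitrary example), the volume would be created as wpdemo_dbdata:

```
# The -p option sets the project name, which becomes the volume prefix
docker-compose -p wpdemo up -d
```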

[Screenshot ndvp_compose_2: docker-compose up using the netapp volume driver]

We can inspect the volume details from both the Docker and ONTAP perspectives:
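Assuming the project prefix is ndvp and the volume is named dbdata (both illustrative), the two views can be pulled up like so:

```
# Docker's view of the volume
docker volume inspect ndvp_dbdata

# ONTAP's view, via a ClusterShell command over SSH; the plugin's
# default volume prefix on the array is "netappdvp_"
ssh admin@cluster_mgmt_lif volume show -volume netappdvp_*
```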

[Screenshot ndvp_compose_3: docker volume inspect output]

[Screenshot ndvp_compose_4: ONTAP volume show output]

Inspecting the volume from the Docker host shows that it has been created and mounted under the /var/lib/docker-volumes directory. We can view the volume from the NetApp side using a ClusterShell command via SSH. Remember that the volume on the storage array will have a prefix as well (netappdvp_ by default).

Using an external, enterprise-class storage array for the application data is important for a number of reasons, not the least of which is data protection. Storage arrays are designed for high availability and to ensure the integrity of the data being stored. In the first instantiation of our WordPress application we achieved persistence, but we didn't have high availability and protection until moving to a storage array.

Using Enterprise Storage for Test and Development

Not only do we have enterprise-class reliability and data protection now, we also have powerful new capabilities at our disposal. We can easily move our application from host to host, and each time it will reconnect to the same data volume. Or, we could clone our volume and use it for other purposes. A clone is a lightning-fast, writable copy of an existing volume.

Each of the storage platforms available through the nDVP supports cloning volumes. This example uses the ONTAP driver, so let's clone our volume and introduce the new volume to Docker.
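With the ONTAP driver, a clone can be created through the standard docker volume create interface using the driver's from option (the volume names here are illustrative):

```
# Clone the existing volume; the "from" option tells the ONTAP driver
# to clone the source volume rather than provision a new, empty one
docker volume create -d netapp --name ndvp_dbdata_dev -o from=ndvp_dbdata
```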

Because we use the same volume name (without the prefix), the nDVP will simply discover the existing volume rather than create a new one when the docker-compose up command is executed. Now that we've created a clone of the data, we can take advantage of it like any other volume. We need to update the docker-compose.yml to use the new _dev name for the volume; after that, we can re-instantiate the application.
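The updated volumes section might then read as follows (again with illustrative names); since the clone already exists on the array, the plugin discovers it instead of provisioning a new volume:

```yaml
volumes:
  dbdata_dev:
    driver: netapp
```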

When we start the application this time, it starts up just as if it were the original instance. After all, it's the exact same data.

Your Data – Where You Want, When You Want

The combination of enterprise storage and Docker is extremely powerful. Enabling access to your data from anywhere means that you can develop, test, deploy, and manage your application across any Docker platform available. The nDVP makes connecting your containers to data simple and easy.

If you’re interested in more information about the NetApp Docker Volume Plugin, it is available from our GitHub site. Full details on how to deploy and configure the plugin are available in the documentation. If you have any questions or suggestions, please don’t hesitate to reach out to me using the comments below, the NetApp Communities (my username is asulliva), or using email (my communities username at netapp.com).

Andrew Sullivan on GitHub | Andrew Sullivan on Twitter
Andrew Sullivan
Technical Marketing Engineer at NetApp
Andrew has worked in the information technology industry for over 10 years, with a rich history of database development, DevOps experience, and virtualization. He is currently focused on storage and virtualization automation, and driving simplicity into everyday workflows.
