NetApp Persistent Storage in Kubernetes: Using ONTAP and NFS

Kubernetes is an open source project for orchestrating the deployment, operation, and scaling of containerized applications. Originally developed at Google, it was open sourced in June 2014 and was accepted into the Cloud Native Computing Foundation in March 2016. The community around the project has grown rapidly since then, and Kubernetes has emerged as one of the leading container deployment solutions.

A common problem when containerizing applications is what to do with data that needs to persist. By default, data written inside a container is ephemeral and exists only for the lifetime of the container it was written in. To solve this problem, Kubernetes offers the PersistentVolume subsystem, which abstracts the details of how storage is provided from how it is consumed.

The Kubernetes PersistentVolume API provides several plugins for integrating storage into Kubernetes for containers to consume. In this post, we’ll focus on how to use the NFS plugin with ONTAP. More specifically, we will use a slightly modified version of the NFS example in the Kubernetes source code.

The environment used for this walkthrough:

  • ONTAP – A single-node clustered Data ONTAP 8.3 simulator was used. The setup and commands are no different from what would be used in a production setup on real hardware.

  • Kubernetes – Kubernetes 1.2.2 was used in a single-master, single-node setup running on VirtualBox using Vagrant. For tutorials on running Kubernetes in nearly any configuration and on any platform you can imagine, check out the Kubernetes Getting Started guides.

Clustered Data ONTAP Setup

To set up Data ONTAP, follow these steps. If you have an existing storage system, some or all of these may already be done; check with your storage administrator if you are unsure.

  1. Create a Storage Virtual Machine (SVM) to host your NFS volumes
  2. Enable NFS for the SVM created
  3. Create a data LIF for Kubernetes to use
  4. Create an export policy to allow the Kubernetes hosts to connect
  5. Create an NFS volume for Kubernetes to use


Here is an example that follows these steps:

  • Create a Storage Virtual Machine (SVM) to host your NFS volumes
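    A minimal sketch of the SVM creation from the clustershell. The SVM name (kube_svm), root volume name, and aggregate (aggr1) are placeholders for this walkthrough; substitute values from your own cluster.

    ```
    cluster1::> vserver create -vserver kube_svm -rootvolume kube_svm_root -aggregate aggr1 -rootvolume-security-style unix
    ```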

  • Enable NFS for the SVM created
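    Enabling the NFS server on the SVM can be as simple as the following, here allowing NFSv3 access (adjust the protocol versions to match what your Kubernetes hosts will mount with):

    ```
    cluster1::> vserver nfs create -vserver kube_svm -v3 enabled
    ```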

  • Create a data LIF for Kubernetes to use
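    A sketch of the data LIF creation. The home node, port, and IP address shown here are illustrative placeholders:

    ```
    cluster1::> network interface create -vserver kube_svm -lif kube_nfs_lif -role data -data-protocol nfs -home-node cluster1-01 -home-port e0c -address 192.168.10.10 -netmask 255.255.255.0
    ```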

    The values specified in this example are specific to an ONTAP simulator. Be sure
    to use values which match your environment.

  • Create an export policy to allow the Kubernetes hosts to connect
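    A sketch of an export-policy rule added to the SVM's default policy. The clientmatch of 0.0.0.0/0 matches any host, which is convenient for a lab but not something you would want in production:

    ```
    cluster1::> vserver export-policy rule create -vserver kube_svm -policyname default -clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any -protocol nfs
    ```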

    In this case, we are allowing any host to connect by using a clientmatch value that matches all clients. You are unlikely to want this in production because it is very insecure; instead, restrict clientmatch to the subnet of your Kubernetes hosts' storage network.

  • Create an NFS volume for Kubernetes to use
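    A sketch of the volume creation. The volume name, size, and junction path are placeholders; the junction path is what Kubernetes will later mount over NFS:

    ```
    cluster1::> volume create -vserver kube_svm -volume kube_vol -aggregate aggr1 -size 10GB -junction-path /kube_vol -policy default -security-style unix
    ```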


  • Now that we have an NFS volume available for the containerized applications, we need to let Kubernetes know about it. To do this, we will create a PersistentVolume and a PersistentVolumeClaim. Below is the PersistentVolume definition; save it as nfs-pv.yaml.
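    This definition follows the upstream Kubernetes NFS example, modified so that the server address is the data LIF created earlier and the path is the volume's junction path (both placeholder values here; the capacity is illustrative as well):

    ```yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      nfs:
        # The IP of the data LIF on the SVM (placeholder)
        server: 192.168.10.10
        # The junction path of the NFS volume (placeholder)
        path: "/kube_vol"
    ```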

  • To allocate the persistent volume to your application, you will need to create a PersistentVolumeClaim that uses the PersistentVolume. Save this file as nfs-pvc.yaml.
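    The claim below, again following the upstream NFS example, requests storage matching the PersistentVolume's access mode and capacity:

    ```yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nfs
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
    ```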

  • Now that we have a PersistentVolume definition and a PersistentVolumeClaim definition, we submit them to Kubernetes so it can create them.
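    Assuming the files were saved with the names above, the two objects can be created and then verified (the claim should show a status of Bound once it has matched the volume):

    ```
    $ kubectl create -f nfs-pv.yaml
    $ kubectl create -f nfs-pvc.yaml
    $ kubectl get pv,pvc
    ```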

Creating an application which uses persistent storage

At this point, we can spin up a container which uses the PersistentVolumeClaim we just created. To show this in action, let’s continue using the NFS example from the Kubernetes source code.

  • First, we’ll set up a simple backend that updates an index.html file every 5 to 10 seconds with the current time and the hostname of the pod performing the update. Using the code below, save the backend as nfs-busybox-rc.yaml.
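    This ReplicationController follows the upstream Kubernetes NFS example: two busybox pods mount the claim at /mnt and rewrite index.html in a loop:

    ```yaml
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: nfs-busybox
    spec:
      replicas: 2
      selector:
        name: nfs-busybox
      template:
        metadata:
          labels:
            name: nfs-busybox
        spec:
          containers:
          - name: busybox
            image: busybox
            imagePullPolicy: IfNotPresent
            # Rewrite index.html with the date and this pod's hostname
            # every 5-10 seconds
            command:
              - sh
              - -c
              - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
            volumeMounts:
              - name: nfs
                mountPath: "/mnt"
          volumes:
          - name: nfs
            persistentVolumeClaim:
              claimName: nfs
    ```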

  • Create the backend service in Kubernetes.
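    As before, this is a single kubectl command:

    ```
    $ kubectl create -f nfs-busybox-rc.yaml
    ```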

  • Next, we’ll create a web server that also uses the NFS mount to serve the index.html file being generated by the backend. The web server consists of a ReplicationController definition and a Service definition. The ReplicationController defines the container(s) which are associated with the service, their configuration, and how many should exist. Save the ReplicationController definition as nfs-web-rc.yaml.
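    This definition, also from the upstream NFS example, runs two nginx pods that serve content from the same claim, mounted at nginx's default document root:

    ```yaml
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: nfs-web
    spec:
      replicas: 2
      selector:
        role: web-frontend
      template:
        metadata:
          labels:
            role: web-frontend
        spec:
          containers:
          - name: web
            image: nginx
            ports:
              - name: web
                containerPort: 80
            # Serve the index.html written by the busybox pods
            volumeMounts:
              - name: nfs
                mountPath: "/usr/share/nginx/html"
          volumes:
          - name: nfs
            persistentVolumeClaim:
              claimName: nfs
    ```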

    Save the service definition as nfs-web-service.yaml. The service is what enables access to the web server externally.
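    The service definition selects the web-frontend pods and exposes port 80:

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: nfs-web
    spec:
      ports:
        - port: 80
      selector:
        role: web-frontend
    ```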

  • Create the web server in Kubernetes.
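    Both the ReplicationController and the Service can be submitted together:

    ```
    $ kubectl create -f nfs-web-rc.yaml
    $ kubectl create -f nfs-web-service.yaml
    ```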

  • Now that everything is set up and running, we can verify that it is working as expected. Using the busybox container we launched earlier, we can make a request to nginx to check that the data is being served properly.
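    One way to do this is to exec into one of the busybox pods and fetch the page. The pod name below is a placeholder for one of the generated names shown by kubectl get pods, and resolving the service by name assumes cluster DNS is enabled (otherwise, use the cluster IP from kubectl get svc nfs-web):

    ```
    $ kubectl get pods -l name=nfs-busybox
    $ kubectl exec <nfs-busybox-pod-name> -- wget -qO- http://nfs-web
    ```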


The example shows that when we made a request to nginx, the last pod to have updated the index.html file was nfs-busybox-gaqxs at Tue Apr 12 19:56:18 UTC 2016. If we continue to make requests to nginx, we would be able to watch the data get updated every 5-10 seconds.

Using containers and Kubernetes for your application doesn’t change the need for persistent storage. Applications still need to access and process information to be valuable to the business. Using NFS storage is a convenient and easy way to supply capacity for applications using familiar technology.

If you have any questions about Kubernetes, containers, Docker, or NetApp’s integration, please leave a comment. We would love to hear your thoughts!

Andrew Sullivan
Technical Marketing Engineer at NetApp
Andrew has worked in the information technology industry for over 10 years, with a rich history of database development, DevOps experience, and virtualization. He is currently focused on storage and virtualization automation, and driving simplicity into everyday workflows.
