Kubernetes is an open source project for orchestrating the deployment, operations, and scaling of containerized applications. Google released it in June 2014, and it was accepted into the Cloud Native Computing Foundation in March 2016. The community surrounding Kubernetes has grown explosively since its release, and it has emerged as one of the leading container orchestration solutions.

A common problem when containerizing applications is what to do with data that needs to persist. By default, data written inside a container is ephemeral: it exists only for the lifetime of the container it was written in. To solve this problem, Kubernetes offers the PersistentVolume subsystem, which abstracts the details of how storage is provided from how it is consumed.

The Kubernetes PersistentVolume API provides several plugins for integrating storage into Kubernetes for containers to consume. In this post, we’ll focus on how to use the NFS plugin with ONTAP. More specifically, we will use a slightly modified version of the NFS example in the Kubernetes source code.

Environment

  • ONTAP – For this post, a single-node clustered Data ONTAP 8.3 simulator was used. The setup and commands are no different from those you would use in a production deployment on real hardware.

  • Kubernetes – Kubernetes 1.2.2 was used in a single-master, single-node configuration running on VirtualBox using Vagrant. For tutorials on how to run Kubernetes in nearly any configuration and on any platform you can imagine, check out the Kubernetes Getting Started guides.

Clustered Data ONTAP Setup

To set up Data ONTAP, follow these steps. If you have an existing storage system, you may not need some or all of them; check with your storage administrator if needed.

  1. Create a Storage Virtual Machine (SVM) to host your NFS volumes
  2. Enable NFS for the SVM created
  3. Create a data LIF for Kubernetes to use
  4. Create an export policy to allow the Kubernetes hosts to connect
  5. Create an NFS volume for Kubernetes to use

Here is an example that follows these steps; a quick mount check from one of the Kubernetes nodes comes after the list.

  • Create a Storage Virtual Machine (SVM) to host your NFS volumes
    VSIM::> vserver create -vserver svm_kube_nfs -subtype default
       -rootvolume svm_kube_nfs_root -aggregate aggr1
       -rootvolume-security-style unix -language C.UTF-8
       -snapshot-policy default
    VSIM::> vserver modify -vserver svm_kube_nfs -aggr-list aggr1
    
  • Enable NFS for the SVM created
    VSIM::> vserver nfs create -vserver svm_kube_nfs -v3 disabled
      -v4.0 enabled -mount-rootonly disabled
    
  • Create a data LIF for Kubernetes to use

    The values specified in this example are specific to an ONTAP simulator. Be sure
    to use values which match your environment.

    VSIM::> network interface create -vserver svm_kube_nfs -lif nfs_data
      -role data -data-protocol nfs -home-node VSIM-01 -home-port e0c
      -address 10.0.207.10 -netmask 255.255.255.0
    
  • Create an export policy to allow the Kubernetes hosts to connect

    In this case, we are allowing any host to connect by specifying `0.0.0.0/0` for `clientmatch`. You are unlikely to want this in production because it is very insecure; instead, restrict `clientmatch` to the subnet of your Kubernetes hosts’ storage network.

    VSIM::> vserver export-policy rule create -vserver svm_kube_nfs
      -policyname default -protocol nfs4 -clientmatch 0.0.0.0/0
      -rorule any -rwrule any
    
  • Create an NFS volume for Kubernetes to use
    VSIM::> volume create -volume kube_nfs_0001 -junction-path /kube_nfs_0001
      -vserver svm_kube_nfs -aggregate aggr1 -size 1GB -type RW
      -unix-permissions ---rwxrwxrwx
    
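Before handing the volume to Kubernetes, it can be worth confirming that the export is reachable from one of the Kubernetes nodes. The commands below are a minimal sanity check, assuming an NFS client (nfs-common on Debian/Ubuntu or nfs-utils on RHEL/CentOS) is installed on the node; the IP address and junction path are the simulator values used above, so substitute your own.

      # Mount the export over NFSv4, write a test file, then clean up
      $ sudo mkdir -p /mnt/kube_nfs_test
      $ sudo mount -t nfs4 10.0.207.10:/kube_nfs_0001 /mnt/kube_nfs_test
      $ sudo touch /mnt/kube_nfs_test/testfile && ls /mnt/kube_nfs_test
      $ sudo umount /mnt/kube_nfs_test && sudo rmdir /mnt/kube_nfs_test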

Kubernetes

  • Now that we have an NFS volume available for the containerized applications, we need to let Kubernetes know about it. To do this, we will create a `PersistentVolume` and a `PersistentVolumeClaim`. Below is the `PersistentVolume` definition; save it as `nfs-pv.yaml`.
      # nfs-pv.yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: kube-nfs-0001  # underscores are not valid in Kubernetes object names
      spec:
        capacity:
          storage: 1Gi
        accessModes:
          - ReadWriteMany
        nfs:
          server: 10.0.207.10  # set this to your data LIF IP address
          path: "/kube_nfs_0001"  # set this to the junction path of your volume
    
  • To allocate the persistent volume to your application, you will need to create a `PersistentVolumeClaim` that uses the `PersistentVolume`. Save this file as `nfs-pvc.yaml`.
      # nfs-pvc.yaml
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: nfs-claim1
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 1Gi
    
  • Now that we have a `PersistentVolume` definition and a `PersistentVolumeClaim` definition, we submit them to Kubernetes so it can create them. We can then confirm that the claim bound to the volume, as shown after the commands.
      $ kubectl create -f nfs-pv.yaml
      $ kubectl create -f nfs-pvc.yaml
    
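Once both objects have been created, the claim should bind to the volume. A quick check is to list them and confirm that each reports a STATUS of Bound:

      $ kubectl get pv    # the persistent volume should report a STATUS of Bound
      $ kubectl get pvc   # nfs-claim1 should also report a STATUS of Bound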

Creating an application which uses persistent storage

At this point, we can spin up a container which uses the PersistentVolumeClaim we just created. To show this in action, let’s continue using the NFS example from the Kubernetes source code.

  • First, we’ll set up a simple backend to update an `index.html` file every 5 to 10 seconds with the current time and the hostname of the pod doing the update. Save the definition below as `nfs-busybox-rc.yaml`.
      # nfs-busybox-rc.yaml
      
      # This mounts the nfs volume claim into /mnt and continuously
      # overwrites /mnt/index.html with the time and hostname of the pod.
      apiVersion: v1
      kind: ReplicationController
      metadata:
        name: nfs-busybox
      spec:
        replicas: 2
        selector:
          name: nfs-busybox
        template:
          metadata:
            labels:
              name: nfs-busybox
          spec:
            containers:
            - image: busybox
              command:
                - sh
                - -c
                - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
              imagePullPolicy: IfNotPresent
              name: busybox
              volumeMounts:
                # name must match the volume name below
                - name: nfs-claim1
                  mountPath: "/mnt"
            volumes:
            - name: nfs-claim1
              persistentVolumeClaim:
                claimName: nfs-claim1
    
  • Create the backend service in Kubernetes.
      $ kubectl create -f nfs-busybox-rc.yaml
    
  • Next, we’ll create a web server that also uses the NFS mount to serve the `index.html` file being generated by the backend. The web server consists of a `ReplicationController` definition and a `Service` definition. The `ReplicationController` defines the container(s) which are associated with the service, their configuration, and how many should exist. Save the `ReplicationController` definition as `nfs-web-rc.yaml`.
      # nfs-web-rc.yaml
      
      # This pod mounts the nfs volume claim into /usr/share/nginx/html and
      # serves a simple web page.
      apiVersion: v1
      kind: ReplicationController
      metadata:
        name: nfs-web
      spec:
        replicas: 2
        selector:
          role: web-frontend
        template:
          metadata:
            labels:
              role: web-frontend
          spec:
            containers:
            - name: web
              image: nginx
              ports:
                - name: web
                  containerPort: 80
              volumeMounts:
                  # name must match the volume name below
                  - name: nfs-claim1
                    mountPath: "/usr/share/nginx/html"
            volumes:
            - name: nfs-claim1
              persistentVolumeClaim:
                claimName: nfs-claim1
    

    Save the service definition as `nfs-web-service.yaml`. The service gives the web server a stable cluster IP that other pods, including the busybox pods we created above, can use to reach it.

      # nfs-web-service.yaml
      kind: Service
      apiVersion: v1
      metadata:
        name: nfs-web
      spec:
        ports:
          - port: 80
        selector:
          role: web-frontend
    
  • Create the web server in Kubernetes.
      $ kubectl create -f nfs-web-rc.yaml
      $ kubectl create -f nfs-web-service.yaml
    
  • Now that everything is set up and running, we can verify that it is working as expected. Using the busybox container we launched earlier, we can make a request to `nginx` to check that the data is being served properly; a loop for watching the data change over time follows the output.
      $ kubectl get pod -lname=nfs-busybox
      NAME                READY     STATUS    RESTARTS   AGE
      nfs-busybox-1u136   1/1       Running   0          1m
      nfs-busybox-gaqxs   1/1       Running   0          1m
      
      $ kubectl get services nfs-web
      NAME      CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
      nfs-web   10.247.85.128           80/TCP    45s
      
      $ kubectl exec nfs-busybox-1u136 -- wget -qO- http://10.247.85.128
      Tue Apr 12 19:56:18 UTC 2016
      nfs-busybox-gaqxs
    
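To watch the `index.html` content change as the busybox pods rewrite it, simply repeat the request. The loop below is a minimal sketch that reuses the pod name and cluster IP from the example output above; substitute the values from your own `kubectl get` output.

      # Repeat the request a few times; the timestamp (and often the hostname)
      # should change roughly every 5 to 10 seconds
      $ for i in 1 2 3; do kubectl exec nfs-busybox-1u136 -- wget -qO- http://10.247.85.128; sleep 10; done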

Summary

The example shows that when we made a request to nginx, the last pod to have updated the index.html file was nfs-busybox-gaqxs at Tue Apr 12 19:56:18 UTC 2016. If we continue to make requests to nginx, we would be able to watch the data get updated every 5-10 seconds.

Using containers and Kubernetes for your application doesn’t change the need for persistent storage. Applications still need to access and process information to be valuable to the business. Using NFS storage is a convenient and easy way to supply capacity for applications using familiar technology.

If you have any questions about Kubernetes, containers, Docker, or NetApp’s integration, please leave a comment or reach out to us at opensource@netapp.com! We would love to hear your thoughts!

Andrew Sullivan
Technical Marketing Engineer at NetApp
Andrew has worked in the information technology industry for over 10 years, with a rich history of database development, DevOps experience, and virtualization. He is currently focused on storage and virtualization automation, and driving simplicity into everyday workflows.
