I recently had a discussion with a NetApp customer about the possibility of attaching the same ReadWriteMany (RWX) PVC, created by NetApp Astra Trident, to multiple Kubernetes (K8s) clusters. This could be for ongoing access to unstructured data that is shared across pods residing in multiple K8s clusters, or for gradually moving an application between K8s clusters while maintaining access to the unstructured data throughout the migration.

In any case, this is something NetApp can do, and has done for years with our scale-out NAS solutions! The fact is that not everyone will completely refactor every aspect of an application as they move into containers, or as they move to the public cloud in general. NetApp gives you the flexibility to manage storage both for current cloud-native applications and for applications that have not been fully refactored. In fact, NetApp is a platinum member of the CNCF, so head over to cloud.netapp.com if you are not familiar with what NetApp has brought to the market more recently.

This article shows you the details and caveats of connecting a single Trident-created PVC to two K8s clusters. Please keep the warnings in mind as you read! Compared to the typical approach of using a PVC in a single K8s cluster, there is a bit of risk here if you are not managing things carefully. Here we go!

Requirements

  • Do not use "autoExportPolicy" and "autoExportCIDRs" in the Trident backend definition. Instead, use an "exportPolicy" in the backend definition that covers the worker nodes in both K8s clusters, and manage the NFS clients in that export policy manually (or use a subnet-level export); see the sketch after this list.
  • Set "reclaimPolicy: Retain" in any storage class whose volumes will be shared between K8s clusters. This means you have to manually remove the K8s PVs and the backing storage volumes when they are no longer needed.
  • Use a unique "storagePrefix" in each Trident backend configuration that uses the shared SVM, and map storage classes to it.
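
For reference, here is a minimal sketch of creating the "k8s" export policy referenced in the backend definition below, straight from the ONTAP CLI. The 192.168.0.0/24 clientmatch is an assumption for my lab network; substitute the subnets (or individual IPs) that cover the worker nodes in both K8s clusters.

cluster1::> vserver export-policy create -vserver nfs_svm -policyname k8s
cluster1::> vserver export-policy rule create -vserver nfs_svm -policyname k8s -clientmatch 192.168.0.0/24 -rorule sys -rwrule sys -superuser sys -protocol nfs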

Sample Trident backend and K8s storage class definition

# cat backend-ontap-nas.json
{
    "version": 1,
    "storageDriverName": "ontap-nas",
    "backendName": "nasvol",
    "managementLIF": "192.168.0.135",
    "dataLIF": "192.168.0.132",
    "storagePrefix": "SHARED-1",
    "svm": "nfs_svm",
    "username": "vsadmin",
    "password": "Netapp1!",
    "defaults": {
        "exportPolicy": "k8s"
    }
}

# cat storage-class-nasvol.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nasvolsc
provisioner: csi.trident.netapp.io
reclaimPolicy: Retain
parameters:
  backendType: "ontap-nas"
  storagePools: "nasvol:.*"
allowVolumeExpansion: true
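
To put these definitions to work, create the backend with tridentctl and the storage class with kubectl. This is a minimal sketch that assumes Trident is installed in the "trident" namespace; repeat it on the second K8s cluster with "storagePrefix": "SHARED-2" in that cluster's backend file.

[root@rhel3 ~]# tridentctl create backend -f backend-ontap-nas.json -n trident
[root@rhel3 ~]# kubectl apply -f storage-class-nasvol.yaml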


Notes

  • Use a unique storagePrefix (such as SHARED-2) on the second K8s cluster's Trident backend in order to keep track of which K8s cluster is managing each PVC.
  • Use "reclaimPolicy: Retain" in the storage classes on both K8s clusters.

Workflow

Basic setup

OK, now that we have the basics covered, let's actually do this. First, I create a simple Alpine Linux Pod with a PVC as normal on the first cluster, using my storage class with "reclaimPolicy: Retain". We now have a Trident-managed PVC connected to the Pod, and I put a simple line of data into a file on it. I also show the storage-side volume name and junction-path.

[root@rhel3 ~]# kubectl apply -f alpine.yaml
[root@rhel3 ~]# kubectl exec alpine-clus1 -- sh -c "df -h /nasvol"
Filesystem                Size      Used Available Use% Mounted on
192.168.0.132:/SHARED_1_pvc_24475fb4_271f_4ad9_92de_6033026d8016
                          1.0G    256.0K   1023.8M   0% /nasvol
[root@rhel3 ~]# kubectl exec alpine-clus1 -- sh -c "echo testing123 > /nasvol/file1"
[root@rhel3 ~]# kubectl exec alpine-clus1 -- sh -c "cat /nasvol/file1"
testing123
[root@rhel3 ~]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
alpine-pvc   Bound    pvc-24475fb4-271f-4ad9-92de-6033026d8016   1Gi        RWX            nasvolsc       13s
[root@rhel3 ~]#

cluster1::> volume show *pvc* -fields junction-path
vserver volume                                            junction-path                                    
------- ------------------------------------------------- --------------------------------------------------
nfs_svm SHARED_1_pvc_24475fb4_271f_4ad9_92de_6033026d8016 /SHARED_1_pvc_24475fb4_271f_4ad9_92de_6033026d8016

cluster1::>


Import with --no-manage

If you plan to use the data on the second K8s cluster, but continue to manage it via the first K8s cluster, then import the volume with the tridentctl --no-manage flag. This leaves the storage system volume as is. The PV name on the second K8s cluster will not match the storage volume name or junction path, but the PV on the first K8s cluster will continue to match both. This is what it looks like:

[root@rhel6 ~]# tridentctl import volume nasvol SHARED_1_pvc_24475fb4_271f_4ad9_92de_6033026d8016 -f alpine-import.yaml -n trident --no-manage
[root@rhel6 ~]# kubectl apply -f alpine.yaml
[root@rhel6 ~]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
alpine-pvc   Bound    pvc-21de96e7-2600-4252-8275-89575692c52f   1Gi        RWX            nasvolsc       44s
[root@rhel6 ~]# kubectl exec alpine-clus2 -- sh -c "df -h /nasvol"
Filesystem                Size      Used Available Use% Mounted on
192.168.0.132:/SHARED_1_pvc_24475fb4_271f_4ad9_92de_6033026d8016
                          1.0G    192.0K   1023.8M   0% /nasvol
[root@rhel6 ~]# kubectl exec alpine-clus2 -- sh -c "cat /nasvol/file1"
testing123
[root@rhel6 ~]#

cluster1::> volume show *pvc* -fields junction-path
vserver volume                                            junction-path                                    
------- ------------------------------------------------- --------------------------------------------------
nfs_svm SHARED_1_pvc_24475fb4_271f_4ad9_92de_6033026d8016 /SHARED_1_pvc_24475fb4_271f_4ad9_92de_6033026d8016

cluster1::>


Import without --no-manage

If you plan to migrate the application to the second K8s cluster, then import the volume without the --no-manage flag. This causes the volume on the storage system to be renamed, while the junction path stays the same. The PV name on the second K8s cluster will now match the storage volume name, but the junction-path still uses the PV name from the first K8s cluster.

[root@rhel6 ~]# tridentctl import volume nasvol SHARED_1_pvc_24475fb4_271f_4ad9_92de_6033026d8016 -f alpine-import.yaml -n trident
[root@rhel6 ~]# kubectl apply -f alpine.yaml
[root@rhel6 ~]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
alpine-pvc   Bound    pvc-a15b6d85-a8c2-43aa-a2f7-eb8b3f1191ee   1Gi        RWX            nasvolsc       23s
[root@rhel6 ~]# kubectl exec alpine-clus2 -- sh -c "df -h /nasvol"
Filesystem                Size      Used Available Use% Mounted on
192.168.0.132:/SHARED_1_pvc_24475fb4_271f_4ad9_92de_6033026d8016
                          1.0G    256.0K   1023.8M   0% /nasvol
[root@rhel6 ~]# kubectl exec alpine-clus2 -- sh -c "cat /nasvol/file1"
testing123
[root@rhel6 ~]#

cluster1::> volume show *pvc* -fields junction-path
vserver volume                                            junction-path                                    
------- ------------------------------------------------- --------------------------------------------------
nfs_svm SHARED_2_pvc_a15b6d85_a8c2_43aa_a2f7_eb8b3f1191ee /SHARED_1_pvc_24475fb4_271f_4ad9_92de_6033026d8016

cluster1::>


Cautions and cleanup

Easy enough, right? One word of warning, though! Deleting a Trident-managed PV with "tridentctl delete volume" will cause the storage volume to be deleted! Only if the PV is not managed by Trident can you run that command to remove the volume from Trident's configuration while keeping it intact on the storage system. The exception is when you have performed a "tridentctl import" on a second K8s cluster without --no-manage, which renames the storage volume. In that case, a "tridentctl delete volume" run on the first K8s cluster simply fails silently, since Trident there is still looking for the original storage volume name. In my case, I do just that:

[root@rhel3 ~]# tridentctl delete volume -n trident pvc-24475fb4-271f-4ad9-92de-6033026d8016
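
Because the storage classes use "reclaimPolicy: Retain", the retained K8s objects and the backing ONTAP volume must eventually be removed by hand. Here is a minimal sketch of that cleanup, assuming no Trident instance manages the volume any longer; the names are from my lab, so substitute your own. Run the kubectl steps on each K8s cluster that still references the volume.

[root@rhel6 ~]# kubectl delete pvc alpine-pvc
[root@rhel6 ~]# kubectl delete pv pvc-a15b6d85-a8c2-43aa-a2f7-eb8b3f1191ee

cluster1::> volume unmount -vserver nfs_svm -volume SHARED_2_pvc_a15b6d85_a8c2_43aa_a2f7_eb8b3f1191ee
cluster1::> volume offline -vserver nfs_svm -volume SHARED_2_pvc_a15b6d85_a8c2_43aa_a2f7_eb8b3f1191ee
cluster1::> volume delete -vserver nfs_svm -volume SHARED_2_pvc_a15b6d85_a8c2_43aa_a2f7_eb8b3f1191ee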


That is it! I hope you enjoyed the walk-through and find it useful.

Limitations

Keep in mind that unmanaged PVCs can't be resized, and Trident-initiated clones or snapshots can't be created from them. If these operations are required, they must be performed on the K8s cluster that manages the PVC, as in the resize sketch below.
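
For example, to grow the shared volume, expand the PVC on the managing K8s cluster. Here is a minimal sketch, assuming "allowVolumeExpansion: true" in the storage class (as in the sample above) and that the second cluster is the manager after the managed import; the unmanaged cluster simply sees the larger NFS volume.

[root@rhel6 ~]# kubectl patch pvc alpine-pvc -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
[root@rhel6 ~]# kubectl get pvc alpine-pvc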

Alpine Pod and PVC manifests

Here are the simple Alpine Linux Pod and PVC manifests that I used in my examples above.


# cat alpine.yaml
kind: Pod
apiVersion: v1
metadata:
  name: alpine-clus1
  namespace: default
spec:
  volumes:
  - name: alpine-ontap-volume
    persistentVolumeClaim:
      claimName: alpine-pvc
  containers:
  - image: alpine:latest
    command:
      - /bin/sh
      - "-c"
      - "sleep 60m"
    imagePullPolicy: IfNotPresent
    name: alpine
    volumeMounts:
    - mountPath: "/nasvol"
      name: alpine-ontap-volume
  restartPolicy: Always

---

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: alpine-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nasvolsc

# cat alpine-import.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: alpine-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nasvolsc

