How time flies! The Trident team has had a busy summer cooking up some great content for you. And today we’re happy to present our latest release: 18.10. We bet you can’t wait to find out what’s inside, so let’s get right into it!

Volume Resizing in Kubernetes
As one of Trident’s most requested features, we’re particularly excited to announce this one: user-driven volume resizing is here!

Here are the steps a user will take when they want to resize one of their volumes:

  1. Edit the PersistentVolumeClaim (PVC) for their volume and increase the size.

Actually, there are no other steps. Trident handles everything behind the scenes after that. The volume will be resized, usually within a few seconds, even if it’s currently in use. If there are any resource limits or quotas in play, whether in Kubernetes, in Trident, or in the storage platform itself, those are honored too, just as they are when a volume is initially provisioned.

Of course, simplicity like this is deceptively hard to achieve. We had to make a series of changes to both Trident and Kubernetes itself to make this work. That means you’ll need Trident 18.10 and Kubernetes 1.12 for resizing to work out of the box.

Note: You can technically enable resizing with earlier versions of Kubernetes, but you will need to disable the PersistentVolumeClaimResize admission plugin to do it. That is an exercise we leave to you.

And for those of you using iSCSI backends, don’t worry, we’re working on it! Block device resizing takes a bit more coordination, and we’ll make the experience just as seamless.

Enabling Volume Resize
You in? Great! To allow your users to resize their volumes, you’ll need the following:

  • Trident 18.10 or later with one or more NFS backends
  • Kubernetes 1.12 or later
  • Trident-managed storage class with allowVolumeExpansion set to true

Only PVCs provisioned from a storage class with allowVolumeExpansion set to true will trigger volume expansion. For existing storage classes, you can modify this setting at any time with kubectl edit, and volumes that were provisioned before the change can still be expanded.

A new storage class definition with this setting enabled looks like this:

# cat sc_nas.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-class-nas
provisioner: netapp.io/trident
allowVolumeExpansion: true
parameters:
  backendType: "ontap-nas"
# kubectl create -f sc_nas.yaml
storageclass.storage.k8s.io/storage-class-nas created

And kubectl describe should look like this:

# kubectl describe sc storage-class-nas
Name:                  storage-class-nas
IsDefaultClass:        No
Annotations:           <none>
Parameters:            backendType=ontap-nas
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

To show you how this works, let’s create a PersistentVolumeClaim to provision a PersistentVolume that’s 1GB in size using the StorageClass created in the previous step.

# cat pvc_nas.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nas
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: storage-class-nas
# kubectl create -f pvc_nas.yaml
persistentvolumeclaim/pvc-nas created

The following output shows that the PersistentVolumeClaim “pvc-nas” is bound and that a corresponding 1GB PersistentVolume has been created:

# kubectl get pvc
NAME      STATUS    VOLUME                  CAPACITY   ACCESS MODES   STORAGECLASS        AGE
pvc-nas   Bound     default-pvc-nas-5c3e2   1Gi        RWO            storage-class-nas   3m

Now attach the PersistentVolumeClaim to a pod such that the PersistentVolume is mounted when the pod is scheduled:

# cat pod-nas.yaml
kind: Pod
apiVersion: v1
metadata:
  name: pod-nas
spec:
  volumes:
    - name: pvc-nas
      persistentVolumeClaim:
        claimName: pvc-nas
  containers:
    - name: nas-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: pvc-nas
# kubectl create -f pod-nas.yaml
pod/pod-nas created

The “kubectl get pod” output shows that the pod has been created and is running:

# kubectl get pod
NAME      READY     STATUS    RESTARTS   AGE
pod-nas   1/1       Running   0          2m

Type “kubectl exec -it pod-nas bash” to drop into the pod and check the size of the mount. From the output, we can see that the 1GB volume has been bind mounted into the container:

# mount | grep /usr/share/nginx/html
on /usr/share/nginx/html type nfs4 (rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=,local_lock=none,addr=
# df -h | grep /usr/share/nginx/html
1.0G  192K  1.0G   1% /usr/share/nginx/html

And now, the moment you’ve been waiting for! To increase the volume size, issue “kubectl edit pvc pvc-nas” and update the storage request from 1Gi to 6Gi.

# kubectl edit pvc pvc-nas
storage: 1Gi <-------------- Change to 6Gi

Next, use the “kubectl get pvc” and “kubectl describe pvc” commands to verify that the PersistentVolume belonging to the PersistentVolumeClaim “pvc-nas” has been resized.

# kubectl get pvc
NAME      STATUS    VOLUME                  CAPACITY   ACCESS MODES   STORAGECLASS        AGE
pvc-nas   Bound     default-pvc-nas-f1a7a   6Gi        RWO            storage-class-nas   47m
# kubectl describe pvc pvc-nas
Name:          pvc-nas
Namespace:     default
StorageClass:  storage-class-nas
Status:        Bound
Volume:        default-pvc-nas-f1a7a
Labels:        <none>
Finalizers:    []
Capacity:      6Gi
Access Modes:  RWO
Conditions:
  Type       Status  LastProbeTime                     LastTransitionTime                Reason  Message
  ----       ------  -----------------                 ------------------                ------  -------
  Resizing   True    Mon, 01 Jan 0001 00:00:00 +0000   Thu, 20 Sep 2018 04:45:22 -0400
Events:
  Type     Reason        Age   From                 Message
  ----     ------        ----  ----                 -------
  Normal   VolumeResize  14m   Kubernetes frontend  successfully resized the volume.

Run “kubectl exec -it pod-nas bash” again to drop into the pod and check the size of the mount. From the output we can see that the size of the volume is now 6GB, and that change was made live, while the volume was mounted!

# mount | grep /usr/share/nginx/html
on /usr/share/nginx/html type nfs4 (rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=,local_lock=none,addr=
# df -h | grep /usr/share/nginx/html
6.0G  256K  6.0G   1% /usr/share/nginx/html


Volume Size and ONTAP Aggregate Usage Limits
We’re introducing two new parameters that give you more control over how Trident manages volume capacity. These limits work in conjunction with any that you set in Kubernetes or on the storage services themselves.

limitVolumeSize: Trident will refuse to provision or resize a volume if the requested size is over this limit, represented in bytes. This works with every backend type that Trident supports.

limitAggregateUsage: Trident will refuse to provision or resize a volume if doing so would move its aggregate beyond this limit, as a percentage of total aggregate physical space. Thin-provisioned volumes and clones are unlikely to hit this limit unless the other volumes that are already in the aggregate have consumed enough space to go over it. This works with every ONTAP backend type that Trident supports.
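As a sketch, here’s what a backend definition for the ontap-nas driver might look like with both limits set. The LIF addresses, SVM name, and credentials are placeholders, and the limit values are purely illustrative:

```json
{
    "version": 1,
    "storageDriverName": "ontap-nas",
    "managementLIF": "10.0.0.1",
    "dataLIF": "10.0.0.2",
    "svm": "svm_nfs",
    "username": "vsadmin",
    "password": "secret",
    "limitVolumeSize": "50Gi",
    "limitAggregateUsage": "80%"
}
```

With a backend like this, a request to provision or resize a volume beyond 50Gi is rejected, as is any request that would push the backing aggregate past 80% of its physical capacity.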

ONTAP Snapshot Reserve
Trident 18.10 brings with it two changes to how it handles ONTAP’s snapshot reserve: the amount of space, as a percentage of total volume size, that ONTAP sets aside for snapshots (5% by default).

The snapshot reserve is now set to 0 if snapshotPolicy is set to none, the default. After all, there’s no point in reserving space for snapshots if you’re not planning to take any. But if you plan to enable a snapshot policy later, you’ll want to reconsider your snapshot reserve.

We’ve also introduced a snapshotReserve configuration parameter, allowing you to explicitly define how much space ONTAP will reserve for snapshots in the volumes that Trident provisions.
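For instance, pairing a snapshot policy with an explicit 10% reserve might look like this in a backend definition. As before, the addresses, SVM name, and credentials are placeholders:

```json
{
    "version": 1,
    "storageDriverName": "ontap-nas",
    "managementLIF": "10.0.0.1",
    "dataLIF": "10.0.0.2",
    "svm": "svm_nfs",
    "username": "vsadmin",
    "password": "secret",
    "snapshotPolicy": "default",
    "snapshotReserve": "10"
}
```

Volumes provisioned from this backend get the specified snapshot policy along with 10% of their size reserved for snapshot data.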

Install Trident without Host Access
The Trident installer solves an interesting chicken and egg problem: how to provision a volume for its own metadata before it’s installed, when it is itself the volume provisioner. It does this by standing up a primordial version of itself that provisions a single volume that Trident can pull in when it starts.

To pull that off, tridentctl has to be able to communicate with two systems: Kubernetes, and the management plane for the storage service you want to use to provision the metadata volume against. Historically the best way to assure that tridentctl could reach both systems was to run it on one of the hosts in your Kubernetes cluster directly.

But you don’t always have direct access to the hosts in your Kubernetes cluster. Sometimes all you have is kubectl on a laptop somewhere, connecting to the Kubernetes API server remotely, and you’ve got network policies that prevent it from reaching the storage control plane APIs. What then?

Prior to 18.10, you would’ve been stuck. No longer! We’ve re-architected some things so that the entire tridentctl install process can now run with access to kubectl alone. Simply run it from any machine with kubectl configured, and it will do the rest from the pod network. Automatically.

Pretty cool, right?

Updated etcd to v3.3.9
etcd is a core component of Trident, providing a place to store the metadata vital to its operation. Trident 18.10 updates etcd to version 3.3.9, which fixes many bugs and improves stability. However, be aware that once etcd has been updated, it cannot be reverted to an earlier version.

Upgrade to 18.10!
As always, we encourage you to upgrade so you can take advantage of all these new features, enhancements, and improvements. I am pretty sure that the volume resize feature will be a big hit!
NetApp Insight 2018 is here. If you’re in Las Vegas with us, make sure you attend all of the Trident sessions, and stop by the Containers booth and say hello!
If you have any questions, comments, or requests, please reach out to us through our Slack team, or GitHub.

About Jacob Andathethu

A dynamic professional with over 13 years of experience in the data storage industry (NetApp and Dell EMC), currently working as a Technical Marketing Engineer for Open Ecosystem Products at NetApp (Docker, Docker Swarm, Kubernetes, OpenShift).
