One of the most exciting paradigm shifts coming out of cloud-native platforms like Kubernetes is the idea that users should be able to consume the resources they need when they need them without having to get permission from anyone else. This helps break down a large barrier that has traditionally existed between the consumers of enterprise IT and those operating it.

Dynamic volume provisioners like Trident enable this model, and a very common concern from operations teams looking to employ it is that a user may run out and consume all of the storage they have available, leaving nothing for anyone else. This is especially true in private clouds that do not employ methods like chargeback to encourage users to consume only what is absolutely needed by the business.

In this post we will focus on methods that exist today within Kubernetes to limit storage consumption in order to strike a natural balance that protects the infrastructure from runaway consumption while still unlocking the power of self-provisioning.

Kubernetes Resource Quotas

Resource quotas are not a new concept in Kubernetes. They have existed for a long time, though they are generally associated with CPU and RAM rather than storage. However, we can apply those concepts to storage too!

Storage resource quotas are confined to a particular namespace (known as a project in OpenShift). You can define them for the entire namespace across all storage classes or for particular storage classes, and use them to limit the number of persistent volume claims and/or the total capacity requested by those claims along either dimension.

Let’s look at some examples. For reference, this post was written using Kubernetes 1.6.4, but it will work with any relatively recent version of Kubernetes. I’m going to assume that you already have a working Kubernetes deployment with at least one StorageClass defined. I will also assume that you have created a namespace (e.g. kubectl create namespace thepub).

[andrew@k8s ~]# kubectl get sc
NAME      TYPE
value     netapp.io/trident
[andrew@k8s ~]# kubectl describe sc value
Name:           value
IsDefaultClass: No
Annotations:    <none>
Provisioner:    netapp.io/trident
Parameters:     backendType=ontap-nas
Events:         <none>

Let’s go ahead and define some capacity-based limits:

# define resource limits
cat << EOF > sc-resource-limit.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: sc-resource-limit
spec:
  hard:
    requests.storage: 10Gi
    value.storageclass.storage.k8s.io/requests.storage: 8Gi
EOF

# create the limit
kubectl create -f ./sc-resource-limit.yaml --namespace=thepub

You’ll notice that we have two limits defined:

  • requests.storage – the maximum capacity that the namespace can request across all PVCs, regardless of storage class
  • value.storageclass.storage.k8s.io/requests.storage – the maximum capacity that the namespace can request for the value storage class

We can also mix and match those capacity limits with limits based on a maximum number of PVCs:

# define pvc count limit
cat << EOF > pvc-count-limit.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pvc-count-limit
spec:
  hard:
    persistentvolumeclaims: "5"
    value.storageclass.storage.k8s.io/persistentvolumeclaims: "3"
EOF

# create the limit
kubectl create -f ./pvc-count-limit.yaml --namespace=thepub

Here we see similar nomenclature for defining the claim numbers:

  • persistentvolumeclaims – the maximum number of PVCs the namespace is allowed across all storage classes
  • value.storageclass.storage.k8s.io/persistentvolumeclaims – the maximum number of PVCs for the value storage class

With our quotas defined, let’s see what they look like from the Kubernetes CLI.
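Exact output varies a little between Kubernetes versions, but describing each quota shows its hard limits alongside current usage. Commands along these lines should work, using the quota names created above:

# show hard limits and current usage for each quota
kubectl describe quota sc-resource-limit --namespace=thepub
kubectl describe quota pvc-count-limit --namespace=thepub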



Testing the Limits

With everything defined, let’s see what happens when we try to exceed the quotas. First, let’s see what happens when we exceed the PVC count.

  • To begin, create four PVC definitions which look like the following. Be sure to change the name of each in the metadata. Notice that each of these is only 1Gi in size, so we are not going to exceed our capacity allocation using just a few of them.
    # define the PVC, there are four of these in all, the differences between each
    # are only the name
    cat << EOF > pvc-value-1.yaml
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: value-1
      annotations:
        volume.beta.kubernetes.io/storage-class: value
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    EOF

    Repeat the above, replacing the name, e.g. value-1, with something different each time.

  • Create three PVCs
    kubectl create -f pvc-value-1.yaml --namespace=thepub
    kubectl create -f pvc-value-2.yaml --namespace=thepub
    kubectl create -f pvc-value-3.yaml --namespace=thepub

    At this point we see the volumes have been created and bound.

    And, we can verify that we are still within our quota; a couple of commands to check this are sketched after this list.

  • Create the fourth PVC, which will exceed the count limit.
    kubectl create -f pvc-value-4.yaml --namespace=thepub

    Kubernetes returns an error stating that we have exceeded the quota. The PVC is never successfully created, so Trident will not even try to provision storage for it. It works exactly as we expect it to!

  • In preparation for the next test, let’s clean up our PVCs.
    kubectl delete -f pvc-value-1.yaml --namespace=thepub
    kubectl delete -f pvc-value-2.yaml --namespace=thepub
    kubectl delete -f pvc-value-3.yaml --namespace=thepub
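For reference, spot-checking the claims and the count quota (before the cleanup step above) can be done with commands along these lines:

# list the claims in the namespace
kubectl get pvc --namespace=thepub

# compare usage against the PVC count quota
kubectl describe quota pvc-count-limit --namespace=thepub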

We want, and expect, the same thing to happen from a capacity perspective, so let’s verify.

  • Just as before, create some PVCs. This time we will need two PVCs, each one 5Gi in size.
    cat << EOF > pvc-5Gi-1.yaml
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: 5gb-1
      annotations:
        volume.beta.kubernetes.io/storage-class: value
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
    EOF

    Repeat the above, changing the name from “5gb-1” to something different.

  • Create the first PVC and see how it affects the quota.
    kubectl create -f pvc-5Gi-1.yaml --namespace=thepub

    kubectl describe quota sc-resource-limit --namespace=thepub

    We can see that, as expected, the 5Gi value PVC which we created is consuming capacity in the quota at the namespace level (requests.storage) as well as the storage class level.

  • Now let’s try to create the second 5Gi volume, which will exceed the storage class quota.
    kubectl create -f pvc-5Gi-2.yaml --namespace=thepub

    The claim creation fails, as expected, which means that Trident will not try to provision a volume for it. Success in failure!

  • Now let’s try to create a PVC that doesn’t specify a storage class. Such a claim will bind to either a pre-existing PV or a default storage class other than value, which lets us consume the remainder of the namespace-wide capacity limit that isn’t associated with the value storage class limit.
    # create a pvc with no storage class
    cat << EOF > pvc-5Gi-no-sc.yaml
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: 5gb-no-sc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
    EOF
    
    # create another pvc which doesn't use a storageClass
    kubectl create -f pvc-5Gi-no-sc.yaml --namespace=thepub

    And, just as expected, it succeeded. We can verify our quota status once more with the commands sketched below.
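Exact output will vary, but describing the capacity quota should now show the namespace-wide requests.storage counter reflecting both 5Gi claims, while the value storage class counter still reflects only the first:

# check capacity usage after the no-storage-class claim
kubectl describe quota sc-resource-limit --namespace=thepub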

It’s important to note that different namespaces can (and will) have different limits, and those limits can be updated at any time.
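For example, limits for a given namespace can be inspected or adjusted with the usual kubectl commands; the quota and namespace names here are simply the ones used throughout this post:

# view the quotas defined in a namespace
kubectl get quota --namespace=thepub

# adjust an existing quota in place
kubectl edit quota sc-resource-limit --namespace=thepub

# or replace it with an updated definition from the YAML file
kubectl replace -f ./sc-resource-limit.yaml --namespace=thepub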

Control = Confidence

As you can see, when using Kubernetes as your container orchestrator, it’s quite easy to define resource quota policies which limit storage consumption. Together with Trident, you can craft reasonable limits that still empower your users to provision the storage they need when they need it!

If you have questions or are interested in more details, please reach out to us using the comments below or via our Slack channels.

Andrew Sullivan
Technical Marketing Engineer at NetApp
Andrew has worked in the information technology industry for over 10 years, with a rich history of database development, DevOps experience, and virtualization. He is currently focused on storage and virtualization automation, and driving simplicity into everyday workflows.
