Simplifying Trident Management with tridentctl

Trident 17.07 introduced a simplified management tool for interacting with Trident: tridentctl. This command line tool eliminates the previous method of using shell scripts inside the Trident containers, instead introducing a paradigm that mimics Kubernetes’ own kubectl. Let’s look at how to use tridentctl to interact with, and get information from, Trident.

Managing the Backend

Adding a backend still requires a configuration file; however, introducing that configuration to Trident has been simplified to a single command:
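A minimal sketch of that command, assuming Trident is deployed in a namespace named trident and the backend definition lives in a file named backend.json (both names are illustrative):

```shell
# Introduce a new backend to Trident from a configuration file.
# The file name and namespace below are illustrative examples.
tridentctl create backend -f backend.json -n trident
```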

Once the backend has been created, it’s easy to see the attributes Trident discovered about that storage. By changing the output of tridentctl get backend from the default text table to JSON or YAML, we can get the details of the backend’s capabilities.
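For example, something like the following, where the backend name is an illustrative placeholder:

```shell
# Default, human-readable table of all backends
tridentctl get backend

# Full detail for a single backend, rendered as JSON
# (the backend name here is illustrative)
tridentctl get backend ontapnas_10.0.2.30 -o json
```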

Let’s break this output up a little and inspect the various pieces. First, since this is an ONTAP backend, each aggregate assigned to the SVM is broken out with its individual attributes. For example, in the above output we see that the aggregate named thePub_aggr1 has these capabilities:

  • Media = hybrid
  • Allocation (a.k.a. provisioning type) = thin or thick
  • Snapshots = true

If any storage classes have been introduced to Kubernetes, we can also see which of them, if any, this aggregate satisfies. In the example, it meets the requirements of both the “performance” and “value” storage classes. Finally, we can also see any existing Trident-managed volumes which are on the aggregate.

Similarly, if we look at a SolidFire backend we can get details about any QoS profiles which were introduced as a part of the definition.
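For instance, using the SolidFire backend name that appears later in this post:

```shell
# Inspect a SolidFire backend, including the QoS profiles
# defined in its backend configuration
tridentctl get backend solidfire_10.0.2.40 -o json
```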

If desired, we can also delete the backend with a single tridentctl command:
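Something like this, again with an illustrative backend name:

```shell
# Remove a backend from Trident's management
# (the backend name is illustrative)
tridentctl delete backend ontapnas_10.0.2.30
```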

Viewing Which Backends Satisfy a Storage Class

Trident queries Kubernetes for information about storage classes, which it then uses to determine which storage backends can meet their requirements. Using tridentctl, we can easily see which storage backends, and which of their detected capabilities (e.g. a QoS profile or an aggregate), will be used for a particular storage class.
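For example, using the “performance” storage class discussed below:

```shell
# List the storage classes Trident has discovered from Kubernetes
tridentctl get storageclass

# Show which backends and pools satisfy the "performance" class
tridentctl get storageclass performance -o json
```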

In this instance, we can see that the “performance” storage class will be provisioned using one of:

  • Aggregate thePub_aggr1
  • Aggregate thePub_aggr2
  • SolidFire with a Silver QoS policy. If we wanted to see what “Silver” represents, we could use the command tridentctl get backend solidfire_10.0.2.40 -o json, as shown above.

Persistent Volume Details

We can see the details of provisioned persistent volumes using the tridentctl get volume family of commands. This includes which backend is hosting the volume, the export path for NFS volumes, the storage class used, and more.
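A quick sketch of those commands; the volume name below is an illustrative placeholder:

```shell
# Summary table of all Trident-managed volumes
tridentctl get volume

# Full detail for a single volume, including its backend,
# storage class, and (for NFS) export path
# (the volume name is illustrative)
tridentctl get volume default-pvc-ae40e3f5 -o json
```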

Getting Logs for Trident

Trident’s logs are, generally speaking, easy to find using kubectl. However, we first need to identify the name of the Trident pod before using the kubectl logs command:
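The manual two-step looks something like this. The pod name is illustrative, and the -c flag reflects an assumption that the Trident pod runs more than one container (as the 17.07 deployment does, with trident-main alongside etcd):

```shell
# Step 1: find the name of the Trident pod
POD=$(kubectl get pods | awk '/^trident-/ {print $1; exit}')

# Step 2: fetch the logs from the main Trident container
kubectl logs "$POD" -c trident-main
```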

Not exactly hard to figure out, but it’s a minor extra step which tridentctl can eliminate for us. This can now be reduced to simply tridentctl logs, which will automatically determine the pod and return the logs:
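That is, the whole lookup collapses to one command; the namespace flag is only needed if Trident isn’t running in your current namespace:

```shell
# Fetch Trident's logs without looking up the pod name first
# (the trident namespace is an illustrative assumption)
tridentctl logs -n trident
```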

tridentctl Simplifies Trident Management

Retrieving data about the backends, storage classes, and volumes in your environment is made easier using tridentctl, which delivers more information about the individual components in an easily digestible format. If you haven’t yet had a chance to update your Trident deployment to 17.07, it’s a very simple experience using the included administration helper script, update_trident.sh. Trident 17.07 brings a number of improvements, enhancements, and bug fixes; you can see the full details on the release page.

If you have any questions, please leave us a comment below or reach out using our Slack channels!

Andrew Sullivan
Technical Marketing Engineer at NetApp
Andrew has worked in the information technology industry for over 10 years, with a rich history of database development, DevOps experience, and virtualization. He is currently focused on storage and virtualization automation, and driving simplicity into everyday workflows.
