Trident 18.01 beta 1: Introducing volume cloning to Kubernetes!

Hot off the presses just in time for the holidays, we present to you the latest beta version of Trident, our storage orchestrator for Kubernetes-based platforms.

This release introduces several new features, but today we’re going to focus on an exciting one that we’ve been looking forward to bringing to you for a while: self-service volume cloning!

For the uninitiated, a clone is a complete, point-in-time writable copy of another volume. Storage administrators have been taking advantage of them for years, but they rarely make their way into the hands of end users. That’s about to change.

If you’re up to speed on the “state” of Kubernetes storage and the persistent volume framework, you know that cloning is not yet a native feature of the platform. So how can Trident expose it, and do so in a way that’s still natural for end users to consume?

Trident works a bit differently than traditional Kubernetes volume plugins. As a controller, it interacts directly with the Kubernetes API server, which gives it a much broader understanding of what is going on in the cluster and of what both the user and the administrator are trying to achieve.

We use that capability to expose and iterate on new persistence models. Our intent is to take what we learn together and contribute it upstream to Kubernetes itself so that everyone can take advantage of it.

In this case, we’ve simply extended the PersistentVolumeClaim object with a custom annotation named trident.netapp.io/cloneFromPVC that you use like this:
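Here is a minimal sketch of what that looks like on a PVC. The claim name, source PVC name, storage class, and requested size below are all illustrative, so substitute your own:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mydata-clone            # example name for the new clone PVC
  annotations:
    # Name of an existing PVC (in the same namespace) to clone from
    trident.netapp.io/cloneFromPVC: mydata
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi             # example size
  storageClassName: ontap-gold  # example Trident-managed storage class
```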

Normally, Trident provisions an empty volume when a PVC is created against a storage class that it manages. When this additional annotation is specified, it instead seeds the new volume with all of the data from the specified source PVC.
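From the user's point of view, nothing else changes: create the claim the same way you would any other PVC (for example, with kubectl create -f on a manifest like the one above), and once it binds, pods can mount it immediately, already populated with the source data.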

But what if you have a huge volume? Doesn’t this take a long time and consume a lot of extra space? Actually, no! The underlying storage platforms can deliver these “forked” volumes instantly and without consuming additional space, regardless of the volume’s size. How cool is that?

So now that you have this ability, what will you do with it? Maybe you’ll provision pre-built development workspaces for your team, completely bypassing the initial sync and build process. Or spin up multiple copies of your datasets to enable massively parallel testing without impacting the production workload. There are endless possibilities out there, and we’re excited to see what you come up with.

Try it out! We’d love to hear your feedback. As always, the best way to reach out to us is through the #containers channel on thePub’s Slack team. If you have any issues, do not pass Go and report them directly to GitHub.

Garrett Mueller
Technical Director at NetApp
Software architect with nearly twenty years of development experience across a wide variety of disciplines with a focus on storage. Led a number of DevOps-style initiatives. Founded NetApp's container team.
