Of course, the CSI is not really an orchestrator, making this integration a bit different from the others. Introduced just over a year ago as a collaboration between several orchestrators and the storage community at large, the CSI specification aims to define a standard interface between orchestrators and storage systems.
Theoretically, a single CSI driver like the one we’ve introduced will work with Kubernetes, Mesos, Cloud Foundry, and any other orchestrator that comes along and chooses to make use of it. Each of those orchestrators has varying levels of early support for CSI.
That’s great for storage vendors like us. Over time it should allow us to focus more of our energy on a single interface with broad applicability, rather than understanding the nuances of each orchestrator’s persistence layers. It’s also great for orchestrators that choose to implement it, because they should ultimately have ready support for a large set of storage systems.
Let’s dig just a little bit deeper so that you can better understand why we are doing this now, and what it means for those of you who are already using Trident’s native Kubernetes integration.
Why does Trident require a CSI integration?
The truth is that it doesn’t. Trident’s native orchestrator integrations are superior to CSI and will continue to be for the foreseeable future. Not just advanced features like cloning, but even basic supportability features, like the ability to surface errors, are not yet possible through CSI.
This is also the reason why we’re introducing CSI capability in an unsupported “alpha” state.
Then why introduce a CSI capability?
Open source communities like the one around CSI communicate best through code. Now that orchestrators are shipping early implementations of their side of the specification, Trident gives us a great platform to experiment with the integration between the orchestrator and plugins so that we can help evolve the model further.
It also allows us to potentially provide basic support for orchestrators that we do not natively support today, like Cloud Foundry.
What is the state of the CSI today?
CSI is in the very earliest stages of its journey. At the time of this writing, the most recently released version of the specification is 0.3, which adds a basic snapshot capability. Awesome!
Does that mean that you can use snapshots through the orchestrators now? No. Each orchestrator must now determine for itself if and how it is going to expose snapshots, then ultimately use its own CSI implementation to drive those requests. CSI exposes basic storage functionality; it does not dictate in any way what the orchestrators are going to do with it.
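To make that division of labor concrete, here is a minimal sketch of what the plugin side of a snapshot call looks like. The RPC and field names loosely follow the CSI v0.3 controller service, but the types and the in-memory `fakePlugin` are purely illustrative, not the real gRPC-generated API: the plugin answers `CreateSnapshot` requests, and everything above that (when to snapshot, how to expose it to users) is the orchestrator’s decision.

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified, illustrative mirrors of the CSI snapshot messages.
// The real spec defines these as protobuf messages over gRPC.
type CreateSnapshotRequest struct {
	SourceVolumeId string
	Name           string
}

type Snapshot struct {
	Id             string
	SourceVolumeId string
	ReadyToUse     bool
}

type CreateSnapshotResponse struct {
	Snapshot *Snapshot
}

// ControllerServer is the interface an orchestrator invokes on the plugin.
type ControllerServer interface {
	CreateSnapshot(req *CreateSnapshotRequest) (*CreateSnapshotResponse, error)
}

// fakePlugin is a toy in-memory stand-in for a real storage driver.
type fakePlugin struct {
	snapshots map[string]*Snapshot
}

func (p *fakePlugin) CreateSnapshot(req *CreateSnapshotRequest) (*CreateSnapshotResponse, error) {
	if req.SourceVolumeId == "" {
		return nil, errors.New("source volume ID is required")
	}
	snap := &Snapshot{
		Id:             "snap-" + req.Name,
		SourceVolumeId: req.SourceVolumeId,
		ReadyToUse:     true,
	}
	p.snapshots[snap.Id] = snap
	return &CreateSnapshotResponse{Snapshot: snap}, nil
}

func main() {
	plugin := &fakePlugin{snapshots: map[string]*Snapshot{}}
	resp, err := plugin.CreateSnapshot(&CreateSnapshotRequest{
		SourceVolumeId: "vol-1",
		Name:           "daily",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Snapshot.Id, resp.Snapshot.ReadyToUse)
}
```

Note that nothing in the sketch says *when* a snapshot gets taken or how a user asks for one; that policy lives entirely in the orchestrator, which is exactly why snapshot support will surface differently in each one.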
That means each orchestrator will have varying levels of support for the CSI specification, and the manner in which they expose CSI capabilities will differ from one to another.
Why is NetApp interested in the CSI?
Building and maintaining native integrations with different orchestrators is a real challenge, and that’s why there are so few of us doing it today.
As we help push more storage basics into the CSI itself and maintain them in a generic way, we will be able to spend more time on emergent storage and data management orchestration in the orchestrators.
For example, right now each orchestrator has its own NFS and iSCSI implementation. We’ll be bringing those drivers into CSI as a library that all the orchestrators will use, which means we won’t be spending time troubleshooting basic issues caused by differences between them, as we do today.
What does this mean for future development?
NetApp is committed to its native integrations, and they will continue to be our primary focus. We will continue to expose new and exciting data management capabilities through those integrations and, once they solidify, move the appropriate portions of those models into the CSI.
The promise of CSI is exciting, but it is still in the very early stages. We welcome you to experiment with us and provide feedback about your experience! Please reach out to us on GitHub or on Slack in the #containers channel.