Automate the configuration of ONTAP systems for OpenStack with Puppet

Puppet is a configuration management utility that operates on a server/agent model. Desired configuration states are defined in manifest files, and when agents or devices check in, any changes in those manifests are pushed to the end nodes. NetApp has published Puppet modules for ONTAP, SolidFire, and E-Series with PuppetLabs. The GitHub repository also includes the instructions for adding the NetApp SDK Ruby components. Once that is done, running puppet module install puppetlabs-netapp will add the modules to Puppet. Today we will focus on ONTAP, and on the manifest entries and modules that let you configure everything needed to use ONTAP as a backend for OpenStack Cinder and Manila.
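For reference, that install step is a single command on the Puppet server (assuming the SDK prerequisites from the repository instructions are already in place):

    puppet module install puppetlabs-netapp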

While I am including the module definitions for multiple setup options, I will focus on using an NFS backend for Cinder, Glance, and Nova, and on Manila without storage server management. These manifests will create the SVM, the LIF for NFS/SMB, the role and user for access, the export policy and rules, and finally the volumes for Cinder, Nova, and Manila. If you are going to use iSCSI, FCP, Cheesecake, or Manila with storage server management, uncomment the relevant sections and comment out the ones you don't use.

I am putting all of this into a single manifest I am calling openstack.pp; however, you may want to split it into two manifests based on your setup: one for your cluster-level configurations and one for your SVM-level configurations. In this manifest my cluster is called vsim and my SVM is called openstack_svm.

Full manifest

Now let’s break down the individual modules to explain what is happening here.  First is the netapp_vserver module:

netapp_vserver

This module creates an SVM called openstack_svm. The ensure is set to present, so the SVM will be created if it is missing or verified if it already exists. We are allowing the protocols NFS, CIFS, and iSCSI. We are naming the root volume of the SVM openstack_svm_root. We are using the language c.UTF-8 for the SVM, with a security style of unix. Finally, we are creating the root volume on aggregate aggr1_1 and adding aggr1_1 to the list of aggregates that this SVM can create volumes in.
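Reconstructed from that description, the resource looks roughly like this. Treat the parameter names as illustrative; they can vary between versions of puppetlabs-netapp, so check the module's type reference for your release:

    # Sketch: create the openstack_svm SVM (parameter names are illustrative)
    netapp_vserver { 'openstack_svm':
      ensure          => present,
      allowedprotos   => ['nfs', 'cifs', 'iscsi'],  # protocols the SVM may serve
      rootvol         => 'openstack_svm_root',      # name of the SVM root volume
      rootvolaggr     => 'aggr1_1',                 # aggregate hosting the root volume
      rootvolsecstyle => 'unix',                    # security style of the root volume
      language        => 'c.UTF-8',
      aggregatelist   => ['aggr1_1'],               # aggregates the SVM may place volumes in
    }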

The next module is the netapp_lif module.

netapp_lif

Here we are creating a new LIF called openstack_svm_lif. Again, we want this LIF to exist, so ensure is set to present. This LIF belongs to the SVM openstack_svm, and it will be serving a data role. We want to make sure the LIF is in an up state and that it allows the nfs and cifs protocols. Finally, we are giving it an address of 172.32.0.183 on node vsim-01 port e0c with a netmask of 255.255.255.0.
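As a sketch (parameter names are again illustrative and may differ in your module version):

    # Sketch: data LIF for NFS/CIFS on openstack_svm
    netapp_lif { 'openstack_svm_lif':
      ensure        => present,
      vserver       => 'openstack_svm',
      role          => 'data',
      statusadmin   => 'up',              # administrative state of the LIF
      dataprotocols => ['nfs', 'cifs'],
      address       => '172.32.0.183',
      netmask       => '255.255.255.0',
      homenode      => 'vsim-01',
      homeport      => 'e0c',
    }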

There are a lot of netapp_security_login_role entries, but we will just look at one.

netapp_security_login_role

The first part is more than just the name: it is the command directory, the role name, and the SVM the role is being assigned to. Since most of the Cinder and Manila role permissions must be at the cluster level, I just put them all there. In my example, vserver:cinder_cli:vsim creates or adds to a role called cinder_cli, granting permissions on the vserver command directory for the vsim SVM, which in my case is the cluster-level SVM. Again, here the ensure is set to present, and the access level is marked as readonly. If the access level is to be all, you can leave out the access_level line, as all is the default.
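Reconstructed as a sketch:

    # Sketch: readonly access to the 'vserver' command directory for role cinder_cli on SVM vsim
    # Title format: <command directory>:<role name>:<SVM>
    netapp_security_login_role { 'vserver:cinder_cli:vsim':
      ensure       => present,
      access_level => 'readonly',  # leave this line out to get the default of 'all'
    }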

Finally, at the cluster level, we create logins for these roles with the netapp_security_login module.

netapp_security_login

Again, the first line is more than just a module and name. This is the login type, authentication type, login name, and SVM to apply it to. So, this is an ONTAPI login that is authenticated by a password, with a username of cinder_cli on the SVM vsim. I assign this login the role of cinder_cli and give it a password of MyP@$$w0rd. The '\' is there because two $ characters next to each other form a special sequence, so I had to escape the second one.
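As a sketch (the role_name and password parameter names are assumptions; check the type reference):

    # Sketch: ONTAPI login, password-authenticated, for user cinder_cli on SVM vsim
    # Title format: <application>:<authentication method>:<user name>:<SVM>
    netapp_security_login { 'ontapi:password:cinder_cli:vsim':
      ensure    => present,
      role_name => 'cinder_cli',
      password  => "MyP@$\$w0rd",  # the backslash escapes the second '$', as noted above
    }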

That is the end of the cluster-level settings, and now we move on to creating objects in the openstack_svm SVM, starting with enabling NFS on the SVM.

netapp_nfs

This is straightforward: we are making sure that NFS is enabled and present on the SVM openstack_svm.
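Sketched (the on/off parameter name is an assumption; confirm it against your module version):

    # Sketch: ensure the NFS server exists and is switched on for openstack_svm
    netapp_nfs { 'openstack_svm':
      ensure => present,
      state  => 'on',  # assumed parameter for the enabled/disabled state
    }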

Now we create an export policy using netapp_export_policy.

netapp_export_policy

This is a very simple one: a policy called exp_openstack is created, or verified to be there, because the ensure is set to present. Since we now have an export-policy, we add a rule to it with netapp_export_rule.
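In manifest form:

    # Create (or verify) the export policy exp_openstack
    netapp_export_policy { 'exp_openstack':
      ensure => present,
    }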

netapp_export_rule

We start with the name of the export-policy we are adding this rule to and a rule ID number. This rule needs to exist, so ensure is set to present. The client or client range we want this rule to apply to comes next. If you are specifying individual IPs, as in this example, you will need to declare the resource once for each IP and increment the ID number each time. Finally, we set the rules for read-only, read-write, and root access.
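A sketch (the client IP here is hypothetical, and the parameter names are illustrative):

    # Sketch: rule 1 in export policy exp_openstack; title format is <policy>:<rule ID>
    netapp_export_rule { 'exp_openstack:1':
      ensure            => present,
      clientmatch       => '172.32.0.184',  # hypothetical client IP; add :2, :3, ... for more IPs
      rorule            => ['sys'],         # read-only access
      rwrule            => ['sys'],         # read-write access
      superusersecurity => ['sys'],         # root access
    }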

Now that we have a working export-policy, we can create our volumes using netapp_volume.

netapp_volume

This will create a volume called cinder_vol on aggregate aggr1_1 with a size of 100 GB. The volume will be online and use the export-policy exp_openstack. The space reserve will be set to none, and a snapshot reservation of 0 will also be set.
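As a sketch (parameter names once more illustrative):

    # Sketch: 100 GB volume for Cinder on aggregate aggr1_1
    netapp_volume { 'cinder_vol':
      ensure       => present,
      aggregate    => 'aggr1_1',
      initsize     => '100g',           # initial volume size
      state        => 'online',
      exportpolicy => 'exp_openstack',
      spaceres     => 'none',           # space reservation
      snapreserve  => 0,                # snapshot reserve percentage
    }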

I hope these examples help you deploy an OpenStack environment on your ONTAP systems faster. Find links and docs for all our configuration management information at thePub, and be sure to join us on Slack to talk with me and the other barkeeps about configuration management and a vast array of other topics.

David Blackwell
Technical Marketing Engineer for OpenStack at NetApp
David is a twenty-year IT veteran who has been an admin for just about every aspect of a data center at one time or another. Currently he is the Technical Marketing Engineer for OpenStack at NetApp. When not working, or tinkering with new software at home, David spends most of his free time on his hobby of 3D printing.
