Ansible makes the management of NetApp systems easy. Sometimes, though, groups don’t have the time or inclination to create their own playbooks. Ansible has a solution that eases this problem: roles. An Ansible role is a collection of tasks, templates, and variables that can be hardcoded or passed in by the end user. This series will cover Ansible roles that have been created to make the management of ONTAP easier, starting with cluster configuration.
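If you haven’t looked inside a role before: a role is just a directory tree with conventionally named subfolders that Ansible knows how to read. The exact contents vary from role to role, but a typical layout (shown here with the na_ontap_cluster_config role we use later as the example name) looks roughly like this:

na_ontap_cluster_config/
  defaults/main.yml   # default variable values the end user can override
  tasks/main.yml      # the tasks the role runs, in order
  templates/          # any Jinja2 templates the tasks render
  meta/main.yml       # role metadata and dependencies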
With the release of Ansible 2.8, changes to some modules allow for optimized role creation. While Ansible 2.8 doesn’t release until May 16th, all the NetApp modules for 2.8 have already been approved and merged into the Ansible GitHub repository. You can read part two of the five-part getting started series for instructions on installing from that source <here>.
This role gives you the ability to do the following:
- Apply license codes
- Assign disk ownership
- Create aggregates
- Set up SNMP
- Set up DNS
- Set up an MOTD login message
- Set up NTP
- Modify port MTU
- Create interface groups
- Modify broadcast domains
- Create VLANs
- Create intercluster LIFs for SnapMirror
NetApp keeps all the roles we produce in our GitHub space in the repository https://www.github.com/netapp/ansible. Working with the NetApp roles is easy. To prepare your Ansible system for the roles, first make sure you have an /etc/ansible directory.
# mkdir /etc/ansible
Next, using git, download the roles to a content folder in the /etc/ansible directory.
# git clone https://github.com/netapp/ansible.git /etc/ansible/content
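To confirm the clone worked, list the directory; each role lives in its own folder, and you should see na_ontap_cluster_config among them:

# ls /etc/ansible/content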
This will create a content folder in /etc/ansible and load the roles there. When new roles are released, or existing roles are updated, getting the newest versions is also easy.
# cd /etc/ansible/content
# git pull
This will update the roles to current versions.
There is still one last step to make using these roles in playbooks easy. You will create or modify a configuration file for Ansible in /etc/ansible. That file tells Ansible where these roles live so that you can use their short names in playbooks.
# vi /etc/ansible/ansible.cfg
Add the following two lines to a new file, or just the second line to an existing file under the [defaults] section. The third line is optional but will clean up your output. Ansible 2.8, releasing May 16th, changes how some features work. We are updating the roles to match these new features, but until that work is done a deprecation warning will be produced. The third line silences that warning.
[defaults]
roles_path = /etc/ansible/content/
deprecation_warnings = False
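If you want to verify that Ansible picked the file up, the ansible-config utility (available since Ansible 2.4) can help; with --only-changed it prints just the settings that differ from the built-in defaults:

# ansible-config dump --only-changed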
Now that setup is finished, using this role to configure an ONTAP cluster is easy. The first thing you will need is a playbook to call the role.
---
- hosts: localhost
  gather_facts: no
  vars:
    input: &input
      hostname: "{{ netapp_hostname }}"
      username: "{{ netapp_username }}"
      password: "{{ netapp_password }}"
    file: globals.yml
  vars_files:
    - "{{ file }}"
  tasks:
    - name: Get Ontapi version
      na_ontap_gather_facts:
        state: info
        <<: *input
        https: true
        ontapi: 32
        validate_certs: false
    - import_role:
        name: na_ontap_cluster_config
      vars:
        <<: *input
That playbook reads a file in the same directory called globals.yml by default, or you can pass an extra-vars value for file to specify, on a per-run basis, which file to use. The playbook also uses the na_ontap_gather_facts module to look up the cluster’s ONTAP version so that every task calls the API at that version for the most efficiency. The tasks in this role depend on the information from na_ontap_gather_facts, so it cannot be excluded.
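For example, if you saved the playbook above as cluster_config.yml (a filename chosen purely for illustration), the default run would read globals.yml, and a per-cluster run would override the file variable on the command line (cluster2.yml is likewise a hypothetical name):

# ansible-playbook cluster_config.yml
# ansible-playbook cluster_config.yml --extra-vars "file=cluster2.yml"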
All the configuration settings you want to apply are read from the file you pass in. Here is the actual file I use to set up my private lab, with some minor redactions:
cluster: lab-sim
netapp_hostname: 172.x.x.x
netapp_username: admin
netapp_password: netapp123
license_codes: <removed>
disks:
  - lab-sim-01
  - lab-sim-02
motd: "Only authorized personnel may use this system."
dns:
  - { dns_domains: lab.local, dns_nameservers: 1.1.1.1 }
ntp:
  - { server_name: time.nist.gov, version: auto }
snmp:
  - { community_name: public, access_control: ro }
aggrs:
  - { name: aggr1, node: lab-sim-01, disk_count: 26, max_raid: 26 }
  - { name: aggr2, node: lab-sim-02, disk_count: 26, max_raid: 26 }
ports:
  - { node: lab-sim-01, port: e0e, mtu: 9000 }
  - { node: lab-sim-01, port: e0f, mtu: 9000 }
  - { node: lab-sim-02, port: e0e, mtu: 9000 }
  - { node: lab-sim-02, port: e0f, mtu: 9000 }
inters:
  - { name: intercluster_1, address: 172.x.x.x, netmask: 255.255.255.0, node: lab-sim-01, port: e0c }
  - { name: intercluster_2, address: 172.x.x.x, netmask: 255.255.255.0, node: lab-sim-02, port: e0c }
Here, though, is an example that shows everything that can be set and the variables to supply:
license_codes: AAAAAAAAAAA,AAAAAAAAAAA,AAAAAAAAAAA,AAAAAAAAAAA,AAAAAAAAAAA,AAAAAAAAAAA,AAAAAAAAAAA,AAAAAAAAAAA,AAAAAAAAAAA,AAAAAAAAAAA,AAAAAAAAAAA,AAAAAAAAAAA,AAAAAAAAAAAAAA
disks: # Currently the disks entry assigns all visible disks to a node. If you want to split disks between nodes, that has to be done manually (see the note after this example).
  - cluster-01
  - cluster-02
motd: "The login in message you would like displayed when someone ssh's into the system" dns: # Set DNS for Cluster - { dns_domains: ansible.local, dns_nameservers: 1.1.1.1 }
ntp: # Set the NTP server (requires the cluster to have DNS set)
  - { server_name: time.nist.gov, version: auto }
snmp: # Set up or modify an SNMP community
  - { community_name: public, access_control: ro }
aggrs: # Create one or more aggregates
  - { name: aggr1, node: cluster-01, disk_count: 26, max_raid: 26 }
  - { name: aggr2, node: cluster-02, disk_count: 26, max_raid: 26 }
ports: # Set MTU for ports. Each entry also accepts 'autonegotiate' and 'flowcontrol' variables, which default to true and none respectively but can be overridden by your playbook (see the sketch after this example).
  - { node: cluster-01, port: e0c, mtu: 9000 }
ifgrps: # Create and add ports to interface groups
  - { name: a0a, node: cluster-01, port: "e0a", mode: multimode }
  - { name: a0a, node: cluster-02, port: "e0a", mode: multimode }
  - { name: a0a, node: cluster-01, port: "e0b", mode: multimode }
  - { name: a0a, node: cluster-02, port: "e0b", mode: multimode }
vlans: # Create VLANs on ports or interface groups
  - { id: 201, node: cluster-01, parent: a0a }
bcasts: # Create broadcast domains
  - { name: Backup, mtu: 9000, ipspace: default, ports: 'cluster-01:e0c,cluster-02:e0c' }
inters: # Create intercluster LIFs for SnapMirror
  - { name: intercluster_1, address: 172.32.0.187, netmask: 255.255.255.0, node: cluster-01, port: e0c }
  - { name: intercluster_2, address: 172.32.0.188, netmask: 255.255.255.0, node: cluster-02, port: e0c }
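As noted above, the disks entry assigns every unowned disk a node can see; the role does not split ownership for you. If you do need a split, one way is to assign disks from the ONTAP CLI before running the role. A sketch, where the disk names are placeholders for your environment:

cluster::> storage disk assign -disk 1.0.0 -owner cluster-01
cluster::> storage disk assign -disk 1.0.12 -owner cluster-02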
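And here is what overriding the optional port variables mentioned above might look like in your variable file; treat the exact values as an illustration of the syntax, not a recommendation:

ports:
  - { node: cluster-01, port: e0c, mtu: 9000, autonegotiate: false, flowcontrol: full }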
You can have a single file to read in per cluster, or you can keep the settings that rarely change, like MOTD, DNS, NTP, and SNMP, in one defaults file and read both it and the per-cluster file simply by listing both in the vars_files section, as sketched below.
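A minimal sketch of that vars_files approach, assuming a shared file named common.yml (a name chosen for illustration) holding the settings every cluster has in common:

  vars_files:
    - common.yml      # shared settings: motd, dns, ntp, snmp
    - "{{ file }}"    # per-cluster settings: disks, aggrs, ports, inters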
Roles for creating vservers, NAS and SAN volumes (shares, exports, LUNs), and a role for creating SnapMirror relationships, including verifying peer settings, are coming, so check back soon.
As always, if you aren’t already on our Slack workspace, you should join at https://netapp.io/slack. Join me and the other Ansible developers in the #configurationmgmt channel. Also be sure to check out the other sections on www.netapp.io for information on all our open ecosystem projects, including OpenStack and containers.