Fully automated Day-0 setup is a goal of the “next-gen” datacenter. This is 99.9% possible with ONTAP and Ansible. The only console work that needs to be done is to assign a password to the ‘admin’ account and to make note of the DHCP address assigned to the nodes, as well as the cluster IP addresses that nodes 2-x have generated. Let me show you what I mean.
I am using:
ONTAP 9.7
Ansible 2.9.7
NetApp ONTAP collection (netapp.ontap) 20.4.1
Here on Node1 I take note of the DHCP address it has been given, in this case "172.32.0.128". I then set the admin password. To do this, log in as admin; no password is needed at this point. Then run the following command and respond to the prompts:
::> security login password -username admin
Enter your current password: <here just leave blank and hit enter>
Enter a new password: <enter in the password you want to use for admin>
Enter it again: <re-enter the same password>
That’s it for Node1. For Nodes 2-X there is a single extra step.
Using the ‘net int show’ command, take note of one of the cluster IP addresses that the node has generated. In this case it is 169.254.31.132.
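The auto-generated cluster LIFs show up under the Cluster vserver in that output. The snippet below is only an illustration of what to look for; the LIF names, node name, and ports will differ on your system, and the 169.254.x.x address is the value to record.

::> net int show
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
            <node>_clus1 up/up    169.254.31.132/16  <node>        e0a     true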
That is all that has to be done physically at any system. The rest can all be done via Ansible. Here is a playbook that will:
- Create the cluster and join a second node
- Create the cluster management LIF
- Add SSH to the ‘admin’ user's rights
- Remove the DHCP-assigned management LIFs
---
- hosts: localhost
  gather_facts: false
  collections:
    - netapp.ontap
  vars_files: "{{ file }}"
  vars:
    login: &login
      username: "{{ username }}"
      password: "{{ password }}"
      https: true
      validate_certs: false
  name: "Build cluster: {{ cluster }}"
  tasks:
  - name: create cluster
    na_ontap_cluster:
      state: present
      cluster_name: "{{ cluster }}"
      hostname: "{{ node1.dhcp_ip }}"
      <<: *login
  - name: "Join Node to {{ cluster }}"
    na_ontap_cluster:
      state: present
      cluster_ip_address: "{{ cluster_intra }}"
      #cluster_name: "{{ cluster }}"
      hostname: "{{ node1.dhcp_ip }}"
      <<: *login
  - name: Create cluster mgmt lif
    na_ontap_interface:
      state: present
      interface_name: "{{ cluster }}_mgmt"
      vserver: "{{ cluster }}"
      address: "{{ cluster_mgmt }}"
      netmask: 255.255.255.0
      role: cluster-mgmt
      home_node: "{{ cluster }}-01"
      home_port: e0c
      hostname: "{{ node1.dhcp_ip }}"
      <<: *login
  - name: Create User
    na_ontap_user:
      state: present
      name: admin
      applications: ssh,console,http,ontapi,service-processor
      authentication_method: password
      role_name: admin
      vserver: "{{ cluster }}"
      hostname: "{{ cluster_mgmt }}"
      <<: *login
  - name: remove auto mgmt lif
    na_ontap_interface:
      state: absent
      interface_name: "{{ cluster }}-01_mgmt_auto"
      vserver: "{{ cluster }}"
      hostname: "{{ cluster_mgmt }}"
      <<: *login
The variables used here live in a file named after the cluster, in keeping with Infrastructure as Code practices.
ansible_lab.yml
username: admin
password: netapp123
cluster: ansible_lab
cluster_mgmt: 172.32.0.151
subnet: 255.255.255.0
gateway: 172.32.0.1
node1:
  dhcp_ip: 172.32.0.128
node2:
  dhcp_ip: 172.32.0.129
cluster_intra: 169.254.31.132
Since the playbook takes its vars_files path from a variable called ‘file’, I can run it against my configuration with the following command.
ansible-playbook cluster_build.yml -e file=ansible_lab.yml
At this point I can use the other NetApp ONTAP modules to add my licenses, assign disk ownership, create aggregates, and perform all the other steps I need to bring my cluster fully up for user presentations, and it all takes only a couple of minutes.
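For example, a few follow-up tasks appended to the same playbook (so the &login anchor still applies) might look like the sketch below. The license_codes variable, node names, and disk count are placeholders for illustration, not values from my lab:

  - name: Add licenses
    na_ontap_license:
      state: present
      license_codes: "{{ license_codes }}"   # hypothetical list of license keys in the vars file
      hostname: "{{ cluster_mgmt }}"
      <<: *login
  - name: Assign disk ownership to node 1
    na_ontap_disks:
      node: "{{ cluster }}-01"
      hostname: "{{ cluster_mgmt }}"
      <<: *login
  - name: Create a data aggregate on node 1
    na_ontap_aggregate:
      state: present
      name: "{{ cluster }}_01_aggr1"
      nodes: "{{ cluster }}-01"
      disk_count: 24                         # placeholder; size this for your shelves
      hostname: "{{ cluster_mgmt }}"
      <<: *login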
While this example shows only a two-node system, a cluster with as many nodes as necessary can be built: turn the "node2" section into a "nodes" list, build out its dictionary entries, and loop over that list in the join task.
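As a sketch of that change, the vars file could carry a ‘nodes’ list and the join task could loop over it. The second cluster_ip entry below is hypothetical; in practice each value is the cluster IP noted at that node's console:

# in the vars file, in place of the node2 section
nodes:
  - cluster_ip: 169.254.31.132      # node 2, from its console
  - cluster_ip: 169.254.x.x         # node 3 and beyond, noted the same way

# in the playbook, in place of the single join task
  - name: "Join nodes to {{ cluster }}"
    na_ontap_cluster:
      state: present
      cluster_ip_address: "{{ item.cluster_ip }}"
      hostname: "{{ node1.dhcp_ip }}"
      <<: *login
    loop: "{{ nodes }}"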
I hope this really accelerates your Day 0 process and shows how easy it is to get ONTAP up and working in your environment.
If you have any questions or comments on this or anything else NetApp is doing with Ansible, you can join us on our Slack workspace in the #configurationmgmt channel. If you aren’t on our Slack yet, get an invite at netapp.io/slack.
Thanks for this post. I’ve got this working in a lab with a FAS system. Hopefully testing it out soon for customers.
I really enjoy posts like this. Please keep them coming!
I used this in my lab and it was working, all the way up to ONTAP 9.7P5. However, I noticed a change with ONTAP 9.7P6. In 9.7P5 the following node_mgmt LIFs were created when the cluster was created and the second node was joined:
fas-cluster1::> net int show
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
fas-cluster1
            cluster_mgmt up/up    10.128.58.113/24   fas-cluster1-01
                                                                    e0M     true
            fas-cluster1-01_mgmt1
                         up/up    10.128.58.114/24   fas-cluster1-01
                                                                    e0M     true
            fas-cluster1-02_mgmt1
                         up/up    10.128.58.115/24   fas-cluster1-02
                                                                    e0M     true
However, in ONTAP 9.7P6 the following LIFs are created:
fas-cluster1::> net int show
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
fas-cluster1
            cluster_mgmt up/up    10.128.58.113/24   fas-cluster1-01
                                                                    e0M     true
            fas-cluster1-01_mgmt1
                         up/up    10.128.58.114/24   fas-cluster1-01
                                                                    e0M     true
            fas-cluster1-01_mgmt_auto
                         up/up    169.254.93.144/16  fas-cluster1-01
                                                                    e0M     true
            fas-cluster1-02_mgmt_auto
                         up/up    10.128.58.115/24   fas-cluster1-02
                                                                    e0M     true
The “_auto” is added to the real node management LIF of the second node, so deleting the auto LIFs also removes the node management LIF that we want to keep. So frustrating.