There is a lot of buzz around DevOps these days, and it often leads to confusion. I hear people asking, “What is DevOps?”, “How do I do DevOps?”, “What do I need for DevOps?” Everyone needs to move faster these days; AWS puts out over 10,000 changes a DAY! A DevOps mentality and process are how that pace is achieved, and they require as much documentation and automation as possible. Apart from kicking off a process, every place a human is needed to keep it going is a break in DevOps.

Configuration management is a key component of this kind of automation, and Ansible is leading the market on this front. NetApp not only has Ansible modules; those modules also carry Red Hat certified support. I have written many blog posts showcasing these modules and some of the things you can do with them and NetApp solutions. In this blog I am going to take that to the next level: I will show how to automate the standup, or refresh, of a developer’s workspace, using a zero-size copy of a production database, all from running one command: “ansible-playbook workspace.yml”.

Since this is a multiphase workflow (VM creation, storage clone, host prep, and Docker install and startup), I will create a simple group of Ansible roles to break out this work.

An Ansible role, at its simplest, is a collection of one or more task lists plus a set of default variables for those tasks. The basic layout of a role looks like this:

role_directory
|_ tasks
|  |_ main.yml
|_ defaults
   |_ main.yml

The main.yml in the tasks directory holds all the tasks that the role will run. The main.yml in the defaults directory holds a list of variables, with or without values assigned. Variables defined here can be overridden by the playbook that calls the role.
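For example, here is a minimal sketch of a playbook that calls a role and overrides one of its defaults (the user value here is purely illustrative):

---
- hosts: localhost
  roles:
  - role: vmware
    vars:
      user: alice   # overrides the role's default of "default"

If you want the full role directory skeleton generated for you, ansible-galaxy init <role_name> will create it.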

The first step in my role to deploy a developer’s workspace is to create the VM that will be used and obtain its IP address. I am using vCenter to manage my virtual machines, so I will use the Ansible module `vmware_guest`.

The first thing I want to do is delete the VM if it already exists, so that I know it will be redeployed from the most current version of the template and of the production data clone.

- name: Clean old VM if present
  vmware_guest:
    validate_certs: false
    hostname: "{{ vcenter }}"
    username: "{{ vm_user }}"
    password: "{{ vm_password }}"
    name: "{{ user }}_station"
    state: absent
    force: true

Assuming user: default, this task looks for a VM named default_station and deletes it if it’s there; force: true ensures the VM is removed even if it is powered on.

There are also variables that will need to be passed to this playbook, so entries for them need to be made in the defaults/main.yml file.

vcenter:
vm_user:
vm_password:
user: default

With the defaults set like this, the first three variables must be supplied, but if user is not set, default will be used. You could also hard-code the other values, depending on your environment and security policies.
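As a sketch, a hard-coded vmware/defaults/main.yml might look like this (the vCenter host name is hypothetical, and the password comes from an Ansible Vault variable rather than being stored in plain text):

# vmware/defaults/main.yml - illustrative values only
vcenter: vcenter.example.com             # hypothetical vCenter host
vm_user: administrator@vsphere.local
vm_password: "{{ vault_vm_password }}"   # defined in a vaulted vars file
user: default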

Now that I know no VM with my desired name exists, I will run a task to create a fresh one.

- name: Clone VM template for new workspace
  vmware_guest:
    validate_certs: false
    hostname: "{{ vcenter }}"
    username: "{{ vm_user }}"
    password: "{{ vm_password }}"
    datacenter: Datacenter
    folder: /vm
    name: "{{ user }}_station"
    template: linux_template
    state: present
    wait_for_ip_address: true

My VM template, called `linux_template`, uses DHCP. I have set `wait_for_ip_address` to true so that this task will not complete until an IP address has been reported to vCenter. Once that IP address is available, I will need it for the rest of the workflow, so I run the fact-gathering module for vCenter VMs, `vmware_guest_facts`.

- name: Collect IP address of new VM
  vmware_guest_facts:
    hostname: "{{ vcenter }}"
    username: "{{ vm_user }}"
    password: "{{ vm_password }}"
    datacenter: Datacenter
    validate_certs: no
    name: "{{ user }}_station"
  register: vm_facts

That last line registers the collected information in a variable called ‘vm_facts’ for me to use in later tasks. The first thing I will do with it is create a temporary, in-memory inventory entry for this new system so that Ansible can connect to it in the steps that follow.

- name: Add host to inventory
  add_host:
    hostname: "{{ vm_facts.instance.ipv4 }}"
    groups: vms

This creates a temporary group called ‘vms’ (provided you don’t already have a group by that name) and adds the VM’s IP address to it as an inventory entry.
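If you want to sanity check what was collected before moving on, a quick optional debug task (my addition here, not required for the workflow) will print the address:

- name: Show collected IP address
  debug:
    var: vm_facts.instance.ipv4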

That is all I need to clean up and create a new VM on demand. Now I will prep the NetApp ONTAP resources for this workflow. The first thing I want to do is clean up any clones or policies that could exist from a previous run. These are the first three tasks in my /netapp/tasks/main.yml file:

- name: Delete volume
  na_ontap_volume:
    state: absent
    name: "{{ user }}_clone"
    vserver: "{{ vserver }}"
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    https: true
    validate_certs: false
- name: Delete Snapshot
  na_ontap_snapshot:
    state: absent
    snapshot: "clone_{{ user }}_clone.0"
    volume: "{{ master }}"
    vserver: "{{ vserver }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    https: true
    validate_certs: false
    hostname: "{{ netapp_hostname }}"
- name: Delete Policy
  na_ontap_export_policy:
    state: absent
    name: "{{ user }}_clone"
    vserver: "{{ vserver }}"
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    https: true
    validate_certs: false

This deletes any volume named {{ user }}_clone, since I want to be sure I am using the most current copy of the production data. I also don’t want to leave behind, in the parent volume, the snapshot that backed the previous clone, so I delete that as well. Finally, I don’t want to leave any rules around and risk access crossover, so I delete the export policy that was made for this volume. These tasks use six variables, so I need to add them to my /netapp/defaults/main.yml file:

user: default
vserver:
master:
netapp_hostname:
netapp_username:
netapp_password:

Now that I know my environment is clean of the old settings, I can create my new clone, export it, and make sure it’s properly mounted at a junction path. This is done with four tasks:

- name: create volume clone
  na_ontap_volume_clone:
    state: present
    vserver: "{{ vserver }}"
    parent_volume: "{{ master }}"
    volume: "{{ user }}_clone"
    space_reserve: none
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    https: true
    validate_certs: false
- name: Create Policy
  na_ontap_export_policy:
    state: present
    name: "{{ user }}_clone"
    vserver: "{{ vserver }}"
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    https: true
    validate_certs: false
- name: Setup rules
  na_ontap_export_policy_rule:
    state: present
    policy_name: "{{ user }}_clone"
    vserver: "{{ vserver }}"
    client_match: "{{ vm_facts.instance.ipv4 }}"
    ro_rule: sys
    rw_rule: sys
    super_user_security: sys
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    https: true
    validate_certs: false
- name: Create volume
  na_ontap_volume:
    state: present
    name: "{{ user }}_clone"
    policy: "{{ user }}_clone"
    junction_path: "/{{ user }}_clone"
    vserver: "{{ vserver }}"
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    https: true
    validate_certs: false

Now that I have created a new VM and my volume clone, I am ready to do some host preparation on that VM. These steps use Ansible’s SSH connection. My VM template already has the public key for my Ansible host, so passwords aren’t an issue, but the host key will have changed if this is a re-created VM. To bypass host key checking, I either edit my global ansible.cfg file or create an ansible.cfg file in the directory I will run this workflow from, and add these lines:

[defaults]
host_key_checking = False
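Alternatively, you can disable the check for a single run with an environment variable instead of editing ansible.cfg:

export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook workspace.yml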

To make sure my VM is prepared for the things I will need it to have and do, I add these tasks to my /linux/tasks/main.yml file:

---
- name: Install required packages
  package:
    name: "{{ item }}"
    state: latest
  loop:
  - epel-release
  - nfs-utils
  - lvm2
- name: Mount cloned volume
  mount:
    state: mounted
    path: /PV
    src: "{{ mount }}:/{{ user }}_clone"
    fstype: nfs

This not only ensures that the packages I need for Docker and the NFS mount are installed, but also mounts that export and adds it to /etc/fstab.
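For reference, using this post’s example values (mount=172.32.0.183 and user=default), the /etc/fstab entry the mount module writes would look roughly like this:

172.32.0.183:/default_clone /PV nfs defaults 0 0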

This role needs two variables added to linux/defaults/main.yml:

mount:
user: default

The mount variable holds the IP address of the LIF on the vserver that serves the exported volume.
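If you are not sure which address that is, you can list the LIFs from the ONTAP CLI (assuming you have CLI access; the vserver name below matches this post’s defaults):

network interface show -vserver nfs_vserver -fields address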

Now I am ready for the Docker setup part, so in my docker/tasks/main.yml I have this:

---
- name: Add Docker repository
  get_url:
    url: https://download.docker.com/linux/centos/docker-ce.repo
    dest: /etc/yum.repos.d/docker-ce.repo
    mode: '0644'

- name: Install docker-ce and docker-compose
  package:
    name: "{{ item }}"
    state: latest
  loop:
  - docker-ce
  - docker-compose

- name: Start Docker
  service:
    name: docker
    state: started
    enabled: yes

- name: Add Docker compose file
  template:
    src: docker-compose.j2
    dest: "/root/docker-compose.yml"

- name: Start Container
  command: docker-compose up -d
  args:
    chdir: /root/

- name: Wait for container to start
  pause:
    seconds: 5

This installs the repository for the Docker yum packages, then installs Docker Community Edition (docker-ce) and docker-compose. I also copy over a templated docker-compose file, start it, and wait five seconds for the image to download and run before finishing.
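Once the play completes, you can verify the container came up by logging in to the VM and checking manually (not part of the role):

docker ps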

The template task highlights another feature of roles: in addition to its tasks directory, the Docker role also has a templates directory containing docker/templates/docker-compose.j2.
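Using the same layout convention as before, the Docker role now looks like this:

docker
|_ tasks
|  |_ main.yml
|_ templates
   |_ docker-compose.j2

docker-compose.j2 itself looks like this: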

blog:
  container_name: ghost
  image: "ghost"
  ports:
    - 80:2368
  volumes:
    - /PV/ghost/content:/var/lib/ghost/content
  environment:
    - url=http://{{ inventory_hostname }}

It’s just a simple docker-compose file except for that last line: {{ inventory_hostname }} renders as the VM’s IP address because of how our main playbook is composed, since we added the host to inventory by IP. This way each workstation comes up with a Docker environment keyed to that particular VM.
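For example, if the VM came up at 172.32.0.50 (a made-up address), that line would render as:

- url=http://172.32.0.50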


Now I just need a playbook that pulls all of these roles together:

---
- name: Standup new development environment
  gather_facts: no
  hosts: localhost
  vars_files:
  - vars.yml
  roles:
  - vmware
  - netapp
- hosts: vms
  name: Host Setup
  vars_files:
  - vars.yml
  roles:
  - linux
  - docker
  tasks:
  - debug: msg="Your host is set up and ready. It can be accessed at {{ inventory_hostname }}"


That’s it. That is the WHOLE playbook that is actually user facing. My vars.yml file looks like this:

netapp_hostname: 172.32.0.182
netapp_username: admin
netapp_password: netapp123
user: default
vcenter: 172.32.0.16
vm_user: administrator@vsphere.local
vm_password: Netapp123!1
vserver: nfs_vserver
master: master_vol
client: 0.0.0.0
mount: 172.32.0.183

So you can see how it all comes together. I have default values for everything; the client value, however, is effectively replaced at run time, because the export rule uses the IP collected from the new VM. I can also run this for any user from the command line using extra vars, like this:

ansible-playbook workspace.yml --extra-vars "user=david"


If you click <here>, you can see a video of this being run and the results once the workspace is available.

Ansible can drive a great deal of the workflows in a DevOps process, and NetApp is ready to meet those needs.

Join me and others on Slack in our #configurationmgmt channel to share what you are doing with Ansible, or to get help with issues with our modules. If you don’t have an invite to our Slack workspace, you can get one for free at www.netapp.io/slack. Check back frequently at www.netapp.io for more information on what can be done with configuration management and Ansible, as well as our solutions around containers and OpenStack.


[Figure: workflow chart for the Ansible process, showing the decision tree for the workspace creation playbook]


David Blackwell
Technical Marketing Engineer at NetApp
David is a twenty-year IT veteran who has been an admin for just about every aspect of a data center at one time or another. When not working, or tinkering with new software at home, David spends most of his free time with his four-year-old son and his lovely wife.
