Some sites aren’t large enough to need, or to have, an entire datacenter infrastructure. Sometimes you just want a datacenter in a box: remote locations, small sites, any place where you need the full feature set of a datacenter but only have one box. How do you get all those features from a single system?

We call these ‘Edge’ systems, and with ONTAP Select 2.5 you can now deploy on KVM. This means you can run OpenStack and ONTAP on a single box, giving you the advantages of both.

ONTAP Select is an easily deployed data management solution that runs on your existing servers, transforming them into a software-defined storage (SDS) infrastructure. ONTAP Select can be quickly installed on commodity hardware and be up and running in minutes.

I am going to walk through setting up a single node like that, using ONTAP Select as the backend for both Cinder and Manila. ONTAP Select is set up using the Select Deploy KVM image, and all of the steps necessary for this are detailed in this post. More information on the Select setup process can be found in the “Installation and Cluster Deployment Guide for KVM using Deploy 2.6” on the NetApp Support site. OpenStack will be deployed using the RDO distribution. RDO is the upstream repository of the Red Hat OpenStack Platform. Packstack will be the tool we use to install OpenStack; it is a collection of Puppet scripts that make setting up a single all-in-one OpenStack box trivial. Information about RDO and Packstack can be found at http://rdoproject.org. The minimum requirements for the system you set this up on are (a quick way to verify them is shown after the list):

  • 8 CPUs (vCPUs)
  • 32GB of RAM
  • 2 network interfaces; the first should have a static IP, and the second needs to be up but doesn’t need an IP.
  • 1.2TB or more to present as a single device for ONTAP Select. This can be a RAID of disks presented as a single device or, if you have enough space, a single drive or a partition on a single drive. It cannot be an LVM volume; it must be a raw SCSI device.
  • A minimum of 5 IP addresses, plus a pool of however many you want for OpenStack instances to use.
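If you want to check a box against these requirements quickly before starting, a few stock commands will do it (the device /dev/sdb matches my setup below; substitute your own):

[netapp@AIO ~]$ nproc                    # CPU count, should be 8 or more
[netapp@AIO ~]$ free -g                  # total memory in GB, should be 32 or more
[netapp@AIO ~]$ lsblk /dev/sdb           # the raw device for Select, 1.2TB or more
[netapp@AIO ~]$ ip link show             # both NICs should be listed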

For my setup I am using the following IPs and conventions:

  • The host OS is CentOS 7.2 with the RDO distro of OpenStack
  • My hostname is AIO.local, and I added that to /etc/hosts
  • My username is netapp
  • My device for my select storage pool is /dev/sdb
  • My first NIC is device ens160 with address 172.32.0.155
  • My second NIC is device ens192 which I will attach to my bridge, br-ex
  • My network is 172.32.0.0/24
  • My router/gateway/DNS is 172.32.0.1
  • My deploy system IP is 172.32.0.156
  • My Select cluster uses the following IPs:
    • 172.32.0.157 – cluster management
    • 172.32.0.158 – node management
    • 172.32.0.159 – SVM LIF
  • I use 172.32.0.200-172.32.0.240 for my floating IP range

There are some major differences between the CentOS repository qemu-kvm and the qemu-kvm-ev that RDO’s OpenStack packages use, so ONTAP Select has to be set up first.
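If you are curious which qemu the host currently has, rpm will tell you (a quick check; on a fresh install neither package may be present yet):

[netapp@AIO ~]$ rpm -q qemu-kvm qemu-kvm-ev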

  1. The first step is to prepare the host for running the Deploy KVM instance. A full yum update is deliberately deferred until later to avoid the conflict between Deploy and the newer qemu packages.
[netapp@AIO ~]$ sudo yum install -y qemu-kvm libvirt lshw lsscsi virt-install net-tools nfs-utils wget
[netapp@AIO ~]$ sudo yum install -y centos-release-openstack-ocata
[netapp@AIO ~]$ sudo yum install -y openvswitch
[netapp@AIO ~]$ sudo systemctl enable libvirtd && sudo systemctl start libvirtd
[netapp@AIO ~]$ sudo systemctl enable openvswitch && sudo systemctl start openvswitch
  2. The bridge that will be used to allow Deploy, Select, and any instances that are run to be accessed from the network needs to be called br-ex, because packstack looks for that name later.
[netapp@AIO ~]$ sudo ovs-vsctl add-br br-ex
[netapp@AIO ~]$ sudo ovs-vsctl add-port br-ex ens192
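You can confirm the bridge was built correctly before moving on (ovs-vsctl show is standard Open vSwitch tooling):

[netapp@AIO ~]$ sudo ovs-vsctl show      # should list bridge br-ex with port ens192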
  3. Download the Select Deploy for KVM raw image file from the NetApp support site, and move it to the host for deployment.
[netapp@AIO ~]$ scp user@host:/path/to/ONTAPdeploy2.5.raw.tgz .
[netapp@AIO ~]$ sudo mkdir -p /home/select_deploy25
[netapp@AIO ~]$ cd /home/select_deploy25/
[netapp@AIO select_deploy25]$ sudo mv ~/ONTAPdeploy2.5.raw.tgz .
[netapp@AIO select_deploy25]$ sudo tar xvzf ONTAPdeploy2.5.raw.tgz
  4. Now the deploy KVM instance can be created.
[netapp@AIO select_deploy25]$ sudo virt-install --name=deploy-kvm --vcpus=2 --ram=4096 --os-type=linux --controller=scsi,model=virtio-scsi --disk path=/home/select_deploy25/ONTAPdeploy.raw,device=disk,bus=scsi,format=raw --network "type=bridge,source=br-ex,model=virtio,virtualport_type=openvswitch" --console=pty --import --wait 0
  • If you happen to get an error like:

“ERROR unsupported configuration: CPU mode ‘custom’ for x86_64 kvm domain on x86_64 host is not supported by hypervisor
Domain installation does not appear to have been successful.
If it was, you can restart your domain by running:
virsh --connect qemu:///system start deploy-kvm
otherwise, please restart your installation.”

You will need to change the kvm group to id 78 and run the virt-install command again. You can change the group id like this:

[netapp@AIO ~]$ sudo groupmod kvm -g 78
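You can confirm the change took effect with getent (the member list on your system may differ):

[netapp@AIO ~]$ getent group kvm         # should now report gid 78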
  5. Now that the deploy system is booting, attach to its console and configure it.
[netapp@AIO select_deploy25]$ sudo virsh console deploy-kvm
Network Configuration
---------------------
Host name : deploy
Use DHCP to set networking information? [n]: n
Host IP address : 172.32.0.156
Net mask : 255.255.255.0
Gateway : 172.32.0.1
Primary DNS address : 172.32.0.1
Secondary DNS address:
Please enter in all search domains separated by spaces (can be left blank):
Selected host name : deploy
Selected IP : 172.32.0.156
Selected net mask : 255.255.255.0
Selected gateway : 172.32.0.1
Selected primary DNS : 172.32.0.1
Selected secondary DNS:
Search domains :
Calculated network : 172.32.0.0
Calculated broadcast : 172.32.0.255
Are these values correct? [y]: y
Login with admin/admin123
Password change is required.
Enter current password:
Enter new password:
Retype new password:
AutoSupport Configuration
-------------------------
Enter Product Company:
Enter Proxy URL :
(ONTAPdeploy) (press ctrl+])
  6. Select relies on a virsh storage pool for its space. A virsh storage pool is just a dedicated collection of disks used by libvirt for creating LVM resources for KVM instances.
[netapp@AIO select_deploy25]$ sudo virsh pool-define-as select_pool logical --source-dev /dev/sdb --target=/dev/select_pool
[netapp@AIO select_deploy25]$ sudo virsh pool-build select_pool && sudo virsh pool-start select_pool && sudo virsh pool-autostart select_pool
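It is worth confirming the pool is active and set to autostart before handing it to Deploy (virsh pool-info is standard libvirt tooling):

[netapp@AIO select_deploy25]$ sudo virsh pool-info select_pool   # State: running, Autostart: yes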
  7. The web GUI can be used to deploy Select, or you can connect via ssh or the console and run the CLI commands to configure and deploy it. I am covering the CLI instructions. I am also using the evaluation version of Select, which allows a 90-day testing period. Please refer to the ONTAP Select Administration Guide for information on how to apply your license if you purchase one.
  • For this part, and for the packstack install, root needs to have a password so that Deploy can validate and install the Select instance, and so Puppet can ssh to itself.
[netapp@AIO select_deploy25]$ sudo passwd root
[netapp@AIO select_deploy25]$ ssh -l admin 172.32.0.156
(ONTAPdeploy) host add --host-id 172.32.0.155 --user root

Use the host show-all command to see that the node was authenticated properly.

(ONTAPdeploy) host show-all
+--------------+---------------+------+-----------------+
|    Host      |    Status     | Type | Cluster Created |
+--------------+---------------+------+-----------------+
| 172.32.0.155 | authenticated | KVM  |      False      |
+--------------+---------------+------+-----------------+
(ONTAPdeploy) host configure --host-id 172.32.0.155 --location Earth --storage-pool select_pool --instance-type small --eval --management-network br-ex --data-network br-ex

Now that the host is configured to host Select, the cluster (one node in this case) is created.

(ONTAPdeploy) cluster create --name select --admin-password netapp123 --cluster-mgmt-ip 172.32.0.157 --netmask 255.255.255.0 --gateway 172.32.0.1 --ontap-image-version 9.2 --node-hosts 172.32.0.155 --node-mgmt-ips 172.32.0.158

This sets the admin password to netapp123; change it if desired.

After the command is run the cluster starts to deploy. This process can take from 15 minutes to 2 hours based on a number of factors. Check the status with cluster show-all.

(ONTAPdeploy) cluster show-all
+--------+-----------------+-----------+-----------------+
|  Name  |      State      | Num Nodes | Cluster Mgmt IP |
+--------+-----------------+-----------+-----------------+
| select | deploying_nodes |     1     |  172.32.0.157   |
+--------+-----------------+-----------+-----------------+
(ONTAPdeploy) cluster show-all
+--------+--------+-----------+-----------------+
|  Name  | State  | Num Nodes | Cluster Mgmt IP |
+--------+--------+-----------+-----------------+
| select | online |     1     |  172.32.0.157   |
+--------+--------+-----------+-----------------+

Once the cluster is online, exit from the ssh session.

(ONTAPdeploy) exit
  8. Now that Select is up and running, configure the host to be able to run OpenStack.
[netapp@AIO select_deploy25]$ sudo bash -c 'echo "net.ipv4.conf.default.rp_filter = 2" >> /etc/sysctl.conf'
[netapp@AIO select_deploy25]$ sudo bash -c 'echo "net.ipv4.conf.all.rp_filter = 2" >> /etc/sysctl.conf'
[netapp@AIO select_deploy25]$ sudo sed -i "s/^SELINUX=.*/SELINUX=permissive/" /etc/sysconfig/selinux
[netapp@AIO select_deploy25]$ sudo sed -i "s/^SELINUX=.*/SELINUX=permissive/" /etc/selinux/config
[netapp@AIO select_deploy25]$ sudo systemctl disable firewalld
[netapp@AIO select_deploy25]$ sudo systemctl stop firewalld
[netapp@AIO select_deploy25]$ sudo systemctl disable NetworkManager
[netapp@AIO select_deploy25]$ sudo systemctl stop NetworkManager
[netapp@AIO select_deploy25]$ sudo systemctl enable network
[netapp@AIO select_deploy25]$ sudo systemctl start network
[netapp@AIO select_deploy25]$ sudo reboot
  9. After the reboot, the yum update that wasn’t done earlier is run, followed by another reboot.
[netapp@AIO ~]$ sudo yum -y update
[netapp@AIO ~]$ sudo reboot
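Once the host is back up, you can confirm the earlier changes stuck (standard checks, nothing Select-specific):

[netapp@AIO ~]$ getenforce                           # should report Permissive
[netapp@AIO ~]$ sysctl net.ipv4.conf.all.rp_filter   # should report 2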

At this point Select will no longer boot, and you will see this error in the qemu logs:

[netapp@AIO ~]$ sudo tail /var/log/libvirt/qemu/select-01.log
qemu-kvm: /builddir/build/BUILD/qemu-2.9.0/target/i386/kvm.c:1834: kvm_put_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed.
2017-11-17 19:30:44.560+0000: shutting down, reason=failed

You don’t need to worry about this; it will be fixed in a later step.

  10. Now OpenStack will be installed on this node using Packstack. Root’s ssh key is also copied to its own authorized_keys so that Puppet can run without constantly asking for root’s password.
[netapp@AIO ~]$ sudo yum install -y openstack-packstack crudini rusers-server
[netapp@AIO ~]$ sudo ssh-copy-id root@172.32.0.155
[netapp@AIO ~]$ sudo packstack --os-neutron-ovs-bridge-mappings=extnet:br-ex --os-neutron-ovs-bridge-interfaces=br-ex:ens192 --os-neutron-ml2-type-drivers=vxlan,flat --os-neutron-l3-ext-bridge=br-ex --provision-demo=n --os-ceilometer-install=y --os-heat-install=y --os-manila-install=y --allinone --manila-backend=netapp --manila-netapp-login=admin --manila-netapp-password=netapp123 --manila-netapp-server-hostname=172.32.0.157 --manila-netapp-vserver=openstack --manila-network-type=neutron --manila-netapp-server-port=80 --manila-netapp-transport-type=http

The Packstack command here sets Neutron to use the openvswitch bridge br-ex with ens192 as its interface, and names it extnet for reference later in configs. It also disables the demo project and user, and configures ONTAP Select as a backend for Manila.
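When packstack finishes it writes keystonerc_admin to /root; one quick way to confirm the services registered is to source it and list them:

[netapp@AIO ~]$ sudo bash -c 'source /root/keystonerc_admin && openstack service list'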

  11. Now that OpenStack is installed, Select needs to be modified to run.
[netapp@AIO ~]$ sudo virsh edit select-01

Change the CPU mode entry from host-passthrough to host-model.
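You can verify the edit without reopening the file (virsh dumpxml is standard libvirt tooling):

[netapp@AIO ~]$ sudo virsh dumpxml select-01 | grep "<cpu"   # should now show mode='host-model'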

[netapp@AIO ~]$ sudo virsh start select-01
[netapp@AIO ~]$ sudo virsh console select-01

Watch until the boot finishes, then log in.

  12. Now the Select cluster will be configured. This covers creating a data aggregate, adding an SVM and its data LIF, activating NFS, making the necessary changes to the default export policy to allow connections, and finally allowing HTTP communication for Manila.
select::> aggr create -aggregate aggr1 -diskcount 1 -maxraidsize 1
Do you want to continue? {y|n}: y
select::> vserver create -vserver openstack -aggregate aggr1 -rootvolume openstack_root
select::> vserver add-aggregates -aggregates aggr1 -vserver openstack
select::> nfs on -vserver openstack
select::> vserver nfs start -vserver openstack
select::> net int create -vserver openstack -lif data_lif -role data -data-protocol cifs,nfs,fcache -home-node select-01 -home-port e0a -address 172.32.0.159 -netmask 255.255.255.0
select::> export-policy rule create -policyname default -vserver openstack -rorule any -clientmatch 0.0.0.0/0 -rwrule none
select::> set -priv adv
Do you want to continue? {y|n}: y
select::*> system services web modify -http-enabled true
select::*> exit
(press ctrl+])
  13. Manila still needs a few modifications to use the backend, and the Manila GUI components need to be added to the Horizon dashboard.
[netapp@AIO ~]$ sudo cp /root/keystonerc_admin .
[netapp@AIO ~]$ sudo chown :netapp keystonerc_admin
[netapp@AIO ~]$ sudo chmod g+r keystonerc_admin
[netapp@AIO ~]$ . keystonerc_admin
[netapp@AIO ~(keystone_admin)]$ sudo yum install openstack-manila-ui -y
[netapp@AIO ~(keystone_admin)]$ sudo systemctl restart httpd memcached
[netapp@AIO ~(keystone_admin)]$ manila type-create ontap_share false
[netapp@AIO ~(keystone_admin)]$ sudo crudini --set /etc/manila/manila.conf DEFAULT default_share_type ontap_share
[netapp@AIO ~(keystone_admin)]$ sudo systemctl restart openstack-manila-*

When this is done, verify the backend is set up properly with manila pool-list.

[netapp@AIO ~(keystone_admin)]$ manila pool-list
+------------------------+-----------+---------+-------+
|          Name          |   Host    | Backend | Pool  |
+------------------------+-----------+---------+-------+
| AIO.local@netapp#aggr1 | AIO.local | netapp  | aggr1 |
+------------------------+-----------+---------+-------+
  14. So that as much as possible is managed through OpenStack, the Cinder backend NFS export will be created using Manila. This allows administrators to resize the Cinder volume from the OpenStack interface, with the added bonus of verifying that Manila is working properly.
[netapp@AIO ~(keystone_admin)]$ manila create --name cinder --description "cinder backend" nfs 50
[netapp@AIO ~(keystone_admin)]$ manila access-allow cinder ip 172.32.0.155
Use manila show cinder to view the details about the share, and make note of the export path.
[netapp@AIO ~(keystone_admin)]$ manila show cinder
+---------------------------------------+-----------------------------------------------------------------+
|               Property                |                              Value                              |
+---------------------------------------+-----------------------------------------------------------------+
|                status                 |                             available                           |
|           share_type_name             |                            ontap_share                          |
|             description               |                           cinder backend                        |
|          availability_zone            |                               nova                              |
|           share_network_id            |                               None                              |
|           export_locations            |                                                                 |
|                                       | path = 172.32.0.159:/share_1bd81f8e_92d0_4eca_acac_8e64a3c1fb4c | <- You need this export path
|                                       |                        preferred = True                         |
|                                       |                      is_admin_only = False                      |
|                                       |            id = 1c2fb25c-7dc7-473b-a213-aeb8e7dd826b            |
|                                       |     share_instance_id = 1bd81f8e-92d0-4eca-acac-8e64a3c1fb4c    |
|            share_server_id            |                               None                              |
|             share_group_id            |                               None                              |
|                 host                  |                         AIO.local@netapp#aggr1                  |
|      revert_to_snapshot_support       |                              False                              |
|          access_rules_status          |                              active                             |
|              snapshot_id              |                               None                              |
|   create_share_from_snapshot_support  |                              False                              |
|               is_public               |                              False                              |
|              task_state               |                               None                              |
|           snapshot_support            |                              False                              |
|                  id                   |               9d1b42f0-edfa-4480-9046-cb9b7a96b0e4              |
|                 size                  |                               50                                |
| source_share_group_snapshot_member_id |                              None                               |
|               user_id                 |                 7c794fe9244e404b9fc513746a34836f                |
|                 name                  |                         cinder                                  |
|              share_type               |               11a22b6f-973f-43e6-8906-9cc47f5e7c6a              |
|             has_replicas              |                              False                              |
|           replication_type            |                               None                              |
|              created_at               |                    2017-11-18T18:05:09.000000                   |
|              share_proto              |                               NFS                               |
|        mount_snapshot_support         |                              False                              |
|              project_id               |                 346d12c154f546a7a4ddf33116cbd886                |
|               metadata                |                                {}                               |
+---------------------------------------+-----------------------------------------------------------------+
  15. Using the path from step 14, Cinder can now be configured to use a FlexVol from ONTAP Select as its backend storage.
[netapp@AIO ~(keystone_admin)]$ sudo mkdir -p /mnt/tmp
[netapp@AIO ~(keystone_admin)]$ sudo mount 172.32.0.159:/share_1bd81f8e_92d0_4eca_acac_8e64a3c1fb4c /mnt/tmp
[netapp@AIO ~(keystone_admin)]$ sudo chown cinder: /mnt/tmp
[netapp@AIO ~(keystone_admin)]$ sudo umount /mnt/tmp
[netapp@AIO ~(keystone_admin)]$ sudo bash -c "echo '172.32.0.159:/share_1bd81f8e_92d0_4eca_acac_8e64a3c1fb4c' >> /etc/cinder/shares.conf"
[netapp@AIO ~(keystone_admin)]$ sudo crudini --set /etc/cinder/cinder.conf netapp volume_backend_name netapp
[netapp@AIO ~(keystone_admin)]$ sudo crudini --set /etc/cinder/cinder.conf netapp volume_driver cinder.volume.drivers.netapp.common.NetAppDriver
[netapp@AIO ~(keystone_admin)]$ sudo crudini --set /etc/cinder/cinder.conf netapp netapp_server_hostname 172.32.0.157
[netapp@AIO ~(keystone_admin)]$ sudo crudini --set /etc/cinder/cinder.conf netapp netapp_storage_protocol nfs
[netapp@AIO ~(keystone_admin)]$ sudo crudini --set /etc/cinder/cinder.conf netapp netapp_storage_family ontap_cluster
[netapp@AIO ~(keystone_admin)]$ sudo crudini --set /etc/cinder/cinder.conf netapp netapp_login admin
[netapp@AIO ~(keystone_admin)]$ sudo crudini --set /etc/cinder/cinder.conf netapp netapp_password netapp123
[netapp@AIO ~(keystone_admin)]$ sudo crudini --set /etc/cinder/cinder.conf netapp netapp_vserver openstack
[netapp@AIO ~(keystone_admin)]$ sudo crudini --set /etc/cinder/cinder.conf netapp nfs_shares_config /etc/cinder/shares.conf
[netapp@AIO ~(keystone_admin)]$ sudo crudini --set /etc/cinder/cinder.conf netapp nas_secure_file_permissions false
[netapp@AIO ~(keystone_admin)]$ sudo crudini --set /etc/cinder/cinder.conf netapp nas_secure_file_operations false
[netapp@AIO ~(keystone_admin)]$ cinder type-create ontap
[netapp@AIO ~(keystone_admin)]$ cinder type-key ontap set volume_backend_name=netapp
[netapp@AIO ~(keystone_admin)]$ cinder type-delete iscsi
[netapp@AIO ~(keystone_admin)]$ sudo systemctl restart openstack-cinder-*
  16. Test that the Cinder backend is working.
[netapp@AIO ~(keystone_admin)]$ openstack volume create --size 1 test
[netapp@AIO ~(keystone_admin)]$ sudo ls /var/lib/cinder/mnt/55cfd26877dce863328b8daf383a023f/ (your mount directory hash will vary)

There should be a file in the Cinder mnt directory named after the test volume’s ID. After you verify that the volume was indeed created, it can be deleted.
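To match them up exactly, pull the volume’s ID from the CLI and compare it to the file name (the NFS driver names backing files volume-<id>; this is just one way to check):

[netapp@AIO ~(keystone_admin)]$ openstack volume show test -f value -c id
[netapp@AIO ~(keystone_admin)]$ sudo ls /var/lib/cinder/mnt/55cfd26877dce863328b8daf383a023f/   # look for volume-<that id>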

[netapp@AIO ~(keystone_admin)]$ openstack volume delete test
  17. Networking for the provider network and an image to test with are added now.
[netapp@AIO ~(keystone_admin)]$ neutron net-create external_network --provider:network_type flat --provider:physical_network extnet --router:external
[netapp@AIO ~(keystone_admin)]$ source ~/keystonerc_admin; neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool=start=172.32.0.200,end=172.32.0.240 --gateway=172.32.0.1 external_network 172.32.0.0/24
[netapp@AIO ~(keystone_admin)]$ curl http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img | glance image-create --name='cirros image' --visibility=public --container-format=bare --disk-format=qcow2
  18. Lastly, a project for use by end users is created. These steps can be run every time a new project is needed. Be sure to replace all instances of the word “test” with the name of your project, and the password with the password you would like the default user to have; that password is “temp” in this example. The user is created with the same name as the project. Note that ‘default’ cannot be used as a project name.

Also note that the exported values for some of the OS variables are being changed so that subsequent commands run as the new user. If you want to go back to full admin rights afterwards, you will need to source the keystonerc_admin file again.

[netapp@AIO ~(keystone_admin)]$ openstack project create --enable test
[netapp@AIO ~(keystone_admin)]$ openstack user create --project test --password temp --email root@localhost --enable test
[netapp@AIO ~(keystone_admin)]$ openstack role add --user test --project test _member_
[netapp@AIO ~(keystone_admin)]$ export OS_USERNAME=test
[netapp@AIO ~(keystone_admin)]$ export OS_PROJECT_NAME=test
[netapp@AIO ~(keystone_admin)]$ export OS_PASSWORD=temp
[netapp@AIO ~(keystone_admin)]$ neutron router-create router_test
[netapp@AIO ~(keystone_admin)]$ neutron router-gateway-set router_test external_network
[netapp@AIO ~(keystone_admin)]$ neutron net-create test_network
[netapp@AIO ~(keystone_admin)]$ neutron subnet-create --name test_subnet --dns-nameserver 8.8.4.4 --gateway 192.168.100.1 test_network 192.168.100.0/24
[netapp@AIO ~(keystone_admin)]$ neutron router-interface-add router_test test_subnet
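As a quick sanity check that the new project’s networking is in place (still running as the test user, since the OS_* variables are exported):

[netapp@AIO ~(keystone_admin)]$ neutron net-list          # should show test_network and external_network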
    • If the system ever has to be rebooted, wait for ONTAP Select to fully boot, then restart the Cinder and Manila services:
[netapp@AIO ~]$ sudo systemctl restart openstack-cinder-* openstack-manila-*

Now there is a running single-node system that can provide compute, shares, and block storage for an Edge private cloud. In addition, SnapMirror can be used to back up all the Manila shares to any other ONTAP system, including ONTAP Cloud, and access them there. The Cinder FlexVol can also be backed up and, if needed, the presented Cinder volumes can be mounted remotely as loopback devices to get read-only access to them in case of disaster or for verification.
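As a rough sketch of that loopback trick (the export path, mount points, and volume file name here are hypothetical; yours will differ):

[netapp@AIO ~]$ sudo mkdir -p /mnt/dr /mnt/recovered
[netapp@AIO ~]$ sudo mount -o ro 172.32.0.159:/cinder_mirror /mnt/dr        # mount the mirrored FlexVol read-only
[netapp@AIO ~]$ sudo mount -o loop,ro /mnt/dr/volume-<id> /mnt/recovered    # loop-mount a volume file read-only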

For more information on ONTAP Select, visit the Select information site on NetApp’s website at http://www.netapp.com/us/products/data-management-software/ontap-select-sds.aspx

In a follow-up post on this Edge system, I will demonstrate how you can use ONTAP Select to manage the configurations, lib files, and database for OpenStack, making restoring or rebuilding an edge node back to its full configuration, with all shares, volumes, and instances, a minor matter.