AWX, the open-source upstream version of Controller (formerly called Ansible Tower), has gone through a lot of changes recently.  These changes have made the install and setup process much more difficult.  The new install method also requires Kubernetes, which not everyone wants to run, or wants to run AWX on.  I have seen a lot of requests from end users for a single-node install of AWX, so I have created a new process to ensure a proper setup on a single node.

Requirements
– Minimum four CPU cores
– Minimum 6 GB of RAM
– Docker – https://docs.docker.com/get-docker/
– No other containers running on the host.  AWX must be the only Dockerized function.

That’s correct, the only software requirement is Docker.  The method I have created uses a custom Docker image to install Kubernetes in Docker, referred to as ‘kind’, on the host.  Proper ingress will be set up so that AWX can be reached via an FQDN, which is now required for proper use of AWX.  There are even methods for backing up the AWX configuration using the image, or for removing kind entirely.

*Note about kind: kind is a tool for running local Kubernetes clusters using Docker container “nodes”.  kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
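
For the curious, an ingress-ready kind cluster config generally looks something like the sketch below.  This is not necessarily the exact kind.yml the image generates, just the standard pattern from the kind documentation: the node gets labeled for ingress, and ports 80 and 443 are mapped from the host into the node container.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"   # lets the ingress controller schedule here
  extraPortMappings:
  - containerPort: 80      # host port 80 -> node container port 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443     # host port 443 -> node container port 443
    hostPort: 443
    protocol: TCP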

I am going to be running commands as a standard user that has Docker rights; if you run as root, the process is the same.
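
If your user does not have Docker rights yet, the standard way to grant them is to add the user to the docker group (you will need to log out and back in for it to take effect):

$ sudo usermod -aG docker $USER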

First, I am going to make a directory where my Kubernetes config file can be stored.

$ mkdir /home/user/kind_awx
$ cd /home/user/kind_awx

Replace /home/user/kind_awx with wherever you want to keep this file.  All further commands need to be run from within that directory.

$ docker run --rm --name kind_deploy -v /var/run/docker.sock:/var/run/docker.sock -v $(pwd):/root/.kube/ -it schmots1/kind_awx

This command will download the Docker image and create a temporary container that has access to the host’s Docker socket, and maps the directory you are in to the /root/.kube directory in the temporary container, where the Kubernetes config file and the awx.yml operator file will be written.  Once that has happened, the container will deploy kind into a container called kind-control-plane and install AWX into that Kubernetes environment.

kind create cluster --image kindest/node:v1.19.11 --config kind.yml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.19.11) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a nice day! 👋
sed -i "s/^    server:.*/    server: https:\/\/172.17.0.2:6443/" /root/.kube/config
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
kubectl apply -f password.yml
secret/awx-admin-password created
kubectl apply -f https://raw.githubusercontent.com/ansible/awx-operator/0.12.0/deploy/awx-operator.yaml
customresourcedefinition.apiextensions.k8s.io/awxs.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxbackups.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxrestores.awx.ansible.com created
clusterrole.rbac.authorization.k8s.io/awx-operator created
clusterrolebinding.rbac.authorization.k8s.io/awx-operator created
serviceaccount/awx-operator created
deployment.apps/awx-operator created
FQDN: awx.example.com
kubectl apply -f awx.yml
awx.awx.ansible.com/awx created

You will be prompted for the FQDN to use for AWX.  This should be something in your DNS, but you could also edit your /etc/hosts file to point that domain at this host.

In my example above I used awx.example.com as my FQDN, so I added this line to my /etc/hosts since it’s not in my DNS:

172.31.199.143           awx.example.com

This lets me go to https://awx.example.com and be routed to the host running kind at 172.31.199.143.  You will need to use your own host’s IP address.
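
As an optional sanity check, you can confirm the name resolves and the ingress controller is answering on the host.  A 404 or 502 at this stage is normal while AWX is still coming up:

$ curl -k -s -o /dev/null -w "%{http_code}\n" https://awx.example.com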

The initial setup of AWX takes 5-10 minutes.  You will know it’s done when https://awx.example.com displays the login screen.
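
If you happen to have kubectl installed on the host, you can also watch the deployment come up using the kubeconfig that was written to your working directory (the file is named config; with this version of the operator the AWX pods should land in the default namespace, though that may vary):

$ kubectl --kubeconfig ./config get pods -w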

The default login is:
U: admin
P: password
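
The password comes from the awx-admin-password secret created during the install (the kubectl apply -f password.yml step in the output above).  If you ever change it there, you can read the current value back out of the secret, again assuming kubectl is available:

$ kubectl --kubeconfig ./config get secret awx-admin-password -o jsonpath="{.data.password}" | base64 --decode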

AWX now uses a new concept called Execution Environments (EEs).  This means that you can update the Ansible engine without having to update AWX, or even run multiple versions of the engine or collections.  The downside is that the default system EE that AWX ships with does not work with the NetApp collections.  Fortunately, there is an EE that does.

Log in to AWX and navigate to the Execution Environments section.

Here you will add a new EE with the settings sketched below.
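
(The exact values came from a screenshot that is not reproduced here.  The form asks for a name, the container image to pull, and a pull policy; filled in, it looks roughly like this, with the image reference below being a placeholder for the NetApp-enabled EE image:)

Name:  NetApp EE
Image: <registry>/<netapp-ee-image>:latest
Pull:  Always pull container before running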

Any Job Templates you create to run playbooks that contain NetApp modules will need to be set to use this EE.

Be aware, however, that this setup is not without its issues.  First and foremost is the persistence of the data for everything that happens in AWX.  The data is stored locally somewhere under /var/lib/docker/volumes.  You can run `docker volume ls` and then `docker volume inspect <name of volume>` to get the full path.
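
For example, to locate the path on disk (volume names and paths will differ on your system):

$ docker volume ls
$ docker volume inspect <name of volume> --format '{{ .Mountpoint }}'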

The next issue is that, since this Kubernetes runs inside of Docker, a reboot of the host stops the control plane and breaks the ingress, because the internal Docker network changes slightly.  There is a fix included with the setup container, though.  If your AWX host happens to reboot, run the following from within the directory you created earlier.

$ docker run --rm --name kind_deploy -v /var/run/docker.sock:/var/run/docker.sock -v $(pwd):/root/.kube/ -it schmots1/kind_awx fix

It can take a minute or so for the control plane to restart and the ingress routing to finish updating.
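
You can confirm the control-plane container is back up with a quick docker ps, then retest the URL in your browser:

$ docker ps --filter name=kind-control-plane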

If you break something and want to start all over you can run this command to uninstall kind.

$ docker run --rm --name kind_deploy -v /var/run/docker.sock:/var/run/docker.sock -v $(pwd):/root/.kube/ -it schmots1/kind_awx clean

As usual, if you have any questions, please join us on our Slack workspace in the #configurationmgmt channel.  Don’t have an invite to our Slack yet?  Get one at www.netapp.io/slack

David Blackwell
Technical Marketing Engineer at NetApp
David is a twenty-year IT veteran who has been an admin for just about every aspect of a data center at one time or another. When not working, or tinkering with new software at home, David spends most of his free time with his six-year-old son and his lovely wife.
