In this guide, I’m going to show how to deploy Nginx servers in just a few seconds using Ansible.
We’ll deploy the servers to compute instances running on Exoscale. Ansible is particularly well suited to Exoscale: since version 2.0 it has shipped a module for the CloudStack API that underlies Exoscale’s VM provisioning.
If you want to learn more about the Exoscale API, take a look at my previous blog post. You might also be interested in our guide to deploying a Kubernetes cluster using Ansible.
Requirements
First, get access to the playbooks we’re going to use by cloning the repository:
$ git clone https://github.com/MBuffenoir/ansible-exoscale-nginx.git
$ cd ansible-exoscale-nginx
All the tools you need (Ansible and cs) are listed in the requirements.txt file, and you can install them with a simple:
$ pip install -r requirements.txt
If this is the first time you’ve used the cs tool to work with Exoscale’s API from the command line, you’ll need to export the following environment variables:
export CLOUDSTACK_ENDPOINT="https://api.exoscale.ch/compute"
export CLOUDSTACK_KEY="your api key"
export CLOUDSTACK_SECRET_KEY="your secret key"
export EXOSCALE_ACCOUNT_EMAIL="your@email.net"
Another approach is to set up a ~/.cloudstack.ini file in your home directory, with a format like this:
[cloudstack]
endpoint = https://api.exoscale.ch/compute
key = EXOmyownkey
secret = mysecret
Take a look at the cs documentation for more details.
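Since ~/.cloudstack.ini is a plain INI file, you can also read it programmatically with Python’s standard configparser, for instance to sanity-check your setup before running any playbook. A minimal sketch (the helper name is my own, not part of cs):

```python
import configparser
from pathlib import Path


def load_cloudstack_config(path="~/.cloudstack.ini", section="cloudstack"):
    """Read endpoint/key/secret from a cs-style INI file.

    Hypothetical helper for illustration; cs itself reads this file directly.
    """
    parser = configparser.ConfigParser()
    parser.read(Path(path).expanduser())
    if section not in parser:
        raise KeyError(f"missing [{section}] section in {path}")
    cfg = parser[section]
    return {
        "endpoint": cfg.get("endpoint"),
        "key": cfg.get("key"),
        "secret": cfg.get("secret"),
    }
```

This only checks that the file parses and the expected keys are present; it does not validate the credentials against the API.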
A quick tour of Ansible
Ansible is meant to be run from a management computer (a bastion host, for example) and takes its instructions from text files called playbooks.
The playbooks contain a succession of instructions in YAML format. Those instructions can be run on localhost or on remote machines. Ansible is agentless, so all it needs is a functioning SSH connection to a machine in order to execute commands on it.
Our goal in this example is to run two playbooks:
- The first one creates the SSH keys, a specific network security group and Ubuntu virtual machines.
- The second one installs the Nginx web server via apt on the previously created virtual machines.
Ansible project organization
Let’s take a quick look at the content of our project:
├── ansible.cfg
├── create-instances-playbook.yml
├── install-nginx-playbook.yml
├── inventory
└── roles
├── common
│ └── tasks
│ ├── create_sshkey.yml
│ └── main.yml
└── infra
├── tasks
│ ├── create_inv.yml
│ ├── create_secgroup.yml
│ ├── create_secgroup_rules.yml
│ ├── create_vm.yml
│ └── main.yml
└── templates
└── inventory.j2
Our two playbooks are sitting at the root of the project.
If we dig a little into the create-instances-playbook.yml file, we will see several interesting things:
- Variables that help define our infrastructure (you can modify this if need be).
- A list of roles to apply.
Roles are defined in folders and contain lists of task files, referenced in the main.yml at the root of each folder. Those task files are executed in the order they’re listed in the main.yml files. You could view a role as a “sub-playbook” that helps keep your project organized and reusable.
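As an illustration, a role’s main.yml is usually nothing more than an ordered list of includes. A sketch of what roles/infra/tasks/main.yml could look like, inferred from the file names in the tree above (the real file in the repository may differ):

```yaml
# roles/infra/tasks/main.yml -- sketch; task files run in this order
- include: create_secgroup.yml
- include: create_secgroup_rules.yml
- include: create_vm.yml
- include: create_inv.yml
```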
Instance creation on Exoscale
In our create-instances-playbook.yml file we find a vars section that defines the number of webservers we want to deploy and the kind of Linux distribution to be installed:
vars:
  ssh_key: nginx
  num_nodes: 2
  security_group_name: nginx
  template: Linux Ubuntu 16.04 LTS 64-bit
  template_filter: featured
  instance_type: Tiny
  root_disk_size: 10
  zone: ch-gva-2
Of course, you can modify those values according to your preferences, or override them at run time with ansible-playbook’s --extra-vars option.
Now, to run the playbook we use the ansible-playbook command:
$ ansible-playbook create-instances-playbook.yml
You should now see your playbook being run and the different roles referenced in it being called sequentially.
Here’s what it looks like (some parts have been pruned):
[...]
TASK [common : include] ********************************************************
included: /Users/lalu/dev/exoscale/ansible/nginx/roles/common/tasks/create_sshkey.yml
for localhost
TASK [common : Create SSH Key] *************************************************
ok: [localhost -> localhost]
TASK [common : debug] **********************************************************
ok: [localhost] => {
"msg": "private key is -----BEGIN RSA PRIVATE KEY-----\nMIICXQIBAAKBgQCOkc5K8TbK[...]
-----END RSA PRIVATE KEY-----\n"
}
[...]
TASK [infra : include] *********************************************************
included: /Users/lalu/dev/exoscale/ansible/nginx/roles/infra/tasks/create_secgroup.yml
for localhost
TASK [infra : Create Security Group] *******************************************
changed: [localhost -> localhost]
TASK [infra : include] *********************************************************
included: /Users/lalu/dev/exoscale/ansible/nginx/roles/infra/tasks/create_secgroup_rules.yml
for localhost
[...]
TASK [infra : include] *********************************************************
included: /Users/lalu/dev/exoscale/ansible/nginx/roles/infra/tasks/create_vm.yml
for localhost
TASK [infra : Create instances] ************************************************
changed: [localhost -> localhost] => (item=1)
changed: [localhost -> localhost] => (item=2)
TASK [infra : include] *********************************************************
included: /Users/lalu/dev/exoscale/ansible/nginx/roles/infra/tasks/create_inv.yml
for localhost
TASK [infra : Create inventory file] *******************************************
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=12 changed=6 unreachable=0 failed=0
As the play recap at the end shows, six changes were made.
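The “Create inventory file” task at the end renders an inventory listing the new instances’ IP addresses, so the second playbook knows which hosts to target. In the real role this is done with the inventory.j2 Jinja2 template; the idea can be sketched in plain Python (the group name here is illustrative, not taken from the actual template):

```python
def render_inventory(ips, group="nginx"):
    """Build an Ansible INI-style inventory: one group header, one IP per line.

    Plain-Python sketch of what the role's Jinja2 template produces.
    """
    lines = [f"[{group}]"]
    lines.extend(ips)
    return "\n".join(lines) + "\n"


# Example: two freshly created instances
print(render_inventory(["185.19.0.10", "185.19.0.11"]))
```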
In our Exoscale console we can now see:
- a new SSH public key (the private key is in your .ssh folder)
- a new security group and its rules
- two new instances.
It’s good to know that Ansible playbooks are idempotent: you can run them as many times as you want and, if the required state is already met, no changes will be made.
Let’s try it to make sure. If we run our playbook again, we’ll get a recap looking like this, and nothing will have changed:
PLAY RECAP *********************************************************************
localhost : ok=12 changed=0 unreachable=0 failed=0
Verifying our new instances
To check the instances are live, let’s use Ansible’s ping module:
$ ansible all -m ping
The ping module does more than just a regular ping: it uses the SSH key defined in your ansible.cfg file and ensures that Ansible can run commands on the remote host. If everything is OK it returns something similar to:
185.19.x.x | SUCCESS => {
"changed": false,
"ping": "pong"
}
185.19.x.x | SUCCESS => {
"changed": false,
"ping": "pong"
}
Now you should also be able to connect to your newly created instances with:
$ ssh -i ~/.ssh/id_rsa_nginx root@<instance-ip-address>
Start an Nginx web server on the created instances
Now that our infrastructure is set up, let’s see how we can use Ansible to start services.
A quick look into our second playbook reveals one simple task:
- name: Install nginx web server
  apt: name=nginx state=latest update_cache=true
Since we installed Ubuntu on our machines, we chose to use Ansible’s apt module.
Of course Ansible supports all major distribution package managers, so we could easily adapt this to use something other than apt.
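For context, a complete playbook around that task could be as small as the following sketch; the hosts pattern and remote_user are assumptions on my part, so check the repository’s install-nginx-playbook.yml for the exact version:

```yaml
# install-nginx-playbook.yml -- sketch, not the exact file from the repo
- hosts: all
  remote_user: root
  tasks:
    - name: Install nginx web server
      apt: name=nginx state=latest update_cache=true
```

The hosts come from the inventory file generated by the first playbook, which is why the two playbooks must be run in order.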
Running ansible-playbook install-nginx-playbook.yml gives us the following recap:
PLAY RECAP *********************************************************************
185.19.x.x : ok=2 changed=1 unreachable=0 failed=0
185.19.x.x : ok=2 changed=1 unreachable=0 failed=0
And we can now test that our webservers are live with a simple:
$ curl http://185.19.x.x
Next steps
Of course the possibilities with Ansible are endless and you can really see the benefits once you write your own modules.
A fun exercise is to start by writing some playbooks that run routine commands on the instances, like upgrading the system or making sure the web servers are correctly started.
If you haven’t already, next you can try deploying a Kubernetes cluster with Ansible.