Kubernetes is a great solution for managing distributed applications at scale. In this guide, I will show you how to launch a Kubernetes (k8s) cluster using Ansible.
Introducing Kubernetes
In case you’re not familiar with Kubernetes, here’s a quick introduction.
You might already have used Swarm to create a cluster of Docker hosts. You might even have done so using the Exoscale Docker machine driver. But Docker Swarm is not the only way to orchestrate your containers and, more broadly, to manage your distributed applications in the cloud.
Originating at Google, Kubernetes is an open-source rewrite of Borg, Google's internal application management system. The Borg paper is well worth reading: it contains a wealth of knowledge about managing systems at scale and explains some of the thinking behind Kubernetes.
Kubernetes is written in Go and its source is available on GitHub. A cluster typically consists of a head node and a set of worker nodes. The head node runs an API server, a scheduler, and a controller manager. The workers run an agent called the kubelet and a proxy that enables service discovery across the cluster. The state of the cluster is stored in etcd. We will leave a discussion of Kubernetes primitives for another time.
In this post, we will use Ansible to create a Kubernetes cluster on Exoscale. The head node will also run etcd, and we will start as many workers as we want.
Prerequisites
Before we get started, we need to install a few prerequisites just in case you do not have them on your machine already.
We will be using Ansible, which is distributed as a Python package and can be installed with the Python package installer, pip.
Exoscale uses the CloudStack API, so we'll use the Ansible core module for CloudStack. This module is based on a Python CloudStack API client written by the fine folks at Exoscale. That package, cs, is also available via pip.
If you're on Ubuntu, here's how to install pip:
$ sudo apt-get install -y python-pip
On macOS, you can use Homebrew to install a recent version of Python, which includes pip:
$ brew install python
Then use pip to pull down the rest of the prerequisites:
$ sudo pip install ansible cs sshpubkeys
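You can quickly confirm that the install worked:
$ ansible --version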
To use your Kubernetes cluster you will need a client that talks to the k8s API. This client is called kubectl, and it is extremely powerful and enjoyable to use.
On Ubuntu, and other Linux distributions, you can wget the kubectl binary:
$ wget https://storage.googleapis.com/kubernetes-release/release/v1.2.0/bin/linux/amd64/kubectl
$ chmod +x kubectl
Similarly, on macOS you can install it using brew:
$ brew install kubectl
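Either way, you can check that kubectl is ready to use:
$ kubectl version --client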
The last step before diving into cluster creation is to configure cs to talk to the Exoscale API endpoint.
To do this, create a ~/.cloudstack.ini file with your credentials and the Exoscale endpoint:
[cloudstack]
endpoint = https://api.exoscale.ch/compute
key = <your api access key>
secret = <your api secret key>
method = post
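Note that installing cs also gives you a small command-line client of the same name, which reads this same file. Once your credentials are in place, listing the available zones makes a quick smoke test:
$ cs listZones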
You should now be all set and ready to get the Ansible playbook that will do all the work for you.
Creating the cluster
Ansible is primarily used to configure machines, but its cloud modules also allow us to create cloud resources such as VM instances, SSH key pairs, and security groups.
For example, to create a VM instance in a CloudStack cloud, your Ansible task could look like this:
- local_action:
    module: cs_instance
    name: web-vm-1
    iso: Linux Debian 7 64-bit
    hypervisor: VMware
    service_offering: Tiny
    disk_offering: Performance
    disk_size: 20
    ssh_key: john@example.com
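Note the local_action: the task runs on your own machine and talks to the CloudStack API directly, instead of connecting to a remote host over SSH. Like most Ansible modules, cs_instance is idempotent, so re-running the play will not create a duplicate VM.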
Hence, we will be using Ansible to create the VM instances that make up our Kubernetes cluster, and we will also automatically create the correct security group rules.
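As an illustration, a rule opening the API server port could look like the sketch below. The actual rules live in roles/k8s/tasks/create_secgroup_rules.yml, so treat the protocol and ports here as placeholders:

- local_action:
    module: cs_securitygroup_rule
    # Placeholder values; the playbook defines the real rules.
    security_group: k8s
    protocol: tcp
    start_port: 443
    end_port: 443
    cidr: 0.0.0.0/0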
The playbook makes use of CoreOS instances and passes userdata at the instance creation step to configure them and start the Kubernetes services.
So let's get started and clone the GitHub repo containing a playbook that creates a Kubernetes cluster on Exoscale from scratch:
$ git clone https://github.com/skippbox/ansible-cloudstack.git
$ cd ansible-cloudstack
The repository contains a main playbook, k8s.yml, as well as one to remove the entire cluster, k8s-remove.yml, which can be handy. Under roles/k8s/tasks you can see the list of tasks that Ansible will perform, and in roles/k8s/templates you can see the userdata that we will pass to the CoreOS instances. This is where the systemd units that run the various k8s components (e.g. the API server and the kubelet) are defined; a simplified sketch follows the directory listing below.
$ tree
.
├── LICENSE
├── README.md
├── ansible.cfg
├── inventory
├── k8s-remove.yml
├── k8s.yml
└── roles
├── common
│ └── tasks
│ ├── create_sshkey.yml
│ └── main.yml
├── k8s
│ ├── tasks
│ │ ├── create_context.yml
│ │ ├── create_inv.yml
│ │ ├── create_secgroup.yml
│ │ ├── create_secgroup_rules.yml
│ │ ├── create_vm.yml
│ │ └── main.yml
│ └── templates
│ ├── inventory.j2
│ ├── k8s-master.j2
│ └── k8s-node.j2
└── k8s-remove
└── tasks
├── delete_context.yml
├── delete_inv.yml
├── delete_secgroup.yml
├── delete_secgroup_rules.yml
├── delete_vm.yml
└── main.yml
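To give you a flavor of those templates, a CoreOS cloud-config fragment that starts the kubelet could look roughly like this. This is a simplified sketch, not a copy of k8s-node.j2: the unit contents, the binary path, and the master_ip template variable are assumptions for illustration.

#cloud-config
# Illustrative only; see roles/k8s/templates for the real definitions.
coreos:
  units:
    - name: kubelet.service
      command: start
      content: |
        [Service]
        ExecStart=/opt/bin/kubelet --api-servers=https://{{ master_ip }}:443
        Restart=always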
Check the main playbook and edit it to meet your needs. The playbook will create an SSH key pair and a security group, then start one head node and as many workers as you want.
- hosts: localhost
  connection: local
  vars:
    ssh_key: k8s
    k8s_version: v1.2.0
    k8s_num_nodes: 2
    k8s_security_group_name: k8s
    k8s_node_prefix: foobar
    k8s_template: Linux CoreOS stable 899 64-bit 50G Disk (2016-04-05-d6cdbb)
    k8s_instance_type: Tiny
    k8s_username: foobar
    k8s_password: FdKPSuwQ
  roles:
    - common
    - k8s
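Two variables are worth calling out: k8s_num_nodes sets how many worker nodes will be started, while k8s_username and k8s_password are the basic auth credentials that will end up in your kubectl configuration, so you should change them to values of your own.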
With the k8s.yml file edited to your liking, just launch Ansible and watch it do its job. The output below hides some of the steps.
$ ansible-playbook k8s.yml
PLAY ***************************************************************************
TASK [setup] *******************************************************************
ok: [localhost]
...
TASK [common : Create SSH Key] *************************************************
ok: [localhost -> localhost]
...
TASK [k8s : Create k8s Security Group] *****************************************
ok: [localhost -> localhost]
...
TASK [k8s : Start k8s head node] ***********************************************
changed: [localhost -> localhost]
...
TASK [k8s : debug] *************************************************************
ok: [localhost] => {
"msg": "k8s master IP is 185.19.29.73"
}
...
TASK [k8s : Start k8s nodes] ***************************************************
changed: [localhost -> localhost] => (item=1)
changed: [localhost -> localhost] => (item=2)
...
TASK [k8s : Create context] ****************************************************
changed: [localhost]
...
PLAY RECAP *********************************************************************
localhost : ok=26 changed=7 unreachable=0 failed=0
Assuming everything went fine as shown above, you should now see the instances in your Exoscale console, plus a k8s key pair and a k8s security group.
At the end of the play you will notice that the last task was Create context. This handy task configures your local installation of kubectl to point directly at your k8s API endpoint. You can inspect the result by running the kubectl config view command. You should see output like this:
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://185.19.29.73:443
  name: exo
contexts:
- context:
    cluster: exo
    user: exo
  name: exo
current-context: exo
kind: Config
preferences: {}
users:
- name: exo
  user:
    password: FdKPSuwQ
    username: foobar
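For the curious, the Create context task is roughly equivalent to running the following kubectl config commands by hand, using the example values from above:

$ kubectl config set-cluster exo --server=https://185.19.29.73:443 --insecure-skip-tls-verify=true
$ kubectl config set-credentials exo --username=foobar --password=FdKPSuwQ
$ kubectl config set-context exo --cluster=exo --user=exo
$ kubectl config use-context exo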
The setup currently uses basic auth, and we do not set up a proper TLS handshake; pull requests to improve this are welcome.
With your k8s client configured, you can check that your nodes have joined the cluster. Note, however, that it might take a minute or so for the nodes to reach the Ready state.
$ kubectl get nodes
NAME STATUS AGE
185.19.29.115 Ready 11m
185.19.29.96 Ready 12m
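As an extra sanity check, kubectl cluster-info will print the address of the API server your client is talking to:
$ kubectl cluster-info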
Deploying an application
Now that you have a working Kubernetes cluster, it’s time to deploy an application. And that will be the subject of my next post!
Keep following this blog, or Exoscale’s Twitter account, for news of when the next post is available.