When you start a Compute instance on Exoscale, chances are that you’ll want to customize its environment by tweaking OS settings or installing and configuring software packages. All of this can be done during the first boot using cloud-init user-data; however, this method has limitations (see the example after the list below):
- It can fail at runtime, leaving the Compute instance in an unknown state
- It can result in reproducibility issues, especially if you install 3rd-party software without pinning a fixed version
- Its execution takes time, which extends the Compute instance bootstrap phase
- It can get challenging to maintain, depending on the complexity of the customization
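To make these shortcomings concrete, here is a hypothetical user-data snippet performing such boot-time customization (the third-party installer URL is a placeholder): every new instance re-downloads and reconfigures the software during its first boot.
$ cat > user-data <<'EOF'
#cloud-config
packages:
  - nginx
runcmd:
  # Unpinned third-party install: slow, unreproducible, can fail at runtime
  - curl -sL https://example.com/install.sh | sh
EOF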
A couple of months ago we released the custom templates feature, enabling our users to register their own templates for Compute instances. This provides a new, much more efficient way to customize the systems you run on Exoscale, effectively fixing most of the cloud-init user-data shortcomings.
In this article we will demonstrate how to effortlessly create custom templates using HashiCorp Packer, an industry-standard tool designed to create cloud virtual machine images, and how to build and register a custom template containing a web application.
As an example, we’ll use an Ubuntu Linux base, on top of which we’ll apply some OS tweaks and install our web application (a simple HTTP server that returns Hello, World!), which will start automatically on any Compute instance created from the template.
Packer Template
The Exoscale Packer Plugin has since been released. We recommend reading the article about the official Exoscale Packer Plugin if you want to create Exoscale custom templates based on a Compute instance snapshot.
As documented in its Getting Started guide, Packer uses the HashiCorp Configuration Language (HCL) as input to know what to do when executed: this configuration file is called a template (not to be confused with Exoscale templates).
Let’s compose a Packer template for our web application. First, we declare some input variables and local values that will be referenced throughout the rest of the configuration using var.<name> and local.<name> expressions:
variable "api_key" { default = "" }
variable "api_secret" { default = "" }
variable "exoscale_zone" { default = "ch-gva-2" }
locals {
image_name = "hello"
iso_url = "https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img"
iso_checksum = "file:https://cloud-images.ubuntu.com/focal/current/SHA256SUMS"
cache_dir = try(env("PACKER_CACHE_DIR"), "${path.cwd}/packer_cache")
iso_target_path = "${local.cache_dir}/${sha1(local.iso_checksum)}.iso"
ssh_private_key_file = "~/.ssh/packer"
}
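Since the API credential variables default to environment variables (via the env() function, which Packer only allows in variable defaults), credentials never have to be stored in the template file. A minimal sketch of supplying them, with placeholder key values; any declared variable can also be overridden on the command line with -var:
# Export Exoscale API credentials for Packer to pick up (placeholder values)
$ export EXOSCALE_API_KEY="EXOxxxxxxxxxxxxxxxxxxxxxxxx"
$ export EXOSCALE_API_SECRET="xxxxxxxxxxxxxxxxxxxxxxxxxxxx"

# Override the target zone for a single build
$ packer build -var 'exoscale_zone=de-fra-1' image.pkr.hcl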
Next, we move on to the most important part of the template: describing which Packer Builder to use, and how.
A straightforward way to build a disk image is to use the QEMU builder, which, as its name implies, uses QEMU to run a virtual machine locally and generate a disk image file.
This builder can be configured in fairly advanced ways; however, it is possible to achieve satisfying results with minimal configuration:
...
source "qemu" "image" {
iso_checksum = "${local.iso_checksum}"
iso_url = "${local.iso_url}"
iso_target_path = "${local.iso_target_path}"
output_directory = "${path.cwd}/output-qemu"
vm_name = "${local.image_name}.qcow2"
qemuargs = [
["-drive", "file=output-qemu/${local.image_name}.qcow2,format=qcow2,if=virtio"],
["-drive", "file=seed.img,format=raw,if=virtio"]
]
disk_image = true
disk_compression = true
disk_interface = "virtio"
disk_size = 10240
format = "qcow2"
net_device = "virtio-net"
communicator = "ssh"
ssh_private_key_file = "${var.ssh_private_key_file}"
ssh_username = "ubuntu"
use_default_display = true
shutdown_command = "rm -rf /home/ubuntu/.ssh/authorized_keys && sudo rm -rf /root/.ssh/authorized_keys && echo 'packer' | sudo -S shutdown -P now"
}
...
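Note that the QEMU builder requires QEMU to be installed on the machine running Packer, and it is considerably faster when KVM acceleration is available. A quick sanity check, assuming a Linux host:
# Check that QEMU is installed
$ qemu-system-x86_64 --version

# Check that KVM acceleration is available to your user
$ ls -l /dev/kvm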
The main reason to use Packer is to create custom images, which is done by leveraging Provisioners.
These are sequences of actions performed inside the builder’s virtual machine to modify the system and shape it into the starting state of future Compute instances using this custom template.
Here, we adjust the cloud-init configuration, then install and configure our application so that it starts automatically when the Compute instance boots:
#...
build {
  sources = ["source.qemu.image"]

  provisioner "shell" {
    execute_command = "chmod +x {{ .Path }}; sudo {{ .Path }}"
    scripts = ["scripts/install"]
  }
}
#...
Here is the content of the install script:
#!/bin/bash
set -e

# Restrict cloud-init to the data sources available on Exoscale
cat > /etc/cloud/cloud.cfg.d/99_cloudstack.cfg <<EOF
datasource_list: [ Exoscale, NoCloud ]
EOF

#-----------------------------------------------------------

# Install the spark static web server
curl -sL https://github.com/rif/spark/releases/download/v1.7.3/spark_1.7.3_linux_amd64.tar.gz | tar -C /usr/bin -xzf - spark

# Declare a systemd service running the web server at boot
cat > /etc/systemd/system/hello.service <<EOF
[Unit]
Description=hello
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/spark /opt/hello
TimeoutStopSec=5

[Install]
WantedBy=default.target
EOF

systemctl daemon-reload
systemctl enable hello.service

# Create the content served by the web application
mkdir -p /opt/hello
echo 'Hello, World!' > /opt/hello/index.html
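Since a failing provisioner aborts the whole build after the time-consuming boot phase, it can be worth sanity-checking the script locally first, for example (shellcheck is a third-party linter and may need to be installed):
# Check the script for shell syntax errors without executing it
$ bash -n scripts/install

# Optionally, lint it for common pitfalls
$ shellcheck scripts/install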
Finally, the post-processors section defines post-build operations on the resulting artefact (i.e. the disk image file).
The exoscale-import post-processor uploads the disk image to the Exoscale Object Storage service and registers it as a Compute instance template:
...
source "file" "base" {
source = local.image_output_file
target = "${local.image_name}.${local.image_format}"
}
build {
sources = ["source.file.base"]
post-processor "exoscale-import" {
api_key = var.exoscale_api_key
api_secret = var.exoscale_api_secret
image_bucket = "my-templates-${var.exoscale_zone}"
template_zones = [var.exoscale_zone]
template_name = local.image_name
template_username = local.image_username
}
}
Values used in the post-processor configuration are reused across tools such as the web Portal or the CLI, for example to enable SSH shortcuts and to list your template correctly.
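Note that the exoscale-import post-processor ships as a separate plugin, and that the Object Storage bucket referenced by image_bucket must already exist. Assuming a recent Packer release and the exo CLI, a sketch of both preparation steps:
# Install the Exoscale Packer plugin (source address per the plugin documentation)
$ packer plugins install github.com/exoscale/exoscale

# Create the destination bucket in the target zone
$ exo storage mb sos://my-templates-ch-gva-2 --zone ch-gva-2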
Before moving on to the actual template building, we have one more preparation step: in order to be able to log into the build virtual machine and execute the provisioners, Packer needs the SSH public key corresponding to the private key specified in the ssh_private_key_file variable (defined in the first section of the template file) to be deployed in that machine.
With Ubuntu and Debian base images, this can be done using cloud-init’s NoCloud data source, which expects a seed file containing user-data (here, the ubuntu user’s SSH public key) that can conveniently be generated using the cloud-localds tool from the cloud-image-utils package:
# Format the cloud-init user-data file
$ printf '#cloud-config\nssh_authorized_keys:\n- "%s"\n' \
"$(< ~/.ssh/packer.pub)" > user-data
# Build the seed file
$ cloud-localds seed.img user-data
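If the ~/.ssh/packer keypair referenced in the template doesn’t exist yet, it must be generated before running the build, for example:
# Generate a dedicated passphrase-less SSH keypair for Packer builds
$ ssh-keygen -t ed25519 -f ~/.ssh/packer -N ''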
Building the Exoscale Custom Template
Now that our Packer template is ready, we can run the command to execute it. Depending on your local system resources, network bandwidth, and the complexity of the configuration, the process can take a while; nevertheless, the real-time output gives a clear indication of what’s happening.
# Check that our template doesn't contain any syntax error
$ packer validate image.pkr.hcl
# Run the Exoscale custom template creation process
$ packer build image.pkr.hcl
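Once the build completes, the intermediate disk image also remains available locally and can be inspected if needed, e.g. with qemu-img:
# Inspect the generated disk image
$ qemu-img info output-qemu/hello.qcow2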
At this point we have our custom template registered, and we can use it to create a Compute instance that will run our application without any further action:
$ exo compute instance-template show hello \
--zone ch-gva-2 \
--visibility private
┼──────────────────┼───────────────────────────────────────────────────────┼
│ TEMPLATE │ │
┼──────────────────┼───────────────────────────────────────────────────────┼
│ ID │ 1556f3d3-4688-4389-a70b-2bea8fa78340 │
│ Zone │ ch-gva-2 │
│ Name │ hello │
│ Description │ │
│ Family │ other (64-bit) │
│ Creation Date │ 2020-04-03T15:36:40 +0200 UTC │
│ Visibility │ private │
│ Size │ 10 GiB │
│ Version │ │
│ Build │ │
│ Default User │ ubuntu │
│ SSH key enabled │ true │
│ Password enabled │ true │
│ Boot Mode │ legacy │
│ Checksum │ f9f0066c1503ad70c66b01173e6ebac5 │
┼──────────────────┼───────────────────────────────────────────────────────┼
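Note that the command below references a Security Group named hello, which must exist and allow inbound connections to the port our application listens on. If needed, it can be created along these lines (a sketch assuming a current exo CLI):
# Create a Security Group allowing inbound connections to port 8080
$ exo compute security-group create hello
$ exo compute security-group rule add hello \
    --flow ingress --protocol tcp --port 8080 --network 0.0.0.0/0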
$ exo compute instance create hello \
--zone ch-gva-2 \
--template hello \
--template-visibility private \
--security-group hello \
--ssh-key cobalt
...
$ curl "http://$(exo compute instance show hello --output-template '{{.IPAddress}}'):8080"
Hello, World!
Closing Words
We hope that this practical demonstration of how to create a custom template using Packer will unlock new possibilities for you, and contribute to improving your experience on Exoscale. Note that you can find more Packer template examples in our GitHub repository.