Intro

I’ve been doing DevOps work for a few months in my day job and it’s been an awesome experience. Most of my prior experience was in doing things kinda “by hand”: I would use hypervisor tools and interfaces to create virtual machines, log into those machines, and just follow documentation to get things going manually. This was fine when I didn’t have that many servers to manage, but as things got larger and larger, I had to find a better way. Sure, there’s the good ol’ bash script and such, but those aren’t particularly scalable and I really wanted to make my life easier.

Enter Ansible.

Ansible

Ansible is a configuration management application that allows me to make changes to an existing server without even logging into the server. Ansible works via SSH and an SSH key infrastructure. This is great since I’d hate to have to manage passwords for these things. It also means I can just disable password login for the servers for a bit of extra security.
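
As a quick aside, disabling password login is just a couple of lines in the server’s sshd config, followed by a restart of the SSH daemon (the ChallengeResponseAuthentication line is also worth setting, since some distros use it for password-style prompts):

# /etc/ssh/sshd_config
PasswordAuthentication no
ChallengeResponseAuthentication no

sudo systemctl restart sshd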

All of my SSH keys have passphrases. This kinda trips up Ansible in that it will ask you for the passphrase at every step of a role/playbook. When you’re trying to run a multitude of tasks on a multitude of machines, that isn’t practical. So, I use ssh-agent to unlock my SSH keys when I need to use Ansible. This is fairly straightforward for a single-key setup.

ssh-agent bash
ssh-add
<enter passphrase for key>
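
If you’d rather not spawn a new shell, you can also load the agent into your current session and hand ssh-add a key directly (the key path here is just an example; with no argument it loads your default keys):

eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519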

After you’ve done that, you can run your Ansible Playbooks. I’m not going to go in depth on what you can do here, but I’ll give an example YAML Ansible Playbook.

- hosts: localhost
  become: true

  tasks:
  - name: Rename Computer Hostname
    hostname:
      name: computer01

Running this playbook will connect to the host localhost and change the hostname to computer01. This is a fairly simple change, but it does require root privileges, which is why the playbook sets become: true. There are a couple of ways to provide the root password, and I’ll go over the simpler option here. I’ll go into encrypting passwords with Ansible Vault in another post later.

So, here’s where I have to introduce the idea of an inventory file. An inventory file is simply a file containing a group designation surrounded by brackets, like [localhost], and the IP addresses or fully qualified domain names of the hosts in that group. Here you can also pass in extra variables for that group of hosts, such as the user to connect as, the method for gaining root, and the password for doing so. In Ansible, the term become is used for doing things as root.

[localhost:vars]
ansible_user = user
ansible_become_method = sudo
ansible_become_pass = "password123"

[localhost]
localhost
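
Before running the playbook, it’s worth sanity-checking that Ansible can actually reach everything in the inventory. The ad-hoc ping module does exactly that (localhost here is the group name from the inventory file):

ansible -i inventory localhost -m ping

A pong reply means the SSH connection works and the host is ready for playbooks.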

Now that we have a playbook and an inventory file, we can run the playbook using the ansible-playbook command.

ansible-playbook -i inventory rename-hostname.yml
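
As an alternative to keeping ansible_become_pass in the inventory at all, Ansible can prompt for the password at run time with the --ask-become-pass (or -K) flag:

ansible-playbook -i inventory rename-hostname.yml --ask-become-pass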

If you’ve unlocked your key as described above, you should start to see things going by. Ansible reports changed for anything it successfully changed and ok for anything that was already in the desired state. If things didn’t go well, there is generally a lengthy error message that gives you an idea of what you need to do to resolve it. It really is a great tool.
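
For the playbook above, a successful run looks roughly like this (abbreviated; the exact formatting varies between Ansible versions):

PLAY [localhost] ***************************************************

TASK [Gathering Facts] *********************************************
ok: [localhost]

TASK [Rename Computer Hostname] ************************************
changed: [localhost]

PLAY RECAP *********************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0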

Terraform

Now, my experience with Terraform is still pretty limited. I’ve only been using it for about a week at the time of writing, but I’ve gotten a decent grasp of it. Terraform is a tool for provisioning infrastructure, but with code! This tool allows you to build virtual machines, docker containers, cloud instances, and lots of other things. I use it for creating docker containers on servers and creating virtual machines on our hypervisor at work. Today my example will be setting up a Murmur (Mumble voice server) docker container on a host.

Terraform works with two primary building blocks: providers and resources.

Providers are the overall plugin or module that you’ll be using for the resources. These can include docker, proxmox, aws, azure, etc. You have to acquire these providers from their sources (usually GitHub) and copy them into your Terraform plugins directory. Terraform providers are written in Go, so we can easily build them with go get. These are the commands to fetch the Docker provider and copy it into the right directory.

mkdir -p ~/.terraform.d/plugins/
go get github.com/terraform-providers/terraform-provider-docker
cp ~/go/bin/terraform-provider-docker ~/.terraform.d/plugins

Resources are the units of infrastructure you’ll be creating like docker_container or proxmox_vm_qemu. I’ll show you an example main.tf file for building that Murmur docker container and we’ll go over it.

provider "docker" {
    host = "tcp://docker.host.com:2376"
}

resource "docker_image" "murmur-server" {
  name = "mattikus/murmur"
}

resource "docker_container" "murmur" {
    name = "murmur"
    image = "${docker_image.murmur-server.latest}"
    start = true
    restart = "unless-stopped"

    ports {
        internal = "64738"
        external = "64738"
    }

    ports {
        internal = "64738"
        external = "64738"
        protocol = "udp"
    }

    volumes {
        host_path = "/home/adam/murmur.ini"
        container_path = "/etc/murmur.ini"
    }

    volumes {
        host_path = "/murmur-data"
        container_path = "/data"
    }
}

Here we load the docker provider at the top of the file. In that provider block we give the host we’ll be connecting to for creating the container.

Under that we start creating the resources for this deployment. The first is a docker_image resource, which provides the exact image we need for the container. Docker will automatically pull from Docker Hub, so all we give is the name of the image on Docker Hub.

For the main event, we create a docker_container resource named murmur for the actual container. We give the container the name murmur, grab the image as a variable from the docker_image resource we created earlier, and specify that we want the latest version of it. We want the container to start immediately and restart any time it stops unless we stop it manually.

We publish the ports for both TCP, the default, and UDP; these ports need to be accessible on the host server for this to work properly. We pass the murmur.ini file on the host into the container, which lets us make config changes without redeploying the container; we only have to restart it. We also tell the container to store its /data directory on the host path /murmur-data, so we can upgrade the container without losing the sqlite database that holds our server data.

This is all good to go now. Next, we need to initialize Terraform for the plan.

terraform init

This will scan for plugins for the providers that are required and set things up in the current directory for terraform to run properly. Now we need to create a plan for Terraform to run against.

terraform plan -out=murmur-docker-plan

This checks our main.tf file and creates a deployment plan file called murmur-docker-plan that we can apply.
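
If you want to double-check exactly what’s in that plan before applying it, terraform show can read the saved plan file back:

terraform show murmur-docker-plan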

terraform apply murmur-docker-plan

After running this, Terraform will connect to the API on the specified host, pull down the Docker image, and deploy a new container with the parameters we set.
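
The output will look roughly like this (abbreviated; timings and resource IDs will differ), ending with a summary of what was created:

docker_image.murmur_server: Creating...
docker_image.murmur_server: Creation complete after 15s
docker_container.murmur: Creating...
docker_container.murmur: Creation complete after 2s

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.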

Conclusion

I know this example was a simple one, but the implications are pretty great. Using this approach you can easily create large deployments of containers or virtual machines with Terraform and then set your configs or install applications with Ansible. Having these as code makes them auditable by those who need it, they can be written to be modular with variable files and kept secure with Ansible Vault, and they provide a set of instructions that are reproducible. If you’ve never looked into DevOps tools like these, I think this should give you something to start with. Good luck and feel free to reach out to me if you want any more information!