Provision Proxmox Containers with Ansible

I’ve been building a lot of virtual machines and containers on Proxmox lately.  To save some time I wrote an Ansible role that provisions Proxmox containers for me; I just have to update some variables.  The role is available here:

https://github.com/engonzal/ansible_role_proxmox
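The role is also published to Ansible Galaxy, so you can install it with ansible-galaxy install engonzal.proxmox. If you’d rather pin it to the GitHub repo, here’s a minimal requirements.yml sketch (the name entry is just what my playbooks reference):

# requirements.yml
- src: https://github.com/engonzal/ansible_role_proxmox.git
  scm: git
  name: engonzal.proxmox

Then run ansible-galaxy install -r requirements.yml.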

Proxmox Variables

Below is a basic set of variables. Note that pve_apiuser, pve_apipass, and pve_api_host are required.

pve_node: pve1
pve_apiuser: root@pam
pve_apipass: myAPIpassword
pve_api_host: pve1.domain.com
pve_hostname: "newhostname"
pve_template: local:vztmpl/debian-9.0-standard_9.5-1_amd64.tar.gz
pve_netif:
  net0: "name=eth0,gw=192.168.84.1,ip=192.168.84.36/22,bridge=vmbr0"
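One note on pve_apipass: you probably don’t want the real password sitting in plain text. One approach, assuming you use Ansible Vault, is to encrypt just that value and paste the output into your vars file:

ansible-vault encrypt_string 'myAPIpassword' --name 'pve_apipass'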

Proxmox Test Playbook

Now we’re going to put those variables in a playbook that will actually do something.  If you haven’t set up Ansible before, read about how to set it up with virtualenv first.
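If you just want the short version, here’s a minimal setup sketch (paths are assumptions; note that Ansible’s proxmox modules also need the proxmoxer and requests Python libraries on the machine that talks to the API):

python3 -m venv ~/venvs/ansible
source ~/venvs/ansible/bin/activate
pip install ansible proxmoxer requests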

In the example below there are two plays: the first provisions the container on Proxmox, and the second runs a role and tasks to configure the container you just created.

---
- name: Test CT Creation Play
  hosts: test_app
  connection: local
  user: root
  vars:
    pve_node: pve1
    pve_apiuser: root@pam
    pve_apipass: myAPIpassword
    pve_api_host: pve1.domain.com
    pve_hostname: "newhostname"
    pve_template: local:vztmpl/debian-9.0-standard_9.5-1_amd64.tar.gz
    pve_netif:
      net0: "name=eth0,gw=192.168.84.1,ip=192.168.84.36/22,bridge=vmbr0"
  roles:
    - role: engonzal.proxmox
      tags: pve
  post_tasks:
    - name: Allow CT to boot before continuing
      pause:
        seconds: 20
      when: "'running' not in pve_info_state.msg"

- name: Test CT Configuration Play
  hosts: test_app
  user: root
  vars:
    ansible_python_interpreter: /usr/bin/python3
    package_list:
      - vim
  roles:
    - role: engonzal.package
      tags: package

Create an Inventory

Create a simple inventory file named “hosts” with the following:

[test_app]
test
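One thing to watch: the second play connects to the container over SSH, so the inventory name needs to resolve. If “test” isn’t in DNS, you can point it at the container’s IP with ansible_host (the address below assumes the pve_netif example from earlier):

[test_app]
test ansible_host=192.168.84.36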

Running the Playbook

Now we can run the playbook!  Note the parameters used:

  • ansible-playbook: This is the ansible-playbook command
  • -i hosts: This is the inventory file with a list of hosts to run the playbook against
  • -l test: This limits the playbook to a single host named “test”
  • test.yml: This is the path of the playbook you want to run.

(ansible-2.7) engonzal@jump1:~/git/ansible-homelab$ ansible-playbook -i hosts -l test test.yml

PLAY [Test CT Creation Play] ******************************

TASK [Gathering Facts] ************************
ok: [test]

TASK [engonzal.proxmox : Provision ct test] *********************
changed: [test -> localhost]

TASK [engonzal.proxmox : Set vmid var] ********************
ok: [test]

TASK [engonzal.proxmox : Get CT116 config] **********************
ok: [test -> pve1]

TASK [engonzal.proxmox : Manually add bind mounts to CT116] **********

TASK [engonzal.proxmox : Start CT116] ************************
changed: [test -> localhost]

TASK [Allow CT to boot before continuing] *************************
Pausing for 20 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [test]

PLAY [Test CT Configuration Play] ****************************

TASK [Gathering Facts] ****************************
ok: [test]

TASK [engonzal.package : Install General Packages] ******************
changed: [test]

TASK [engonzal.package : Install Ubuntu Packages] *******************
skipping: [test]

TASK [engonzal.package : Install CentOS Packages] *******************
skipping: [test]

PLAY RECAP *********************************
test                       : ok=8    changed=3    unreachable=0    failed=0
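Since both roles are tagged, you can also re-run just part of the playbook. For example, to run only the provisioning play:

(ansible-2.7) engonzal@jump1:~/git/ansible-homelab$ ansible-playbook -i hosts -l test test.yml --tags pve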

Mounts

This role also lets you mount directories from your Proxmox node inside your containers as bind mounts.  In my case I’m using CephFS, which is mounted at the same location on each of my Proxmox nodes.  This variable is not required, and it works with any location on your Proxmox node (e.g. NFS or ZFS storage).  To use mounts, just specify the variable below:

pve_custom_mounts:
  mp0: "/mnt/pve/cephfs_data/media,mp=/media"
  mp1: "/mnt/pve/cephfs_data/downloads,mp=/downloads"
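Under the hood these become mpN entries in the container’s config on the Proxmox node, so you can verify them there (the path below assumes the CT116 example from the earlier run):

# /etc/pve/lxc/116.conf (excerpt)
mp0: /mnt/pve/cephfs_data/media,mp=/media
mp1: /mnt/pve/cephfs_data/downloads,mp=/downloads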

Going Further

When you have a few different applications, you can start to break things out into groups like the following.  The example below highlights three files:

  • group_vars/all: These variables get applied to every host, unless you override something in a group_vars/groupname file or a host_vars/hostname file
  • group_vars/plex: These are variables specific to my install of Plex
  • group_vars/nzbget: This is another app with a different set of variables.

# group_vars/all
pve_apiuser: engonzal@pve
pve_apipass: myAPIpassword
pve_api_host: pve1.domain.com
pve_guest_pass: myContainerRootPassword
pve_search: domain.com
pve_dns: '192.168.84.1'
pve_unprivileged: yes
pve_ssh: "ssh-rsa myPublicKey engonzal@hostname"

# group_vars/plex
pve_node: pve3
pve_vmid: 114
pve_hostname: "plex"
pve_netif:
  net0: "name=eth0,gw=192.168.84.1,ip=192.168.84.20/24,bridge=vmbr0"
pve_template: local:vztmpl/ubuntu-18.10-standard_18.10-1_amd64.tar.gz
pve_cores: 8
pve_mem: 4096
pve_custom_mounts:
  mp0: "/mnt/pve/cephfs_data/media,mp=/media"

# group_vars/nzbget
pve_node: pve3
pve_vmid: 115
pve_hostname: "nzbget"
pve_netif:
  net0: "name=eth0,gw=192.168.84.1,ip=192.168.84.21/24,bridge=vmbr0"
pve_template: local:vztmpl/ubuntu-18.10-standard_18.10-1_amd64.tar.gz
pve_cores: 2
pve_mem: 1024
pve_custom_mounts:
  mp0: "/mnt/pve/cephfs_data/media,mp=/media"
  mp1: "/mnt/pve/cephfs_data/downloads,mp=/downloads"

Note that group_vars/plex and group_vars/nzbget both use variables from group_vars/all, so I can keep my Proxmox API variables separate from the container specs.  Separating things keeps my group_vars files a bit more readable.
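For those group_vars files to apply, your inventory needs matching group names. A minimal sketch (the hostnames are assumptions):

[plex]
plex

[nzbget]
nzbget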

Check out the role on GitHub or Ansible Galaxy!  Proxmox has been a lot of fun to work with.  I’ve used VMware in the past but was frustrated that the exciting features were behind a paywall.  Proxmox pulls together many open source technologies with an easy-to-use interface.  I’ve found that knowing the underlying technologies (Ceph, LXC, and others) is very valuable for my day job.

Questions, problems?  Reach out on Twitter or leave a comment!