Long ago I had the not-so-uncommon idea of running my own home lab. Unfortunately, I always found excuses: not enough time, not wanting to buy hardware, and so on. This year, after some thought, I bought a Beelink mini PC with a decent amount of resources and the path was clear: Proxmox -> VMs -> Kubernetes -> Workloads, and of course all of that with every automation and IaC tool I could get my hands on.

This will probably grow into a series of posts, but for now let's start with the baseline. You can find all the code in my home-lab repo.

Proxmox

As the hypervisor for virtualization I chose Proxmox. The reasons: it's open source, it's based on Debian, and it lacks nothing compared to the super expensive enterprise solutions!

Proxmox has a feature called templates: basically a golden image that you can clone different VMs from. It also supports cloud-init. So the approach was easy: spin up an Ubuntu VM, install the prerequisites I need to run a production-grade Kubernetes cluster, and convert it to a template so I can clone VMs from it.
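For context, this is roughly what the manual flow looks like with Proxmox's qm CLI on the host; the IDs and names below are placeholders, not my actual setup. Packer automates all of these steps for us.

```shell
# Create a VM, install the OS, then mark it as a template (9000/101 are placeholder IDs)
qm create 9000 --name ubuntu-tmpl --memory 2048 --net0 virtio,bridge=vmbr0
# ... run the OS installation, then:
qm template 9000
# Later, clone full VMs from the template:
qm clone 9000 101 --name my-vm --full
```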

Packer

Why do that manually when you can use Packer to build the golden image?! Packer introduces itself as an Images as Code tool: you write "guidelines" (scripts, file copies, etc.) and Packer is responsible for creating the image, i.e. the VM template when working with Proxmox.

Below are the files used to create a VM template based on Ubuntu 22.04 with all the basic components needed for Kubernetes. The folder structure looks like this:

└── ubuntu-k8s-22_04
    ├── build.auto.pkrvars.hcl
    ├── build.pkr.hcl
    ├── config.pkr.hcl
    ├── files
    │   └── 99-pve.cfg
    └── http
        ├── meta-data
        └── user-data

Below is config.pkr.hcl, in which we simply configure Packer with the necessary variables and the plugin to use, in this case the Proxmox one.

packer {
  required_plugins {
    proxmox = {
      version = ">= 1.1.3"
      source  = "github.com/hashicorp/proxmox"
    }
  }
}

variable "proxmox_api_url" {
  type = string
}

variable "proxmox_api_token_id" {
  type = string
}

variable "proxmox_api_token_secret" {
  type      = string
  sensitive = true
}

variable "ssh_username" {
  type = string
}

variable "ssh_password" {
  type      = string
  sensitive = true
}

variable "http_server_address" {
  type = string
}
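The tree above also shows build.auto.pkrvars.hcl, which Packer picks up automatically and which supplies values for these variables. Its contents aren't listed in this post, but it would look something like the following; every value is a placeholder you should replace with your own (and keep out of version control):

```hcl
# build.auto.pkrvars.hcl -- example placeholder values
proxmox_api_url          = "https://192.168.1.10:8006/api2/json"
proxmox_api_token_id     = "packer@pve!packer-token"
proxmox_api_token_secret = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
ssh_username             = "ubuntu"
ssh_password             = "changeme"
http_server_address      = "192.168.1.50"
```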

Here’s the actual build script, build.pkr.hcl, which connects to the Proxmox API, creates a VM from an already uploaded ISO file, installs all the necessary packages, and converts the result into a VM template.

source "proxmox-iso" "ubuntu-22_04" {
  proxmox_url              = "${var.proxmox_api_url}"
  username                 = "${var.proxmox_api_token_id}"
  token                    = "${var.proxmox_api_token_secret}"
  insecure_skip_tls_verify = true

  node                 = "proxmox-01"
  vm_id                = "90102"
  vm_name              = "ubuntu-k8s-tmpl-01"
  template_description = "An Ubuntu 22.04 cloud-init enabled template, ready for Kubernetes 1.29."

  iso_file         = "local:iso/ubuntu-22.04.3-live-server-amd64.iso"
  iso_storage_pool = "local"
  unmount_iso      = true
  qemu_agent       = true

  scsi_controller = "virtio-scsi-pci"

  cores   = "2"
  sockets = "2"
  memory  = "4096"

  cloud_init              = true
  cloud_init_storage_pool = "local-lvm"

  vga {
    type = "virtio"
  }

  disks {
    disk_size    = "30G"
    format       = "raw"
    storage_pool = "local-lvm"
    type         = "virtio"
  }

  network_adapters {
    model    = "virtio"
    bridge   = "vmbr0"
    firewall = false
  }

  boot_command = [
    "<esc><wait>",
    "e<wait>",
    "<down><down><down><end>",
    "<bs><bs><bs><bs><wait>",
    "autoinstall ds=nocloud-net\\;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ ---<wait>",
    "<f10><wait>"
  ]

  boot         = "c"
  boot_wait    = "6s"
  communicator = "ssh"

  http_directory    = "ubuntu-k8s-22_04/http"
  http_bind_address = "${var.http_server_address}"

  ssh_username = "${var.ssh_username}"
  ssh_password = "${var.ssh_password}"

  # Raise the timeout if the installation takes longer
  ssh_timeout            = "30m"
  ssh_pty                = true
  ssh_handshake_attempts = 15
}

build {

  name = "pkr-ubuntu-jammy-1"
  sources = [
    "source.proxmox-iso.ubuntu-22_04"
  ]

  # Waiting for Cloud-Init to finish
  provisioner "shell" {
    inline = ["cloud-init status --wait"]
  }

  # Provisioning the VM Template for Cloud-Init Integration in Proxmox #1
  provisioner "shell" {
    execute_command = "echo -e '<user>' | sudo -S -E bash '{{ .Path }}'"
    inline = [
      "echo 'Starting Stage: Provisioning the VM Template for Cloud-Init Integration in Proxmox'",
      "sudo rm /etc/ssh/ssh_host_*",
      "sudo truncate -s 0 /etc/machine-id",
      "sudo apt -y autoremove --purge",
      "sudo apt -y clean",
      "sudo apt -y autoclean",
      "sudo cloud-init clean",
      "sudo rm -f /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg",
      "sudo rm -f /etc/netplan/00-installer-config.yaml",
      "sudo sync",
      "echo 'Done Stage: Provisioning the VM Template for Cloud-Init Integration in Proxmox'"
    ]
  }

  # Provisioning the VM Template for Cloud-Init Integration in Proxmox #2
  provisioner "file" {
    source      = "ubuntu-k8s-22_04/files/99-pve.cfg"
    destination = "/tmp/99-pve.cfg"
  }
  provisioner "shell" {
    inline = ["sudo cp /tmp/99-pve.cfg /etc/cloud/cloud.cfg.d/99-pve.cfg"]
  }

  # Disable swap
  provisioner "shell" {
    inline = [
      "sudo swapoff -a",
      "sudo sed -i '/swap/ s/^/#/' /etc/fstab"
    ]
  }

  # Add kernel parameters
  provisioner "shell" {
    inline = [
      "sudo tee /etc/modules-load.d/containerd.conf <<EOF",
      "overlay",
      "br_netfilter",
      "EOF",
      "sudo modprobe overlay",
      "sudo modprobe br_netfilter"
    ]
  }

  # Configure Kubernetes related kernel parameters
  provisioner "shell" {
    inline = [
      "sudo tee /etc/sysctl.d/kubernetes.conf <<EOF",
      "net.bridge.bridge-nf-call-ip6tables = 1",
      "net.bridge.bridge-nf-call-iptables = 1",
      "net.ipv4.ip_forward = 1",
      "EOF",
      "sudo sysctl --system"
    ]
  }

  # Install containerd & docker, also configure containerd runtime to use systemd as cgroup
  provisioner "shell" {
    inline = [
      "sudo apt-get remove -y needrestart", # TODO, needs to be revised with `DEBIAN_FRONTEND=noninteractive` or other envs
      "sudo apt-get update",
      "sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates gpg",
      "sudo install -m 0755 -d /etc/apt/keyrings",
      "sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc",
      "sudo chmod a+r /etc/apt/keyrings/docker.asc",
      "echo \"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable\" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null",
      "sudo apt-get update",
      "while lsof /var/lib/dpkg/lock-frontend ; do sleep 10; done;",
      "sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin",
      "containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1",
      "sudo sed -i 's/SystemdCgroup \\= false/SystemdCgroup \\= true/g' /etc/containerd/config.toml",
      "sudo systemctl restart containerd",
      "sudo systemctl enable containerd"
    ]
  }

  # Install k8s stuff (kubectl, kubeadm, kubelet) to be ready for provisioning a Kubernetes cluster
  provisioner "shell" {
    inline = [
      "curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg",
      "echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list",
      "sudo apt update",
      "while lsof /var/lib/dpkg/lock-frontend ; do sleep 10; done;",
      "sudo apt install -y kubelet kubeadm kubectl",
      "sudo apt-mark hold kubelet kubeadm kubectl"
    ]
  }
}
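Two of the sed one-liners in the provisioners above are easy to misread, so here is what they actually do, demonstrated on throwaway sample files under /tmp (hypothetical paths; the real targets are /etc/fstab and /etc/containerd/config.toml):

```shell
# 1) 'Disable swap' provisioner: comment out any fstab line mentioning swap
printf '%s\n' '/dev/sda1 / ext4 defaults 0 1' '/swap.img none swap sw 0 0' > /tmp/fstab.demo
sed -i '/swap/ s/^/#/' /tmp/fstab.demo

# 2) containerd provisioner: flip the cgroup driver to systemd in the default config
echo '            SystemdCgroup = false' > /tmp/config.toml.demo
sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /tmp/config.toml.demo

cat /tmp/fstab.demo /tmp/config.toml.demo
```

The swap line ends up commented while the root filesystem line stays untouched, and SystemdCgroup becomes true.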

Even though every VM is cloned from the same template, we still need some flexibility to configure parameters when creating new VMs, for example the IP configuration or a new user with sudo privileges. This is handled by cloud-init. Below is the http/user-data autoinstall configuration that Packer serves to the Ubuntu installer when building the template.

#cloud-config
autoinstall:
  version: 1
  locale: en_US
  refresh-installer:
      update: false
  keyboard:
    layout: us
  ssh:
    install-server: true
    allow-pw: true
    disable_root: true
    ssh_quiet_keygen: true
    allow_public_ssh_keys: true
  storage:
    layout:
      name: direct
    swap:
      size: 0
  network:
    network:
      version: 2
      ethernets:
        ens18:
          dhcp4: no
          addresses: [192.168.1.247/24]
          gateway4: 192.168.1.1
          nameservers:
            addresses: [192.168.1.1]
  user-data:
    package_upgrade: true
    timezone: Europe/Athens
    ssh_pwauth: true
    users:
      - name: admin
        groups: [adm, sudo]
        lock-passwd: false
        sudo: ALL=(ALL) NOPASSWD:ALL
        shell: /bin/bash
        passwd: '$6$dHMEctI5H$umnFt4HSjB2hRFdR10R3bxSLz/h6BO.tjoAEHFDkO2qLRmZhmaGuw9LKoj5mtinAYNTbankypAobyErj4HaIe/'
  packages:
    - qemu-guest-agent
    - sudo
    - vim
    - zip
    - unzip
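The passwd value above is a SHA-512 crypt hash, not a plaintext password. Assuming you have OpenSSL available, you can generate your own like this (the salt is fixed here only so the output is reproducible; omit -salt to get a random one):

```shell
# Produce a SHA-512 crypt hash suitable for the autoinstall 'passwd' field
hash=$(openssl passwd -6 -salt demosalt 'changeme')
echo "$hash"   # starts with $6$demosalt$
```

mkpasswd -m sha-512 (from the whois package) works just as well.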

As with most HashiCorp products, the commands you need to run are really simple. From the parent directory of the ubuntu-k8s-22_04 folder (note the relative http_directory path), run the commands below.

packer init ubuntu-k8s-22_04
packer build ubuntu-k8s-22_04

If you log into the Proxmox UI, you will see a VM spinning up while Packer does its thing :) Unfortunately, HashiCorp decided to embrace the dark side and changed its licensing. I couldn't find a strong open source alternative, so I decided to keep using Packer for the moment.

OpenTofu

On the contrary, OpenTofu is a strong open source alternative to HashiCorp's Terraform. I don't feel OpenTofu/Terraform needs an introduction, as it has been the de facto Infrastructure as Code tool of recent years.

I am going to use OpenTofu to automatically deploy new VMs based on the template we created in the previous steps. This way I can easily add more nodes to my Kubernetes cluster and be sure that every VM has an identical configuration (apart from the IP addressing).

Below is the directory tree of the OpenTofu files.

opentofu/
├── credentials.auto.tfvars
├── dev-vms.tf
├── k8s-vms.tf
├── provider.tf
└── terraform.tfstate
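The credentials.auto.tfvars file provides values for the variables declared in provider.tf below. Its contents aren't listed for obvious reasons, but based on those variables it would look something like this (placeholder values only; keep it, along with terraform.tfstate, out of version control):

```hcl
# credentials.auto.tfvars -- example placeholder values
proxmox_api_url          = "https://192.168.1.10:8006/api2/json"
proxmox_api_token_id     = "terraform@pve!tofu-token"
proxmox_api_token_secret = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
vm_username              = "admin"
vm_password              = "changeme"
```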

Note that we need the provider configuration, which is similar to Packer's plugins above. In this example I used Telmate's Proxmox provider to communicate with the Proxmox API. I also configured some variables to be used in the TF code. All of this lives in the provider.tf file.

# Proxmox Provider
# ---
# Initial Provider Configuration for Proxmox

terraform {

  required_version = ">= 0.13.0"

  required_providers {
    proxmox = {
      source  = "telmate/proxmox"
      version = "3.0.1-rc3"
    }
  }
}

provider "proxmox" {

  pm_api_url          = var.proxmox_api_url
  pm_api_token_id     = var.proxmox_api_token_id
  pm_api_token_secret = var.proxmox_api_token_secret
  # Skip TLS Verification
  pm_tls_insecure = true

}

variable "proxmox_api_url" {
  type = string
}

variable "proxmox_api_token_id" {
  type      = string
  sensitive = true
}

variable "proxmox_api_token_secret" {
  type      = string
  sensitive = true
}

variable "vm_username" {
  type = string
}

variable "vm_password" {
  type      = string
  sensitive = true
}

Below is the k8s-vms.tf file, which creates one Kubernetes master node and (at the time of writing) three worker nodes. Notice the cloud-init configuration items such as ipconfig0, which sets the VM's IP address at boot. Please adjust them to your network setup.

# Proxmox VMs
# ---
# Create VMs cloned from a cloud-init template

resource "proxmox_vm_qemu" "kubernetes-masters" {
  # Create Kubernetes Master nodes
  count = 1

  # VM General Settings
  target_node = "proxmox-01"
  vmid        = "16${count.index}"
  name        = "k8s-master-0${count.index + 1}"
  desc        = "Kubernetes master node ${count.index + 1} \n\n IP `192.168.1.16${count.index}`"
  tags        = "k8s;master" # semicolon-separated format

  # VM OS Settings
  clone   = "ubuntu-k8s-tmpl-01"
  qemu_os = "other"
  agent   = 1 # Installing agent through cloud-init

  # VM CPU Settings
  sockets = 1
  cores   = 2
  cpu     = "host"

  # VM Memory Settings
  memory = 8192

  # VM Disk Settings
  disks {
    ide {
      ide2 {
        cdrom {
          passthrough = false
        }
      }
      ide3 {
        cloudinit {
          storage = "local-lvm"
        }
      }
    }
    virtio {
      virtio0 {
        disk {
          size    = 30
          storage = "local-lvm"
        }
      }
    }
  }

  # VM Network Settings
  network {
    bridge = "vmbr0"
    model  = "virtio"
  }
  # VM Cloud-Init Settings
  os_type = "cloud-init"

  # Credentials passed through cloud-init
  ciuser     = var.vm_username
  cipassword = var.vm_password

  # IP Address and Gateway (cloud-init)
  ipconfig0 = "ip=192.168.1.16${count.index}/24,gw=192.168.1.1"

  # (Optional) Add your SSH KEY
  # sshkeys = <<EOF
  # #YOUR-PUBLIC-SSH-KEY
  # EOF
}

resource "proxmox_vm_qemu" "kubernetes-workers" {
  # Create Kubernetes Worker nodes
  count = 3

  # VM General Settings
  target_node = "proxmox-01"
  vmid        = "16${count.index + 3}"
  name        = "k8s-worker-0${count.index + 1}"
  desc        = "Kubernetes worker node ${count.index + 1} \n\n IP `192.168.1.16${count.index + 3}`"
  tags        = "k8s;worker" # semicolon-separated format

  # VM OS Settings
  clone   = "ubuntu-k8s-tmpl-01"
  qemu_os = "other"
  agent   = 1 # Installing agent through cloud-init

  # VM CPU Settings
  sockets = 1
  cores   = 2
  cpu     = "host"

  # VM Memory Settings
  memory = 8192

  # VM Disk Settings
  disks {
    ide {
      ide2 {
        cdrom {
          passthrough = false
        }
      }
      ide3 {
        cloudinit {
          storage = "local-lvm"
        }
      }
    }
    virtio {
      virtio0 {
        disk {
          size    = 60
          storage = "local-lvm"
        }
      }
    }
  }

  # VM Network Settings
  network {
    bridge = "vmbr0"
    model  = "virtio"
  }
  # VM Cloud-Init Settings
  os_type = "cloud-init"

  # Credentials passed through cloud-init
  ciuser     = var.vm_username
  cipassword = var.vm_password

  # IP Address and Gateway (cloud-init)
  ipconfig0 = "ip=192.168.1.16${count.index + 3}/24,gw=192.168.1.1"

  # (Optional) Add your SSH KEY
  # sshkeys = <<EOF
  # #YOUR-PUBLIC-SSH-KEY
  # EOF
}

OpenTofu works similarly to Terraform and Packer. Run the commands below to initialize the provider, plan the changes, and actually apply them to the Proxmox hypervisor.

tofu init
tofu plan
tofu apply
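After the apply finishes, a quick sanity check is to SSH into each node and confirm the Kubernetes tooling is there. The IPs below are the ones assigned through ipconfig0 above, and admin is the cloud-init user; adjust both to your setup:

```shell
# Check hostname and kubeadm on every node created above
for ip in 192.168.1.160 192.168.1.163 192.168.1.164 192.168.1.165; do
  ssh admin@"$ip" 'hostname && kubeadm version -o short'
done
```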

After a few minutes you will see your VMs running and, hopefully, they will have everything installed (e.g. kubeadm) to bootstrap a Kubernetes cluster.

Summary

In this part of the series, I explored automating the creation of images and virtual machines on Proxmox, resulting in a fast setup of the infrastructure for a small cluster. Now I think we are ready to play around with the interesting stuff: Kubernetes itself. You can explore setting up the actual cluster with different tools like kubeadm or kubespray.

In the next post I will probably dive straight into the Kubernetes side of things, taking the cluster setup for granted. ;)