Building Debian 13 Templates with Packer and HashiCorp Vault
A comprehensive guide to creating automated Debian 13 VM templates for Proxmox using Packer with HashiCorp Vault integration for secure credential management
This guide walks you through building fully automated Debian 13 (Trixie) VM templates for Proxmox using HashiCorp Packer with Vault integration for secure credential management. Perfect for engineers looking to implement Infrastructure as Code (IaC) practices in their homelab or production environment.
The complete code for this guide is available in this repo: https://github.com/tzalistar/packer-templates.
The packages installed on these templates make sense for my use case and serve as an example. Feel free to modify the templates and add packages that better fit your workload.
This guide is part of a series. Check out the companion guide for Building Ubuntu Templates.
Prerequisites
Before you begin, ensure you have the following:
Required Infrastructure
- Proxmox VE cluster (version 7.x or 8.x) with:
  - At least one node with sufficient resources
  - Storage pool for VM disks (e.g., `local-lvm`, `ceph`, or NFS)
  - Storage pool for ISO files
  - Network bridge configured
- HashiCorp Vault instance (running and unsealed) with:
  - Access to create/read secrets in a KV v2 engine
  - A Vault token with appropriate permissions
Required Software
- Packer (version 1.9.0 or later)
  - Download from https://www.packer.io/downloads
- Vault CLI configured with access to your Vault instance
  - Set the `VAULT_ADDR` and `VAULT_TOKEN` environment variables
Network Requirements
- An available IP address for Packer’s HTTP server (for serving preseed files)
- Network connectivity from Proxmox to this IP
- Internet access for downloading Debian ISO and packages
Overview
This setup creates a production-ready Debian 13 base template with:
- Two user accounts with SSH key and password authentication
- Custom LVM partitioning with a dedicated `/var/log` partition
- QEMU guest agent for better VM management
- Serial console support for Proxmox web console (xterm.js)
- Cloud-init integration for easy VM customization
- Subscription manager (ATIX) for Foreman/Katello integration
Architecture
The build process follows this workflow:
- Packer retrieves credentials from Vault
- Downloads Debian ISO to Proxmox storage
- Creates a VM and boots from ISO
- Serves preseed configuration via HTTP
- Automates the installation process
- Provisions the system with additional software
- Converts the VM to a template
Step 1: Configure HashiCorp Vault
1.1 Create Vault Secret Structure
Store your Proxmox and user credentials in Vault. This keeps sensitive data out of your code.
```shell
# Set your Vault address and authenticate
export VAULT_ADDR="https://vault.example.com:8200"
export VAULT_TOKEN="your-vault-token"

# Create the secret path
vault kv put kv/proxmox \
  api_url="https://proxmox.example.com:8006/api2/json" \
  api_token_id="packer@pve!packer-token" \
  api_token_secret="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
  default_user="admin" \
  default_user_ssh_key="ssh-rsa AAAAB3NzaC1yc2E... user@host" \
  default_user_ssh_pass="SecurePassword123!" \
  default_user_password_hash='$6$rounds=656000$...' \
  ansible_user="automation" \
  ansible_user_ssh_key="ssh-rsa AAAAB3NzaC1yc2E... ansible@host" \
  ansible_user_ssh_pass="AnsiblePass456!" \
  ansible_user_password_hash='$6$rounds=656000$...'
```
Important: The password hash must be in SHA-512 crypt format. Generate it with:

```shell
mkpasswd -m sha-512 'your-password'
```
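If `mkpasswd` (from the `whois` package) isn't installed, `openssl` can produce an equivalent SHA-512 crypt hash — a minimal alternative, assuming OpenSSL 1.1.1 or later:

```shell
# Generate a SHA-512 crypt hash with openssl instead of mkpasswd.
# The output starts with $6$, the SHA-512 crypt identifier.
openssl passwd -6 'your-password'
```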
Why Two Users? The `default_user` is for administrative access, while `ansible_user` is dedicated to automation tools. This separation follows the principle of least privilege and makes it easier to manage access permissions.
1.2 Understanding the Vault Schema
| Key | Purpose | Format |
|---|---|---|
| `api_url` | Proxmox API endpoint | `https://HOSTNAME:8006/api2/json` |
| `api_token_id` | Proxmox API token identifier | `user@realm!token-name` |
| `api_token_secret` | Proxmox API token secret | UUID format |
| `default_user` | Primary admin user | Username string |
| `default_user_ssh_key` | SSH public key for admin | Full SSH public key |
| `default_user_ssh_pass` | SSH password (plaintext) | For Packer SSH connection |
| `default_user_password_hash` | Password hash (SHA-512) | For OS user creation |
| `ansible_user` | Automation user | Username string |
| `ansible_user_ssh_key` | Ansible SSH public key | Full SSH public key |
| `ansible_user_ssh_pass` | Ansible SSH password | For automation workflows |
| `ansible_user_password_hash` | Ansible password hash | For OS user creation |
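To spot-check individual keys after writing the secret, `vault kv get -field` prints a single value — useful for verifying exactly what Packer will receive (paths match the schema above):

```shell
# Read one field at a time from the KV v2 secret.
# Note: the CLI path omits the data/ segment that the raw API
# (and Packer's vault() function) requires.
vault kv get -field=api_url kv/proxmox
vault kv get -field=default_user kv/proxmox
```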
1.3 Create Proxmox API Token
In your Proxmox web interface:
- Navigate to Datacenter → Permissions → API Tokens
- Click Add and create a token:
  - User: `packer@pve` (or create a dedicated user)
  - Token ID: `packer-token`
  - Privilege Separation: Unchecked (or grant appropriate permissions)
- Copy the secret - you won’t be able to see it again!
- Store it in Vault as shown above
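The same token can be created from the Proxmox node's shell with `pveum` — a sketch assuming a `pve`-realm user named `packer` already exists; `--privsep 0` mirrors the unchecked Privilege Separation box:

```shell
# Create the API token on the Proxmox node.
# The secret is printed once; store it in Vault immediately.
pveum user token add packer@pve packer-token --privsep 0
```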
Step 2: Prepare the Packer Configuration
2.1 Project Structure
Create the following directory structure:
```
debian-trixie/
├── packer/
│   ├── debian-template.pkr.hcl   # Main Packer configuration
│   ├── variables.pkr.hcl         # Variable definitions
│   └── http/
│       └── preseed.cfg.pkrtpl    # Debian preseed template
└── README.md
```
2.2 Variables Configuration
Create `variables.pkr.hcl`:
```hcl
# Variables for Debian Trixie base template build
variable "proxmox_node" {
  type        = string
  description = "Proxmox node name (e.g., pve1, node01)"
  default     = "pve-node01"
}

variable "vm_id" {
  type        = number
  description = "VM template ID (null for auto-allocation)"
  default     = null
}

variable "vm_name" {
  type        = string
  description = "VM template name"
  default     = "debian-trixie-base"
}

# ISO Configuration
variable "iso_url" {
  type        = string
  description = "Debian 13.2.0 ISO download URL"
  default     = "https://cdimage.debian.org/cdimage/release/13.2.0/amd64/iso-cd/debian-13.2.0-amd64-netinst.iso"
}

variable "iso_checksum" {
  type        = string
  description = "ISO SHA-512 checksum"
  default     = "sha512:891d7936a2e21df1d752e5d4c877bb7ca2759c902b0bfbf5527098464623bedaa17260e8bd4acf1331580ae56a6a87a08cc2f497102daa991d5e4e4018fee82b"
}

variable "iso_storage_pool" {
  type        = string
  description = "Proxmox storage for ISOs"
  default     = "local"
}

variable "storage_pool" {
  type        = string
  description = "Storage pool for VM disks"
  default     = "local-lvm"
}

# VM Resources
variable "cpu_cores" {
  type        = number
  description = "Number of CPU cores"
  default     = 2
}

variable "memory" {
  type        = number
  description = "Memory in MB"
  default     = 2048
}

variable "disk_size" {
  type        = string
  description = "Disk size (minimum 35G)"
  default     = "35G"
}

# Network Configuration
variable "vlan_tag" {
  type        = number
  description = "VLAN tag (0 for no VLAN)"
  default     = 0
}

variable "bridge" {
  type        = string
  description = "Network bridge (e.g., vmbr0, vmbr1)"
  default     = "vmbr0"
}
```
Key Configuration Points:
- `proxmox_node`: Replace with your actual Proxmox node name
- `vm_id`: Set to `null` to let Proxmox auto-assign an ID, or specify a fixed number (e.g., `9000`)
- `iso_storage_pool`: Common values are `local`, `nfs-isos`, or your custom ISO storage
- `storage_pool`: Where the VM disk will be created (`local-lvm`, `ceph-pool`, etc.)
- `bridge`: Your network bridge - check with `pvesh get /nodes/{node}/network`
- `vlan_tag`: Set to `0` if not using VLANs
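To confirm node and bridge values before a build, the Proxmox API can be queried from any node's shell. `pvesh` ships with Proxmox VE; the node name `pve-node01` here is just a placeholder:

```shell
# List cluster nodes (confirms the proxmox_node value)
pvesh get /nodes

# List bridge interfaces on a node (confirms the bridge value)
pvesh get /nodes/pve-node01/network --type bridge
```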
2.3 Main Packer Template
The main template file `debian-template.pkr.hcl` contains:
Required Plugins
```hcl
packer {
  required_version = ">= 1.9.0"
  required_plugins {
    proxmox = {
      version = ">= 1.1.8"
      source  = "github.com/hashicorp/proxmox"
    }
  }
}
```
What this does: Ensures Packer version compatibility and loads the Proxmox plugin.
Vault Integration
```hcl
locals {
  # Retrieve credentials from Vault
  proxmox_api_url          = vault("kv/data/proxmox", "api_url")
  proxmox_api_token_id     = vault("kv/data/proxmox", "api_token_id")
  proxmox_api_token_secret = vault("kv/data/proxmox", "api_token_secret")

  # User configurations
  default_user               = vault("kv/data/proxmox", "default_user")
  default_user_ssh_key       = vault("kv/data/proxmox", "default_user_ssh_key")
  default_user_ssh_pass      = vault("kv/data/proxmox", "default_user_ssh_pass")
  default_user_password_hash = vault("kv/data/proxmox", "default_user_password_hash")

  ansible_user               = vault("kv/data/proxmox", "ansible_user")
  ansible_user_ssh_key       = vault("kv/data/proxmox", "ansible_user_ssh_key")
  ansible_user_ssh_pass      = vault("kv/data/proxmox", "ansible_user_ssh_pass")
  ansible_user_password_hash = vault("kv/data/proxmox", "ansible_user_password_hash")

  # Build timestamp
  timestamp = formatdate("YYYY-MM-DD-hhmm", timestamp())
}
```
What this does: The `vault()` function reads secrets from your Vault instance at build time. The path format is `vault("kv/data/SECRET_PATH", "KEY_NAME")` - note the `data/` segment, which the KV v2 API requires even though the CLI omits it.
Why Vault? Storing credentials in Vault instead of plaintext files prevents accidental exposure through version control and provides centralized secret management with audit logging.
Proxmox Source Configuration
```hcl
source "proxmox-iso" "debian-trixie" {
  # Proxmox API connection
  proxmox_url              = local.proxmox_api_url
  username                 = local.proxmox_api_token_id
  token                    = local.proxmox_api_token_secret
  insecure_skip_tls_verify = true

  node    = var.proxmox_node
  vm_id   = var.vm_id
  vm_name = var.vm_name
```
Key points:
- `insecure_skip_tls_verify`: Set to `true` for self-signed certificates. Use `false` in production with valid certs.
- `vm_id`: When `null`, Proxmox assigns the next available ID from its pool.
ISO Configuration
```hcl
  boot_iso {
    type             = "scsi"
    iso_url          = var.iso_url
    iso_checksum     = var.iso_checksum
    iso_storage_pool = var.iso_storage_pool
    iso_download_pve = true
    unmount          = true
  }
```
What this does:
- `iso_download_pve = true`: Packer downloads the ISO directly to Proxmox storage (efficient for remote builds)
- `unmount = true`: The ISO is automatically ejected after installation
- `type = "scsi"`: Uses the SCSI bus for the ISO (better performance than IDE)
Bandwidth Saver: With `iso_download_pve = true`, the ISO downloads once to Proxmox and is reused for multiple builds. This is especially useful when running Packer from a remote location.
Hardware Configuration
```hcl
  cores   = var.cpu_cores
  memory  = var.memory
  sockets = 1

  # UEFI BIOS with Secure Boot disabled
  bios = "ovmf"
  efi_config {
    efi_storage_pool  = var.storage_pool
    pre_enrolled_keys = false # Disable Secure Boot
  }

  # Disk configuration
  scsi_controller = "virtio-scsi-single"
  disks {
    disk_size    = var.disk_size
    storage_pool = var.storage_pool
    type         = "scsi"
    format       = "raw"
    io_thread    = true
    discard      = true
    ssd          = true
  }
```
Explanation:
- `bios = "ovmf"`: UEFI boot (modern standard, required for Secure Boot compatibility)
- `pre_enrolled_keys = false`: Disables Secure Boot (most Linux distributions work better without it)
- `format = "raw"`: Raw disk format (better performance than qcow2)
- `io_thread = true`: Enables I/O threads (significant performance improvement)
- `discard = true`: Enables TRIM support for SSDs
- `ssd = true`: Optimizes the I/O scheduler for SSD storage
Raw vs QCOW2: Raw format offers better performance with no overhead, while qcow2 provides thin provisioning and snapshots. For templates, raw is preferred as clones handle thin provisioning.
Network Configuration
```hcl
  network_adapters {
    model    = "virtio"
    bridge   = var.bridge
    vlan_tag = var.vlan_tag
    firewall = false
  }

  boot = "order=scsi0;scsi1;net0"
```
What this does:
- `model = "virtio"`: Paravirtualized network driver (best performance)
- `boot = "order=scsi0;scsi1;net0"`: Boot from disk first, then ISO, then network (PXE)
Boot Command for Debian Preseed
```hcl
  boot_wait = "8s"
  boot_command = [
    "c<wait5>",
    "linux /install.amd/vmlinuz auto=true priority=critical url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg locale=en_US keyboard-configuration/xkb-keymap=us net.ifnames=0 biosdevname=0 interface=auto hostname=debian13 domain=local debian-installer=en_US fb=false debconf/frontend=noninteractive console-setup/ask_detect=false<wait>",
    "<enter><wait>",
    "initrd /install.amd/gtk/initrd.gz<wait>",
    "<enter><wait>",
    "boot<enter>"
  ]
```
Understanding the boot command:
- `c<wait5>`: Enter GRUB command-line mode and wait 5 seconds
- `linux /install.amd/vmlinuz ...`: Boot the kernel with parameters:
  - `auto=true priority=critical`: Enable automated installation with high priority
  - `url=http://...`: Fetch the preseed from Packer’s HTTP server (variables are interpolated at runtime)
  - `net.ifnames=0 biosdevname=0`: Use traditional interface names (eth0, eth1) instead of predictable names
  - `hostname=debian13 domain=local`: Set a temporary hostname during install
  - `debconf/frontend=noninteractive`: No interactive prompts
- `initrd /install.amd/gtk/initrd.gz`: Load the initramfs
- `boot<enter>`: Start the installation
Why Traditional Network Names? Using `net.ifnames=0` gives you `eth0`, `eth1` instead of `enp0s3`. This makes scripts and documentation more portable across different hardware configurations.
HTTP Server for Preseed
```hcl
  http_bind_address = "YOUR_IP_ADDRESS" # e.g., "192.168.1.100"
  http_port_min     = 8100
  http_port_max     = 8100

  http_content = {
    "/preseed.cfg" = templatefile("${path.root}/http/preseed.cfg.pkrtpl", {
      default_user          = local.default_user
      default_user_ssh_key  = local.default_user_ssh_key
      default_user_password = local.default_user_password_hash
      ansible_user          = local.ansible_user
      ansible_user_ssh_key  = local.ansible_user_ssh_key
      ansible_user_password = local.ansible_user_password_hash
    })
  }
```
Critical configuration:
- `http_bind_address`: MUST be an IP address accessible from Proxmox VMs
  - Find yours with `ip addr show` or `hostname -I`
  - Should be on the same network as your Proxmox management interface
- `http_port_min`/`http_port_max`: Set to the same port for consistency (ensure it’s not blocked by a firewall)
SSH Configuration
```hcl
  ssh_username           = local.default_user
  ssh_password           = local.default_user_ssh_pass
  ssh_timeout            = "20m"
  ssh_handshake_attempts = 20
```
What this does: Packer uses these credentials to connect after OS installation completes. The preseed file creates this user with the password from Vault.
Template Metadata
```hcl
  template_name        = var.vm_name
  template_description = <<-EOT
    - Debian 13.2.0 (Trixie) Base Template
    - Built: ${local.timestamp}
    - Users: ${local.default_user}, ${local.ansible_user} (both with sudo NOPASSWD)
    - Custom LVM partition schema
    - Serial console configured
    - Built with: Packer + HashiCorp Vault
  EOT

  qemu_agent = true
  os         = "l26"      # Linux kernel 2.6+ (generic Linux)
  serials    = ["socket"] # Enable serial console
  tags       = "template;debian-trixie;base;packer"
```
Step 3: Create the Preseed Configuration
The preseed file automates the Debian installation. Create `http/preseed.cfg.pkrtpl`:
```
#### Debian Trixie Preseed Configuration Template
# This file is processed by Packer's templatefile() function

### Localization
d-i debian-installer/locale string en_US.UTF-8
d-i keyboard-configuration/xkb-keymap select us

### Network Configuration
d-i netcfg/choose_interface select auto
d-i netcfg/get_hostname string debian-trixie
d-i netcfg/get_domain string localdomain
d-i netcfg/disable_autoconfig boolean false

### Mirror Settings
d-i mirror/country string manual
d-i mirror/http/hostname string deb.debian.org
d-i mirror/http/directory string /debian
d-i mirror/http/proxy string

### Account Setup
# Root password disabled (locked account)
d-i passwd/root-login boolean false

# Default user
d-i passwd/user-fullname string ${default_user}
d-i passwd/username string ${default_user}
d-i passwd/user-password-crypted password ${default_user_password}

### Clock and Timezone
d-i clock-setup/utc boolean true
d-i time/zone string UTC

### Partitioning
d-i partman-auto/method string lvm
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-md/device_remove_md boolean true
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm_nooverwrite boolean true

# Custom LVM recipe
d-i partman-auto/expert_recipe string \
    boot-root :: \
        512 512 512 ext4 \
            $primary{ } $bootable{ } \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext4 } \
            mountpoint{ /boot } \
        . \
        1024 1024 1024 linux-swap \
            $lvmok{ } \
            method{ swap } format{ } \
        . \
        10240 10240 10240 ext4 \
            $lvmok{ } \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext4 } \
            mountpoint{ / } \
        . \
        5120 5120 5120 ext4 \
            $lvmok{ } \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext4 } \
            mountpoint{ /var/log } \
        . \
        2048 2048 2048 ext4 \
            $lvmok{ } \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext4 } \
            mountpoint{ /tmp } \
        . \
        1024 2048 -1 ext4 \
            $lvmok{ } \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext4 } \
            mountpoint{ /var } \
        .

d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true

### Package Selection
tasksel tasksel/first multiselect standard, ssh-server
d-i pkgsel/include string openssh-server cloud-init sudo
d-i pkgsel/upgrade select full-upgrade

### GRUB Installation
d-i grub-installer/only_debian boolean true
d-i grub-installer/bootdev string default

### Post-Installation Script
d-i preseed/late_command string \
    in-target sh -c 'echo "${default_user} ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/${default_user}'; \
    in-target chmod 0440 /etc/sudoers.d/${default_user}; \
    in-target useradd -m -s /bin/bash ${ansible_user}; \
    in-target usermod -aG sudo ${ansible_user}; \
    in-target sh -c 'echo "${ansible_user}:${ansible_user_password}" | chpasswd -e'; \
    in-target mkdir -p /home/${ansible_user}/.ssh; \
    in-target sh -c 'echo "${ansible_user_ssh_key}" > /home/${ansible_user}/.ssh/authorized_keys'; \
    in-target chown -R ${ansible_user}:${ansible_user} /home/${ansible_user}/.ssh; \
    in-target chmod 700 /home/${ansible_user}/.ssh; \
    in-target chmod 600 /home/${ansible_user}/.ssh/authorized_keys; \
    in-target sh -c 'echo "${ansible_user} ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/${ansible_user}'; \
    in-target chmod 0440 /etc/sudoers.d/${ansible_user}; \
    in-target systemctl enable ssh;

### Finish Installation
d-i finish-install/reboot_in_progress note
```
Key sections explained:
- Partitioning: Creates a custom LVM layout with:
  - 512 MB `/boot` (ext4)
  - 1 GB swap
  - 10 GB `/` (root)
  - 5 GB `/var/log` (separate partition for logs)
  - 2 GB `/tmp`
  - Remaining space for `/var`
- Post-installation: Creates both users with SSH keys and sudo access
Why Separate /var/log? Isolating logs prevents runaway log files from filling the root partition and crashing the system. This is a best practice for production systems.
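Once the first clone boots, the resulting layout can be verified with standard LVM and filesystem tools — plain read-only commands, shown here as a quick sanity check:

```shell
# Show the logical volumes created by the preseed recipe
sudo lvs

# Confirm /var/log, /tmp, and /var are separate mounts, not part of /
df -h / /var/log /tmp /var
findmnt /var/log
```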
Step 4: Build Provisioners
The build block in debian-template.pkr.hcl defines post-installation configuration:
System Updates
```hcl
provisioner "shell" {
  inline = [
    "echo 'Updating system packages...'",
    "sudo apt-get update",
    "sudo DEBIAN_FRONTEND=noninteractive apt-get upgrade -y",
    "sudo apt-get install -y apt-transport-https wget curl ca-certificates"
  ]
}
```
Install Essential Tools
```hcl
provisioner "shell" {
  inline = [
    "sudo apt-get install -y tmux htop btop vim",
    "sudo apt-get install -y net-tools dnsutils iputils-ping iproute2",
    "sudo apt-get install -y nfs-common",
    "sudo apt-get install -y qemu-guest-agent cloud-init",
    "sudo systemctl enable qemu-guest-agent"
  ]
}
```
Configure System Settings
```hcl
provisioner "shell" {
  inline = [
    "sudo timedatectl set-timezone YOUR_TIMEZONE" # e.g., America/New_York
  ]
}
```
Configure Serial Console for Proxmox
```hcl
provisioner "shell" {
  inline = [
    "echo 'Configuring GRUB for serial console...'",
    "sudo sed -i '/^GRUB_CMDLINE_LINUX=/d' /etc/default/grub",
    "sudo sh -c 'echo \"GRUB_CMDLINE_LINUX=\\\"quiet console=tty0 console=ttyS0,115200\\\"\" >> /etc/default/grub'",
    "sudo update-grub"
  ]
}
```
What this does: Enables the serial console in Proxmox web UI, allowing you to access the VM console through the browser.
Serial Console Benefits: With serial console configured, you can access the VM’s console directly from the Proxmox web interface without needing VNC or SPICE. This is particularly useful for troubleshooting boot issues.
Template Cleanup
```hcl
provisioner "shell" {
  inline = [
    "sudo apt-get autoremove -y",
    "sudo apt-get clean",
    "sudo rm -rf /var/lib/apt/lists/*",
    "sudo truncate -s 0 /etc/machine-id",
    "sudo rm -f /var/lib/dbus/machine-id",
    "sudo ln -s /etc/machine-id /var/lib/dbus/machine-id",
    "sudo sync"
  ]
}
```
Why this matters: Resetting `/etc/machine-id` ensures each VM cloned from this template generates a unique machine ID on first boot.
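If you also want clones to behave as fresh cloud-init instances, resetting cloud-init state in the same cleanup provisioner is worth considering — an optional addition, using the standard `cloud-init clean` subcommand:

```shell
# Reset cloud-init so each clone runs as a first boot
# (--logs additionally removes /var/log/cloud-init*.log)
sudo cloud-init clean --logs
```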
Step 5: Build the Template
5.1 Initialize Packer
```shell
cd debian-trixie/packer
packer init .
```
This downloads the Proxmox plugin.
5.2 Validate Configuration
```shell
export VAULT_ADDR="https://vault.example.com:8200"
export VAULT_TOKEN="your-vault-token"

packer validate \
  -var="proxmox_node=YOUR_NODE_NAME" \
  -var="bridge=vmbr0" \
  .
```
5.3 Build the Template
```shell
packer build \
  -var="proxmox_node=pve-node01" \
  -var="bridge=vmbr0" \
  -var="vlan_tag=0" \
  .
```
Build process:
- ✓ Downloads Debian ISO to Proxmox (first run only)
- ✓ Creates VM with specified ID
- ✓ Boots VM and serves preseed via HTTP
- ✓ Automated installation (10-15 minutes)
- ✓ SSH provisioning (5-10 minutes)
- ✓ Converts VM to template
5.4 Monitor the Build
Watch the build progress:
- Packer output shows each step
- Access the Proxmox console to see installation progress
- Check `/var/log/syslog` on the VM if something fails
Step 6: Using the Template
Clone a VM from Template
```shell
# Via Proxmox CLI
qm clone 9000 100 --name test-vm --full

# Or use the Proxmox web UI: Right-click template → Clone
```
Customize with Cloud-Init
- In Proxmox UI, select the cloned VM
- Navigate to Cloud-Init tab
- Set:
- User: Your username
- Password: Your password (or leave blank for key-only)
- SSH public key: Your public key
- IP Config: Static IP or DHCP
- Click Regenerate Image
- Start the VM
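The same cloud-init settings can be applied from the CLI with `qm set` — a sketch using standard Proxmox options (`--ciuser`, `--sshkeys`, `--ipconfig0`); the VMID, username, and addresses are placeholders:

```shell
# Configure cloud-init on the cloned VM (ID 100) from the CLI
qm set 100 --ciuser admin --sshkeys ~/.ssh/id_rsa.pub
qm set 100 --ipconfig0 ip=192.168.1.50/24,gw=192.168.1.1
# Or use DHCP instead of a static address:
# qm set 100 --ipconfig0 ip=dhcp
qm start 100
```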
Troubleshooting
Issue: Packer can’t reach HTTP server
Symptoms: Installation hangs at “Loading preseed…”
Solution:
```shell
# Verify HTTP server is accessible from Proxmox
curl http://YOUR_IP:8100/preseed.cfg

# Check firewall
sudo ufw allow 8100/tcp # If using UFW

# Verify network connectivity
ping YOUR_IP # From the Proxmox node
```
Issue: Vault authentication fails
Symptoms: Error: Failed to read secret from Vault
Solution:
```shell
# Verify Vault connection
vault status

# Check token permissions
vault token lookup

# Verify the secret exists
vault kv get kv/proxmox
```
Issue: SSH timeout during build
Symptoms: Timeout waiting for SSH
Solution:
- Check the user was created: review the preseed `late_command`
- Verify the SSH password in Vault matches what the preseed uses
- Check if the VM got an IP: `qm guest cmd VMID network-get-interfaces`
Issue: Template conversion fails
Symptoms: Build completes but template is not created
Solution:
- Check Proxmox user permissions (PVEVMAdmin role)
- Verify storage pool has space
- Check Proxmox logs: `/var/log/pve/tasks/`
Advanced Customization
Add Additional Software
Edit the provisioner section to install more packages:
```hcl
provisioner "shell" {
  inline = [
    "sudo apt-get install -y docker.io",
    "sudo systemctl enable docker",
    "sudo usermod -aG docker ${local.ansible_user}"
  ]
}
```
Customize Partitioning
Modify the preseed `partman-auto/expert_recipe` to change partition sizes or add new partitions.
Multiple Templates
Create variants by using different variable files:
```shell
# Small template
packer build -var-file="small.pkrvars.hcl" .

# Large template
packer build -var-file="large.pkrvars.hcl" .
```
Security Best Practices
- Never commit Vault tokens to version control
- Use a Vault token with minimal permissions - create a policy:

  ```hcl
  path "kv/data/proxmox" {
    capabilities = ["read"]
  }
  ```
- Rotate API tokens regularly in Proxmox
- Use SSH keys instead of passwords where possible
- Disable the template after creation (prevent accidental boot)
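To put the minimal-permissions advice into practice, write the read-only policy to a file, register it, and mint a token bound to it — a standard Vault CLI workflow; the policy name `packer-readonly` and the 24h TTL are example choices:

```shell
# Save the read-only policy and register it with Vault
cat > packer-readonly.hcl <<'EOF'
path "kv/data/proxmox" {
  capabilities = ["read"]
}
EOF
vault policy write packer-readonly packer-readonly.hcl

# Create a token limited to that policy (use this as VAULT_TOKEN)
vault token create -policy=packer-readonly -ttl=24h
```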
Conclusion
You now have a fully automated, reproducible process for creating Debian VM templates. This Infrastructure as Code approach ensures consistency across your environment and makes it easy to rebuild templates with updates.
Next steps:
- Integrate with Ansible for further customization
- Create CI/CD pipeline for automated template updates
- Build Ubuntu templates using similar methodology (see companion guide)
Complete File Reference
Access all files for this guide at: https://github.com/tzalistar/packer-templates
The repository includes:
- ✓ Complete Packer HCL configuration
- ✓ Preseed template
- ✓ Example variable files
- ✓ Vault policy examples
- ✓ Helper scripts
Have questions or improvements? Feel free to reach out or submit a pull request to the repository!
