Packer Image Building for Servers
Packer is an open-source tool for creating identical machine images for multiple platforms from a single source configuration. With Packer, you define infrastructure images once in code, build them consistently, and deploy across different cloud providers and hypervisors. This guide covers HCL template syntax, builders for AWS, Docker, and QEMU, provisioners for software installation, post-processors for image tagging and distribution, and CI/CD integration.
Table of Contents
- Packer Overview
- Packer Template Syntax
- AWS Builder Configuration
- Docker Builder
- QEMU Builder
- Provisioners
- Post-Processors
- Building Images
- CI/CD Integration
- Conclusion
Packer Overview
Packer automates the creation of machine images for cloud providers and virtualization platforms. Instead of manually building images through console or scripts, Packer defines image specifications in code, builds them consistently, and enables version control of your infrastructure.
Key benefits:
- Consistency: Build identical images every time
- Speed: Automate image building and boot faster with pre-configured images
- Version Control: Store image definitions in git
- Multi-Cloud: Build for AWS, Azure, GCP, and Docker from a single template
- Testing: Validate images before deployment
- Reproducibility: Rebuild exactly when needed
Packer workflow:
1. Define template (HCL or JSON)
↓
2. Validate template
↓
3. Build image
├── Create builder environment
├── Run provisioners (install software)
├── Run post-processors (optimize)
└── Save image
↓
4. Deploy image
Packer Template Syntax
Packer uses HCL for readable template definitions.
Basic template structure:
# variables.pkr.hcl
variable "aws_region" {
type = string
default = "us-east-1"
}
variable "instance_type" {
type = string
default = "t3.medium"
}
variable "ami_prefix" {
type = string
default = "packer-example"
}
variable "environment" {
type = string
default = "production"
}
# main.pkr.hcl
source "amazon-ebs" "ubuntu" {
ami_name = "${var.ami_prefix}-${formatdate("YYYY-MM-DD-hhmm", timestamp())}"
instance_type = var.instance_type
region = var.aws_region
source_ami_filter {
filters = {
name = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"
root-device-type = "ebs"
virtualization-type = "hvm"
}
owners = ["099720109477"] # Canonical
most_recent = true
}
ssh_username = "ubuntu"
tags = {
Name = "${var.ami_prefix}-image"
Environment = var.environment
BuildTime = timestamp()
}
snapshot_tags = {
Name = "${var.ami_prefix}-snapshot"
}
run_tags = {
Name = "Packer Builder"
}
}
build {
sources = ["source.amazon-ebs.ubuntu"]
provisioner "shell" {
inline = [
"echo 'Building image...'",
"sudo apt-get update",
"sudo apt-get install -y nginx"
]
}
}
Variable definition and types:
# variables.pkr.hcl
variable "subnet_id" {
type = string
description = "Subnet ID for builder instance"
sensitive = true
}
variable "instance_count" {
type = number
description = "Number of instances to build"
default = 1
}
variable "tags" {
type = map(string)
description = "Tags to apply to AMI"
default = {
ManagedBy = "Packer"
}
}
variable "packages" {
type = list(string)
description = "Packages to install"
default = [
"nginx",
"curl",
"wget"
]
}
# Use variables in the template. Note that Packer has no
# Terraform-style "output" blocks; use locals for computed values:
locals {
timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}
# In a source block, merge default and per-image tags:
tags = merge(var.tags, {
Name = "My-Image"
})
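Values for these variables are usually supplied through a `.pkrvars.hcl` file or `-var` flags. A hypothetical `prod.pkrvars.hcl` for the variables defined above (the specific values are illustrative):

```shell
# Hypothetical prod.pkrvars.hcl overriding the defaults defined above
cat > prod.pkrvars.hcl <<'EOF'
aws_region    = "us-west-2"
instance_type = "t3.large"
environment   = "production"
packages      = ["nginx", "curl", "wget", "jq"]
EOF

# Later sources win, roughly: defaults < *.auto.pkrvars.hcl
# < -var-file < -var flags (in the order given):
# packer build -var-file=prod.pkrvars.hcl -var 'environment=staging' .
```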
AWS Builder Configuration
Build Amazon Machine Images (AMIs) with Packer.
Basic AMI builder:
source "amazon-ebs" "example" {
ami_name = "my-app-${formatdate("YYYYMMDDhhmm", timestamp())}"
instance_type = "t3.micro"
region = "us-east-1"
source_ami_filter {
filters = {
name = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"
root-device-type = "ebs"
}
owners = ["099720109477"]
most_recent = true
}
ssh_username = "ubuntu"
}
build {
sources = ["source.amazon-ebs.example"]
provisioner "shell" {
script = "scripts/setup.sh"
}
}
Advanced AMI configuration:
source "amazon-ebs" "production" {
ami_name = "prod-app-${formatdate("YYYY-MM-DD", timestamp())}"
instance_type = var.instance_type
region = var.aws_region
availability_zone = var.availability_zone
subnet_id = var.subnet_id
security_group_id = var.security_group_id
associate_public_ip_address = false
# Source AMI from custom image
source_ami = var.base_ami_id
# or use filter
source_ami_filter {
filters = {
name = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"
root-device-type = "ebs"
virtualization-type = "hvm"
state = "available"
}
owners = ["099720109477"]
most_recent = true
}
ssh_username = "ubuntu"
ssh_timeout = "10m"
# EBS configuration (amazon-ebs has no root_volume_size argument;
# resize the root volume via launch_block_device_mappings)
ebs_optimized = true
encrypt_boot = true
kms_key_id = var.kms_key_id
launch_block_device_mappings {
device_name = "/dev/sda1"
volume_size = 50
volume_type = "gp3"
delete_on_termination = true
}
# Tagging
ami_description = "Production application image"
tags = {
Name = "prod-app"
Environment = "production"
BuildDate = timestamp()
Version = var.image_version
}
snapshot_tags = {
Name = "prod-app-snapshot"
}
run_tags = {
Name = "Packer Builder Instance"
}
# Permissions
ami_users = var.ami_users
# ami_groups = ["all"] # "all" is the only valid group and makes the AMI public
snapshot_users = var.snapshot_users
# Cleanup
force_deregister = true
force_delete_snapshot = true
}
build {
sources = ["source.amazon-ebs.production"]
provisioner "file" {
source = "app/"
destination = "/tmp/app"
}
provisioner "shell" {
script = "scripts/install.sh"
}
}
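`timestamp()` returns the current UTC time, so the `ami_name` above renders as something like `prod-app-2024-01-15` (date varies per build). The same value can be reproduced in shell, which is handy when scripting lookups of Packer-built AMIs:

```shell
# Reproduce formatdate("YYYY-MM-DD", timestamp()) with the date command
AMI_NAME="prod-app-$(date -u +%Y-%m-%d)"
echo "$AMI_NAME"
```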
Docker Builder
Create Docker images with Packer.
Basic Docker builder:
source "docker" "ubuntu" {
image = "ubuntu:22.04"
commit = true
changes = [
"ENTRYPOINT /usr/bin/myapp",
"EXPOSE 8080",
"ENV APP_ENV=production"
]
}
build {
sources = ["source.docker.ubuntu"]
provisioner "shell" {
inline = [
"apt-get update",
"apt-get install -y nginx"
]
}
}
Advanced Docker configuration:
source "docker" "application" {
image = "ubuntu:22.04"
commit = true
# Run in Docker
run_command = [
"-d",
"-i",
"-t",
"--",
"{{.Image}}"
]
# Directory for uploaded files inside the container
container_dir = "/tmp"
# Export to a tar archive instead of committing
# (mutually exclusive with commit = true above)
# export_path = "image.tar"
# Device access
privileged = false
# Volume mounts (host path = container path)
volumes = {
"/host/tmp" = "/tmp"
}
changes = [
"ENV APP_VERSION=${var.app_version}",
"WORKDIR /app",
"EXPOSE 8080 8443",
"ENTRYPOINT [\"/usr/bin/app\"]"
]
}
build {
sources = ["source.docker.application"]
provisioner "shell" {
inline = [
"apt-get update",
"apt-get install -y nginx curl"
]
}
provisioner "file" {
source = "app/"
destination = "/app"
}
post-processor "docker-tag" {
repository = "myrepo/myapp"
tags = ["latest", "v${var.version}"]
}
post-processor "docker-push" {
ecr_login = true
login_server = var.registry_url
aws_access_key = var.aws_access_key
aws_secret_key = var.aws_secret_key
}
}
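When pushing to Amazon ECR, the `docker-tag` repository must be the full ECR repository URI and `login_server` must point at the same registry. A minimal sketch, with a hypothetical account ID and region:

```hcl
post-processor "docker-tag" {
  # Hypothetical account/region; must match an existing ECR repository
  repository = "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp"
  tags       = ["latest"]
}
post-processor "docker-push" {
  ecr_login    = true
  login_server = "https://123456789012.dkr.ecr.us-east-1.amazonaws.com/"
}
```

With `ecr_login = true`, Packer obtains the registry credentials itself, so no `docker login` step is needed in CI.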
QEMU Builder
Build images for KVM and QEMU hypervisors.
Basic QEMU builder:
source "qemu" "ubuntu" {
iso_url = "https://releases.ubuntu.com/22.04/ubuntu-22.04.1-live-server-amd64.iso"
iso_checksum = "file:https://releases.ubuntu.com/22.04/SHA256SUMS"
output_directory = "output-qemu"
vm_name = "ubuntu-22.04.qcow2"
accelerator = "kvm"
machine_type = "pc"
cpus = 2
memory = 2048
disk_size = 10000
disk_compression = true
disk_image = false
format = "qcow2"
http_directory = "http"
# Ubuntu 22.04 live-server boots via GRUB (no isolinux <tab> menu);
# enter the GRUB shell and pass the autoinstall seed location
boot_command = [
"c<wait>",
"linux /casper/vmlinuz autoinstall ds='nocloud-net;s=http://{{.HTTPIP}}:{{.HTTPPort}}/' ---<enter><wait>",
"initrd /casper/initrd<enter><wait>",
"boot<enter>"
]
boot_wait = "5s"
headless = true
shutdown_command = "echo 'packer' | sudo -S shutdown -P now"
ssh_username = "ubuntu"
ssh_password = "ubuntu"
ssh_timeout = "20m"
}
build {
sources = ["source.qemu.ubuntu"]
provisioner "shell" {
inline = [
"echo 'Building QEMU image'",
"sudo apt-get update",
"sudo apt-get install -y nginx"
]
}
}
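The `http_directory` above must contain the cloud-init seed files the boot command points at: an autoinstall `user-data` plus an empty `meta-data` file. A minimal, hypothetical `http/user-data` that creates the `ubuntu` user the SSH settings expect might look like:

```yaml
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: ubuntu-server
    username: ubuntu
    # Crypted hash of "ubuntu"; generate with: mkpasswd --method=SHA-512
    password: "$6$rounds=4096$SALT$HASHED_PASSWORD_PLACEHOLDER"
  ssh:
    install-server: true
    allow-pw: true
```

The password must be a crypted hash, not plaintext, and it should match the `ssh_password` in the source block so Packer can connect after installation.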
Provisioners
Provisioners install and configure software inside the builder instance or container.
Shell provisioner:
provisioner "shell" {
# Inline commands
inline = [
"apt-get update",
"apt-get install -y nginx curl wget",
"systemctl enable nginx"
]
}
# Or script file
provisioner "shell" {
script = "scripts/install-packages.sh"
}
# Multiple scripts
provisioner "shell" {
scripts = [
"scripts/setup.sh",
"scripts/security.sh",
"scripts/cleanup.sh"
]
}
# With environment variables
provisioner "shell" {
environment_vars = [
"APP_VERSION=1.0.0",
"ENVIRONMENT=production"
]
inline = [
"echo Version is $APP_VERSION"
]
}
File provisioner:
# Copy file
provisioner "file" {
source = "app/config.yml"
destination = "/tmp/config.yml"
}
# Copy directory
provisioner "file" {
source = "app/"
destination = "/opt/app"
}
# Copy to local (from builder)
provisioner "file" {
source = "/opt/app/output.txt"
destination = "local/output.txt"
direction = "download"
}
Ansible provisioner:
provisioner "ansible" {
playbook_file = "playbooks/main.yml"
extra_arguments = [
"-e", "environment=production",
"-e", "app_version=${var.app_version}"
]
ansible_env_vars = [
"ANSIBLE_HOST_KEY_CHECKING=False"
]
}
# With inventory
provisioner "ansible" {
playbook_file = "playbooks/main.yml"
user = "ubuntu"
local_port = 22
host_alias = "packer-instance"
}
Chef provisioner (note: the Chef provisioners are deprecated in recent Packer releases):
provisioner "chef-solo" {
version = "16.6.14"
cookbook_paths = ["cookbooks"]
run_list = [
"recipe[base::default]",
"recipe[nginx::default]"
]
}
Post-Processors
Post-processors optimize and distribute images.
Docker tag and push:
post-processor "docker-tag" {
repository = "myregistry.azurecr.io/myapp"
tags = ["latest", "v${var.version}"]
}
post-processor "docker-push" {
ecr_login = true
login_server = var.registry_url
aws_access_key = var.aws_access_key
aws_secret_key = var.aws_secret_key
}
Manifest for artifact tracking:
post-processor "manifest" {
output = "manifest.json"
strip_path = true
custom_data = {
build_time = timestamp()
build_version = var.image_version
source_ami = "${build.SourceAMI}"
}
}
Vagrant box:
post-processor "vagrant" {
output = "output/ubuntu-{{user `version`}}.box"
only = ["amazon-ebs.ubuntu"] # build name, without the "source." prefix
}
post-processor "vagrant-cloud" {
access_token = var.vagrant_cloud_token
box_checksum = var.box_checksum # "type:value" format, e.g. "sha256:<hex digest>"
box_tag = "myorg/ubuntu"
version = var.version
version_description = "Ubuntu 22.04 with Nginx"
}
Manifest with artifact info:
build {
sources = [
"source.amazon-ebs.ubuntu",
"source.docker.ubuntu"
]
# ... provisioners ...
post-processor "manifest" {
output = "manifest.json"
custom_data = {
image_version = var.image_version
build_date = timestamp()
git_commit = var.git_commit
}
}
}
Building Images
Build and test Packer images.
Validate template:
# Validate syntax and configuration
packer validate .
# Validate specific file
packer validate main.pkr.hcl
# With variable files
packer validate -var-file=prod.pkrvars.hcl .
Format code:
# Format HCL files
packer fmt .
# Check formatting
packer fmt -check .
Initialize template:
# Download required plugins
packer init .
# Upgrade plugins
packer init -upgrade .
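`packer init` reads plugin requirements from a `packer` block; without one there is nothing to install. A typical block covering the builders used in this guide:

```hcl
packer {
  required_version = ">= 1.7.0"
  required_plugins {
    amazon = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/amazon"
    }
    docker = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/docker"
    }
    qemu = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/qemu"
    }
  }
}
```

Commit this block alongside your templates so CI runners install the same plugin versions as local builds.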
Build image:
# Build using defaults
packer build .
# Build specific sources
packer build -only='amazon-ebs.ubuntu' .
# With variable overrides
packer build \
-var "aws_region=us-west-2" \
-var "instance_type=t3.small" \
.
# With variable file
packer build -var-file=prod.pkrvars.hcl .
# Debug mode (keep builder instance running)
packer build -debug .
# Force build (remove existing)
packer build -force .
# Inspect template components (variables, builders, provisioners)
packer inspect main.pkr.hcl
Inspect output:
# After build, check manifest
cat manifest.json | jq .
# Output shows:
# {
# "builds": [
# {
# "name": "amazon-ebs.ubuntu",
# "builder_type": "amazon-ebs",
# "build_time": 1234567890,
# "files": [],
# "artifact_id": "us-east-1:ami-0123456789abcdef0",
# "artifact_file_id": "AMIid"
# }
# ],
# "last_run_uuid": "abc123..."
# }
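The `artifact_id` for an amazon-ebs build has the form `region:ami-id`; splitting it in shell is a common step before handing the AMI to downstream tooling such as Terraform (the ID below is a placeholder):

```shell
# In practice: ARTIFACT_ID=$(jq -r '.builds[-1].artifact_id' manifest.json)
ARTIFACT_ID="us-east-1:ami-0123456789abcdef0"

# Strip from the first ":" onward for the region, up to it for the AMI ID
REGION="${ARTIFACT_ID%%:*}"
AMI_ID="${ARTIFACT_ID#*:}"
echo "$REGION $AMI_ID"   # us-east-1 ami-0123456789abcdef0
```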
CI/CD Integration
Automate image builds in CI/CD pipelines.
GitHub Actions:
name: Build Packer Image
on:
  push:
    branches: [main]
    paths:
      - 'packer/**'
      - '.github/workflows/packer.yml'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: hashicorp/setup-packer@main
      - name: Validate
        run: packer validate packer/
      - name: Format check
        run: packer fmt -check packer/
      - name: Build image
        run: packer build -var-file=packer/prod.pkrvars.hcl packer/
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          PACKER_LOG: 1
      - name: Upload manifest
        uses: actions/upload-artifact@v3
        with:
          name: packer-manifest
          path: manifest.json
GitLab CI:
stages:
  - validate
  - build
variables:
  PACKER_VERSION: "1.8.0"
before_script:
  - apt-get update && apt-get install -y wget unzip
  - wget https://releases.hashicorp.com/packer/${PACKER_VERSION}/packer_${PACKER_VERSION}_linux_amd64.zip
  - unzip packer_${PACKER_VERSION}_linux_amd64.zip
  - mv packer /usr/local/bin/
validate:
  stage: validate
  script:
    - packer validate packer/
    - packer fmt -check packer/
build_ami:
  stage: build
  script:
    - packer build -var-file=packer/prod.pkrvars.hcl packer/
  artifacts:
    paths:
      - manifest.json
  only:
    - main
Jenkins:
pipeline {
  agent any
  environment {
    AWS_REGION      = 'us-east-1'
    AWS_CREDENTIALS = credentials('aws-credentials')
  }
  stages {
    stage('Validate') {
      steps {
        sh 'packer validate packer/'
      }
    }
    stage('Build') {
      steps {
        sh '''
          packer build \
            -var-file=packer/prod.pkrvars.hcl \
            packer/
        '''
      }
    }
    stage('Archive') {
      steps {
        archiveArtifacts artifacts: 'manifest.json'
      }
    }
  }
}
Conclusion
Packer standardizes image creation across platforms, enabling reproducible infrastructure deployments. By defining images as code, validating templates, and integrating with CI/CD pipelines, you create a foundation for consistent, rapidly deployable infrastructure. Combine Packer-built images with Terraform for Infrastructure-as-Code deployments that are reliable, auditable, and scalable.


