cloud-init VM Test Harness
This repository provides a small, explicit VM-based test harness built on libvirt, QEMU, and cloud-init. It is designed for repeatable, disposable test environments without requiring virt-manager or any GUI tooling.
The primary use case is testing system-level software in clean environments. The original motivation was a Go-based replacement for systemd-resolved implementing secure DNS transports (DoH, DoT, DoQ), but the tooling is generic.
Design Goals
- Disposable test VMs
- No GUI dependencies
- No virt-manager required
- Reproducible builds from a known base image
- Minimal implicit behavior
- Fast iteration
- CLI-first workflow
Tested host systems
(verified with the included test directory and Debian's generic cloud images)
- OpenSUSE Tumbleweed Linux ✅
- Asahi Fedora Linux ✅
It is recommended to run the included test directory first to confirm the harness works as intended. If it works, the VM will print "SUCCESS" in its logs, bracketed above and below by lines of # characters, like so:
######################################################
SUCCESS
######################################################
High-Level Workflow
- Maintain a read-only base qcow2 image
- Create qcow2 overlays for each test run
- Inject configuration via cloud-init (NoCloud) ISO
- Boot using UEFI (required)
- Run tests or setup logic
- Destroy the VM and all per-run artifacts
Each run starts from a clean environment.
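In practice, a full cycle with the included test directory looks like this:
./update.sh            # fetch or refresh the base image (first run, or when stale)
./bring-up.sh test     # build the cloud-init ISO, create an overlay, boot, attach console
./bring-down.sh test   # destroy the VM and delete all per-run artifacts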
Directory Layout
.
├── image-info.conf # Base image configuration (single source of truth)
├── update.sh # Update base cloud image
├── bring-up.sh # Build cloud-init, create overlay, boot VM
├── bring-down.sh # Tear down VM and clean up artifacts
├── build.sh # Build cloud-init ISO from a directory
├── LICENSE.md
├── README.md
├── vm1/
│ ├── user-data
│ └── meta-data
├── vm2/
│ ├── user-data
│ ├── meta-data
│   └── include/       # files placed in the generated ISO's root; mount by label `cidata` inside the guest
│       └── example.sh
└── vm3/
├── user-data
└── meta-data
All scripts resolve paths relative to their own location, not $HOME. This makes execution predictable under sudo, cron, or automation.
Prerequisites
- libvirt
- QEMU/KVM
- virt-install
- xorriso (for ISO creation)
- cloud-init inside the guest image
The host must have hardware virtualization enabled.
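One quick way to confirm virtualization support on the host (this check is not part of the harness):
grep -cE '(vmx|svm)' /proc/cpuinfo    # a non-zero count means VT-x/AMD-V is exposed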
Base Image Configuration
Base image and VM defaults are defined in image-info.conf:
BASE_URL="https://cloud.debian.org/images/cloud/trixie/latest"
IMAGE="debian-13-genericcloud-amd64.qcow2"
VM_MEMORY=2048
VM_VCPUS=2
VM_OS_VARIANT="debian13"
# default NAT config
VM_NET_MODE="libvirt"
VM_NETWORK="default"
# macvtap config
#VM_NET_MODE="macvtap"
#VM_NET_IFACE="eno1"
# dual NIC config
#VM_NET_MODE="dual"
#VM_NET_IFACE="eno1"
#VM_NETWORK="default"
Key points:
- The resolved base image path is /var/lib/libvirt/ro-images/<IMAGE> and is treated as read-only. Per-run VMs use qcow2 overlays backed by it.
- VM_MEMORY, VM_VCPUS, and VM_OS_VARIANT control the default hardware profile for new VMs.
- VM_NET_MODE selects between libvirt (NAT), macvtap (bridged), or dual (one of each). VM_NET_IFACE is required for the bridged interface, and VM_NETWORK chooses the libvirt network when applicable.
- After updating the base image, all existing overlays created from the previous version must be deleted.
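For reference, the overlay mechanism boils down to a single qemu-img call; the overlay path below is illustrative, not necessarily what the scripts use:
qemu-img create -f qcow2 \
  -b /var/lib/libvirt/ro-images/debian-13-genericcloud-amd64.qcow2 -F qcow2 \
  /var/lib/libvirt/images/test.qcow2
Writes go to the overlay; the backing image is never modified, which is why it can stay read-only.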
Scripts
update.sh
Optional helper to fetch and update the base cloud image.
Responsibilities:
- Fetch SHA512SUMS from the configured BASE_URL
- Compare against the locally stored checksum
- Download the qcow2 image only when it has changed
Usage:
./update.sh
This script owns the base image lifecycle only and does not interact with VMs.
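Conceptually, the update amounts to the following (a simplified sketch, not the script verbatim):
. ./image-info.conf                               # provides BASE_URL and IMAGE
curl -fsSL "$BASE_URL/SHA512SUMS" -o SHA512SUMS.new
if ! cmp -s SHA512SUMS.new SHA512SUMS 2>/dev/null; then
  curl -fSL "$BASE_URL/$IMAGE" -o "$IMAGE"        # checksum changed: re-download
  mv SHA512SUMS.new SHA512SUMS
else
  rm SHA512SUMS.new                               # image is already current
fi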
build.sh
Builds a NoCloud cloud-init ISO from a directory containing user-data and meta-data.
Expected structure:
<name>/
├── user-data
└── meta-data
Usage:
./build.sh <name>
Output:
/var/lib/libvirt/cloud-init/<name>.iso
This script only generates the ISO and performs no VM operations.
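The seed ISO is a standard NoCloud image; the cidata volume label is what cloud-init looks for. The invocation is roughly equivalent to the following (the script's exact flags may differ):
xorriso -as mkisofs -o /var/lib/libvirt/cloud-init/test.iso \
  -V cidata -J -r test/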
bring-up.sh
Creates and boots a disposable VM.
Steps performed:
- Builds the cloud-init ISO using build.sh
- Destroys and undefines any existing VM with the same name
- Creates a qcow2 overlay backed by the configured base image
- Boots the VM using virt-install
- Attaches to the serial console
Usage:
./bring-up.sh <name>
UEFI is explicitly enabled to match virt-manager behavior and avoid BIOS-related boot failures.
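For orientation, booting an overlay plus seed ISO under UEFI with virt-install looks roughly like this (a sketch; the script's exact options may differ):
virt-install \
  --name test \
  --memory "$VM_MEMORY" --vcpus "$VM_VCPUS" --os-variant "$VM_OS_VARIANT" \
  --boot uefi \
  --disk path=/var/lib/libvirt/images/test.qcow2,format=qcow2 \
  --disk path=/var/lib/libvirt/cloud-init/test.iso,device=cdrom \
  --network network="$VM_NETWORK" \
  --import --graphics none --console pty,target_type=serial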
bring-down.sh
Destroys and cleans up a VM and all associated artifacts.
Typical actions:
- Destroy the running VM (if present)
- Undefine the domain
- Remove the qcow2 overlay
- Remove the cloud-init ISO
Usage:
./bring-down.sh <name>
This leaves the system in a clean state.
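The equivalent manual teardown, should you ever need it (the overlay and ISO paths are assumptions; check the script for the real ones):
virsh destroy test 2>/dev/null || true    # stop the VM if it is running
virsh undefine test --nvram               # remove the domain (UEFI VMs keep NVRAM)
rm -f /var/lib/libvirt/images/test.qcow2 /var/lib/libvirt/cloud-init/test.iso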
cloud-init Usage
cloud-init is used in NoCloud mode via an attached ISO.
- user-data contains configuration, packages, scripts, or test logic
- meta-data defines instance identity
- Your VM directory may contain an include/ directory. Any files placed inside it are copied into the root of the ISO, giving the VM access to additional files. For example, add include/test.sh, then run mount --mkdir -L cidata /mnt/cidata followed by /mnt/cidata/test.sh to call it.
cloud-init runs once on first boot. The VM can be configured to power off automatically after completing tests.
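As an illustration, a minimal user-data that prints the success banner and then powers off might look like this (a sketch, not the repo's actual test configuration):
#cloud-config
runcmd:
  - echo "######################################################"
  - echo "SUCCESS"
  - echo "######################################################"
power_state:
  mode: poweroff
  condition: true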
Logging
Included is a copy-log.sh script. Its syntax is ./copy-log.sh vm-name [/path/to/file], where /path/to/file is a path on the VM disk to a specific file to copy. If no path is supplied, the script pulls the cloud-init logs.
NOTE: You MUST shut down the VM before using the script. Do NOT bring it down with bring-down.sh, as that deletes the qcow2 overlay containing the logs. You can use sudo virsh shutdown vm-name to shut down a VM from the command line.
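For example (/var/log/syslog is just an illustrative target):
sudo virsh shutdown test              # graceful shutdown, preserving the overlay
./copy-log.sh test                    # no path: pulls the cloud-init logs
./copy-log.sh test /var/log/syslog    # copy one specific file from the disk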
Notes and Pitfalls
- UEFI is mandatory for modern Debian cloud images when using virt-install
- qcow2 overlays must not outlive their backing image
- This harness intentionally avoids persistent state
Philosophy
This project favors:
- explicit configuration over hidden defaults
- disposable environments over mutable systems
- reproducibility over convenience
If something breaks, the solution is usually to delete it and start again.
Apt & HTTP caching
Because this harness relies on disposable VMs, the same packages and metadata are downloaded repeatedly across test runs. This is wasteful and quickly becomes a bottleneck, even on fast connections.
For this reason, using a local caching proxy is strongly recommended. While a generic HTTP proxy like Squid can work, a purpose-built package cache such as apt-cacher-ng is a significantly better fit for this workflow. It transparently caches .deb files and apt metadata.
In practice, apt-cacher-ng dramatically reduces network usage and speeds up repeated test runs when using this harness.
See the difference (apt-cacher-ng in use)
The following screenshot shows apt-cacher-ng statistics after repeated runs of this harness, illustrating the reduction in external network traffic once a cache is in place.
If you use Docker, you can deploy it with the following Compose file:
services:
  apt-cacher-ng:
    image: sameersbn/apt-cacher-ng
    container_name: apt-cacher-ng
    ports:
      - "3142:3142"
    volumes:
      - apt-cacher-ng:/var/cache/apt-cacher-ng
    restart: always

volumes:
  apt-cacher-ng:
This exposes the service on the Docker host’s IP address. When running under libvirt’s default NAT network, this is typically 192.168.122.1. See test-cached for an example of how to consume the cache.
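Inside a guest, pointing apt at the cache is a one-liner; this is one common approach, and test-cached shows how this repo actually wires it up:
# assumes the cache is reachable at the libvirt NAT gateway address
echo 'Acquire::http::Proxy "http://192.168.122.1:3142";' \
  > /etc/apt/apt.conf.d/01proxy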
Proof of concept
My test scripts for secure-dns-proxy are included in the sdp/ directory of the repo. They reference my own secure DNS servers, but the files are all plain text and covered by the same license, so you may edit them as you wish.
License
See LICENSE.md.
