65 changes: 65 additions & 0 deletions .github/workflows/example-charm-charmcraft-test.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,65 @@
name: Example Charm charmcraft test

on:
dwilding marked this conversation as resolved.
workflow_dispatch:
schedule:
- cron: '50 16 * * 2' # 16:50 UTC Tuesdays

permissions: {}

jobs:
# Discover one spread task per Python integration test module across every
# example charm. Each `spread/integration/<module>/task.yaml` becomes a
# separate matrix entry, so adding a new task.yaml automatically adds a job.
collect-spread-jobs:
runs-on: ubuntu-latest
outputs:
jobs: ${{ steps.collect.outputs.jobs }}
steps:
- uses: actions/checkout@v6
with:
persist-credentials: false
- name: Collect spread jobs
id: collect
run: |
jobs=$(find examples -path 'examples/*/spread/integration/*/task.yaml' -print \
| sort \
| jq -Rnc '[inputs | capture("examples/(?<charm>[^/]+)/spread/integration/(?<task>[^/]+)/task\\.yaml")]')
echo "jobs=${jobs}" >> "$GITHUB_OUTPUT"
echo "${jobs}" | jq .

integration:
name: ${{ matrix.charm }} / ${{ matrix.task }}
needs: collect-spread-jobs
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
include: ${{ fromJson(needs.collect-spread-jobs.outputs.jobs) }}
steps:
- uses: actions/checkout@v6
with:
persist-credentials: false
- name: Set up LXD
# `charmcraft test` packs the charm in a managed LXD VM. The Docker
# preinstalled on GHA runners drops LXD bridge traffic in the FORWARD
# chain; canonical/setup-lxd adds the LXD bridge to DOCKER-USER so the
# build VM has network access. It also sets the lxd daemon group to
# `adm` (already the runner user's group), so no `sg` wrapper is needed.
uses: canonical/setup-lxd@8c6a87bfb56aa48f3fb9b830baa18562d8bfd4ee # v1
with:
channel: 5.21/stable
- name: Install charmcraft
run: sudo snap install charmcraft --classic
- name: Fetch any charmlibs
working-directory: examples/${{ matrix.charm }}
run: |
if grep -Eq "^charm-libs:" charmcraft.yaml; then
charmcraft fetch-libs
fi
- name: Run spread test
working-directory: examples/${{ matrix.charm }}
# GitHub Actions sets CI=true, so charmcraft test uses the `ci`
# allocator (runs against the runner itself) instead of launching
# a nested LXD VM for the spread target.
run: charmcraft test "craft:ubuntu-24.04:spread/integration/${{ matrix.task }}"
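
The `find | jq` pipeline in the collect step above can be exercised on its own. A minimal sketch, feeding it a hard-coded `task.yaml` path instead of a real checkout, shows the matrix JSON it emits:

```shell
# Simulate the collect step's input: one task.yaml path per line.
printf '%s\n' \
  'examples/httpbin-demo/spread/integration/test_charm/task.yaml' \
| jq -Rnc '[inputs | capture("examples/(?<charm>[^/]+)/spread/integration/(?<task>[^/]+)/task\\.yaml")]'
# → [{"charm":"httpbin-demo","task":"test_charm"}]
```

Each object in the array becomes one `include:` entry in the `integration` job's matrix, with `charm` and `task` available as `matrix.charm` and `matrix.task`.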
2 changes: 1 addition & 1 deletion .github/workflows/example-charm-integration-tests.yaml
@@ -44,7 +44,7 @@ jobs:
fail-fast: false
matrix:
dir:
- examples/httpbin-demo
# httpbin-demo runs under example-charm-charmcraft-test.yaml instead.
- examples/k8s-1-minimal
- examples/k8s-2-configurable
- examples/k8s-3-postgresql
39 changes: 39 additions & 0 deletions docs/howto/set-up-continuous-integration-for-a-charm.md
@@ -111,3 +111,42 @@ The option `-p k8s` tells Concierge that we want a cloud managed by Canonical Ku
If your charm is a machine charm, use `-p machine` instead.

The "Upload logs" step assumes that your integration tests use Jubilant together with `pytest-jubilant`. See [How to write integration tests for a charm](#write-integration-tests-for-a-charm-view-juju-logs).

This single job runs every integration test module sequentially. As your suite grows, split tests across modules and run each module in its own CI job — see {ref}`write-integration-tests-for-a-charm-split-across-modules`.

(set-up-ci-charmcraft-test)=
## Run integration tests in parallel with `charmcraft test`

If you initialised your charm with `charmcraft init --profile test-machine` or `--profile test-kubernetes` (both currently experimental), your charm includes a `spread.yaml` and one `spread/integration/<module>/task.yaml` per test module. You can use `charmcraft test` in CI to run each module as its own matrix job, so total wall-clock time is bounded by the slowest module rather than the sum of all modules. Adding a new `test_*.py` module — along with its `task.yaml` — automatically adds a new CI job.
This is quite dense. How about expanding it (visually) and with a bit more context on spread:

Suggested change
If you initialised your charm with `charmcraft init --profile test-machine` or `--profile test-kubernetes` (both currently experimental), your charm includes a `spread.yaml` and one `spread/integration/<module>/task.yaml` per test module. You can use `charmcraft test` in CI to run each module as its own matrix job, so total wall-clock time is bounded by the slowest module rather than the sum of all modules. Adding a new `test_*.py` module — along with its `task.yaml` — automatically adds a new CI job.
If you initialised your charm with `charmcraft init --profile test-machine` or `--profile test-kubernetes` (both currently experimental), your charm includes extra testing machinery:
- A [spread](https://github.com/canonical/spread) configuration file called `spread.yaml`.
- One file `spread/integration/<module>/task.yaml` per test module.
You can use `charmcraft test` in CI to run each module as its own matrix job, so total wall-clock time is bounded by the slowest module rather than the sum of all modules. Adding a new `test_*.py` module — along with its `task.yaml` — automatically adds a new CI job.

I'd also consider moving the sentence about automatically adding CI jobs. I'll make a separate suggestion about that.


A minimal workflow looks like:

```yaml
integration:
name: Integration / ${{ matrix.task }}
runs-on: ubuntu-latest
needs:
- unit
strategy:
fail-fast: false
matrix:
task:
- test_charm
# Add one entry per spread/integration/<module>/task.yaml.
steps:
- uses: actions/checkout@v6
with:
persist-credentials: false
- name: Set up LXD
uses: canonical/setup-lxd@8c6a87bfb56aa48f3fb9b830baa18562d8bfd4ee # v1
with:
channel: 5.21/stable
- name: Install charmcraft
run: sudo snap install charmcraft --classic
- name: Run spread test
# On GitHub Actions (CI=true) charmcraft test runs spread against the
# runner itself, instead of launching a nested LXD VM.
run: charmcraft test "craft:ubuntu-24.04:spread/integration/${{ matrix.task }}"
```

For a complete workflow that discovers modules dynamically (no hard-coded matrix), see the Ops repository's [example-charm-charmcraft-test.yaml](https://github.com/canonical/operator/blob/main/.github/workflows/example-charm-charmcraft-test.yaml). For the matching charm-side files, see the [httpbin-demo](https://github.com/canonical/operator/tree/main/examples/httpbin-demo) example charm.
Since the example in the doc doesn't show how to discover modules, I'd move the earlier sentence here. Something like this:

Suggested change
For a complete workflow that discovers modules dynamically (no hard-coded matrix), see the Ops repository's [example-charm-charmcraft-test.yaml](https://github.com/canonical/operator/blob/main/.github/workflows/example-charm-charmcraft-test.yaml). For the matching charm-side files, see the [httpbin-demo](https://github.com/canonical/operator/tree/main/examples/httpbin-demo) example charm.
It's also possible to discover modules dynamically (no hard-coded matrix), so that when you add a `test_*.py` module and corresponding `task.yaml` file, you automatically get a new CI job.
For an example, see the Ops repository's [example-charm-charmcraft-test.yaml workflow](https://github.com/canonical/operator/blob/main/.github/workflows/example-charm-charmcraft-test.yaml). This workflow runs integration tests for the [httpbin-demo](https://github.com/canonical/operator/tree/main/examples/httpbin-demo) example charm.

20 changes: 20 additions & 0 deletions docs/howto/write-integration-tests-for-a-charm.md
@@ -147,6 +147,26 @@ By convention, integration tests are kept in the charm's source tree, in a direc

If you initialised the charm with `charmcraft init`, your charm directory should already contain a `tests/integration/test_charm.py` file. Otherwise, manually create this directory structure and a test file. You can call the test file anything you like, as long as the name starts with `test_`.

(write-integration-tests-for-a-charm-split-across-modules)=
### Split tests across modules

As your suite grows, split your integration tests across several `test_*.py` modules, grouped by feature. Tests within a module share a single `juju` fixture (and therefore a single Juju model), so keep tests that need to build on each other together. Across modules, each gets a fresh model, which makes modules independent of each other.
Suggested change
As your suite grows, split your integration tests across several `test_*.py` modules, grouped by feature. Tests within a module share a single `juju` fixture (and therefore a single Juju model), so keep tests that need to build on each other together. Across modules, each gets a fresh model, which makes modules independent of each other.
As your suite grows, split your integration tests across several `test_*.py` modules, grouped by feature. Tests within a module share a single `juju` fixture (and therefore a single Juju model), so keep tests that need to build on each other together. Each module gets a fresh model, which makes modules independent of each other.


A common split:

- `test_charm.py` — smoke tests: pack, deploy, and check the charm reaches active status.
- `test_<feature>.py` — one module per feature area (for example, `test_backups.py`, `test_tls.py`, `test_upgrade.py`, `test_scaling.py`).
I think the meaning is clear enough from the examples.

Suggested change
- `test_<feature>.py` — one module per feature area (for example, `test_backups.py`, `test_tls.py`, `test_upgrade.py`, `test_scaling.py`).
- `test_<feature>.py` — for example, `test_backups.py`, `test_tls.py`, `test_upgrade.py`, `test_scaling.py`.


The main reason to split is **parallel CI execution**: each module can run as a separate job, so total wall-clock time is governed by the slowest module rather than the sum of all modules. `tox -e integration` still runs every module sequentially on a single machine; it's the CI matrix (see {ref}`set-up-ci-integration`) that turns module boundaries into parallel jobs. Adding a new `test_*.py` file then automatically adds a new CI job — no workflow changes needed.
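
The module-to-job mapping described above can be sketched in shell: each `tests/integration/test_*.py` file corresponds to one `spread/integration/<module>/task.yaml`, and hence one matrix entry. The module names below are hypothetical:

```shell
# Hypothetical layout: two integration test modules.
mkdir -p demo/tests/integration
touch demo/tests/integration/test_charm.py demo/tests/integration/test_backups.py

# Each module maps to one spread task directory, i.e. one parallel CI job.
(cd demo && for mod in tests/integration/test_*.py; do
  echo "spread/integration/$(basename "$mod" .py)/task.yaml"
done)
# → spread/integration/test_backups/task.yaml
# → spread/integration/test_charm/task.yaml
```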
I found this a bit confusing. The first example in our CI doc doesn't have a matrix, so new jobs wouldn't automatically be created for each test_*.py file, right?

My interpretation is more like this: Use separate modules so that you have the option of using a matrix in CI. If you're going to do that, we recommend using the charmcraft test approach described in the CI doc.


For real-world examples of module-per-feature splits, see:
I think this would be easier to read.

Suggested change
For real-world examples of module-per-feature splits, see:
For real-world examples of split tests, see:


- [postgresql-operator](https://github.com/canonical/postgresql-operator/tree/main/tests/integration) — machine charm, split across many feature modules (backups, TLS, upgrades, HA, and so on).
- [postgresql-k8s-operator](https://github.com/canonical/postgresql-k8s-operator/tree/main/tests/integration) — the Kubernetes equivalent.
- [opensearch-operator](https://github.com/canonical/opensearch-operator/tree/main/tests/integration) — larger suite with per-cloud backup modules.

For a minimal scaffold that you can lift into your own charm, see the [httpbin-demo](https://github.com/canonical/operator/tree/main/examples/httpbin-demo) example charm in the Ops repository.
Personal preference perhaps 🙂

Suggested change
For a minimal scaffold that you can lift into your own charm, see the [httpbin-demo](https://github.com/canonical/operator/tree/main/examples/httpbin-demo) example charm in the Ops repository.
For a minimal scaffold that you can use in your own charm, see the [httpbin-demo](https://github.com/canonical/operator/tree/main/examples/httpbin-demo) example charm in the Ops repository.


### Deploy your charm

Add this test in your integration test file:
@@ -416,6 +416,12 @@ You can ensure this by writing integration tests for your charm. In the charming

In this section we'll write a small integration test to check that the charm packs and deploys correctly.

```{tip}
Charmcraft can also scaffold a spread configuration that runs your integration tests under `charmcraft test`. From your project directory, run `charmcraft init --profile test-kubernetes --force` to drop in the extra files. This profile is currently experimental. You can see a worked example, with a few extra optimisations, in the [httpbin-demo charm](https://github.com/canonical/operator/tree/main/examples/httpbin-demo).

`charmcraft test` wraps [spread](https://github.com/canonical/spread) to provision a clean environment for each run: it packs the charm, launches an LXD VM (or configures a CI runner), uses [Concierge](https://github.com/canonical/concierge) to bootstrap Juju and the cloud substrate, then invokes your pytest integration tests inside it. Each `tests/integration/test_*.py` module becomes its own spread job, so CI can fan them out as a parallel matrix — adding a new test module automatically adds a new job.
```
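
Spread identifies each job as `<backend>:<system>:<suite>/<task>`. A sketch of how the job name passed to `charmcraft test` is assembled, using the names from the httpbin-demo example:

```shell
backend="craft"                       # from spread.yaml's `backends:` section
system="ubuntu-24.04"                 # from the backend's `systems:` list
task="spread/integration/test_charm"  # suite directory + task directory
echo "charmcraft test \"${backend}:${system}:${task}\""
# → charmcraft test "craft:ubuntu-24.04:spread/integration/test_charm"
```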

### Write a test

Let's write the simplest possible integration test, a [smoke test](https://en.wikipedia.org/wiki/Smoke_testing_(software)). This test will deploy the charm, then verify that the installation event is handled without errors.
34 changes: 34 additions & 0 deletions examples/httpbin-demo/spread.yaml
@@ -0,0 +1,34 @@
project: httpbin-demo

backends:
craft:
type: craft
systems:
- ubuntu-24.04:

prepare: |
# Juju needs the charm etc. to be owned by the running user.
chown -R "${USER}" "${PROJECT_PATH}"

suites:
spread/integration/:
summary: Integration tests

environment:
# `uv tool install` places binaries under the invoking user's ~/.local/bin.
PATH: $PATH:/root/.local/bin

prepare: |
sudo snap install --classic concierge
sudo concierge prepare --trace -p k8s --extra-snaps astral-uv
uv tool install tox --with tox-uv

exclude:
- .git
- .tox
- .venv
- .ruff_cache
- .pytest_cache
- .coverage

kill-timeout: 90m
195 changes: 195 additions & 0 deletions examples/httpbin-demo/spread/.extension
@@ -0,0 +1,195 @@
#!/bin/bash

usage() {
echo "usage: $(basename "$0") [command]"
echo "valid commands:"
echo " allocate Create a backend instance to run tests on"
echo " discard Destroy a backend instance used to run tests"
echo " backend-prepare Set up the system to run tests"
echo " backend-restore Restore the system after the tests ran"
echo " backend-prepare-each Prepare the system before each test"
echo " backend-restore-each Restore the system after each test run"
}

prepare() {
case "$SPREAD_SYSTEM" in
fedora*)
dnf update -y
dnf install -y snapd
while ! snap install snapd; do
echo "waiting for snapd..."
sleep 2
done
;;
debian*)
apt update
apt install -y snapd
while ! snap install snapd; do
echo "waiting for snapd..."
sleep 2
done
;;
ubuntu*)
apt update
;;
esac

snap wait system seed.loaded
snap refresh --hold

if systemctl is-enabled unattended-upgrades.service; then
systemctl stop unattended-upgrades.service
systemctl mask unattended-upgrades.service
fi
}

restore() {
case "$SPREAD_SYSTEM" in
ubuntu* | debian*)
apt autoremove -y --purge
;;
esac

rm -Rf "$PROJECT_PATH"
mkdir -p "$PROJECT_PATH"
}

prepare_each() {
true
}

restore_each() {
true
}

allocate_lxdvm() {
name=$(echo "$SPREAD_SYSTEM" | tr '[:punct:]' -)
system=$(echo "$SPREAD_SYSTEM" | tr / -)
if [[ "$system" =~ ^ubuntu- ]]; then
image="ubuntu:${system#ubuntu-}"
else
image="images:$(echo "$system" | tr - /)"
fi

VM_NAME="${VM_NAME:-spread-${name}-${RANDOM}}"
DISK="${DISK:-40}"
CPU="${CPU:-4}"
MEM="${MEM:-12}"

lxc launch --vm \
"${image}" \
"${VM_NAME}" \
-c limits.cpu="${CPU}" \
-c limits.memory="${MEM}GiB" \
-d root,size="${DISK}GiB"

while ! lxc exec "${VM_NAME}" -- true &>/dev/null; do sleep 0.5; done
lxc exec "${VM_NAME}" -- sed -i 's/^\s*#\?\s*\(PermitRootLogin\|PasswordAuthentication\)\>.*/\1 yes/' /etc/ssh/sshd_config
lxc exec "${VM_NAME}" -- bash -c "if [ -d /etc/ssh/sshd_config.d ]; then echo -e 'PermitRootLogin yes\nPasswordAuthentication yes' > /etc/ssh/sshd_config.d/00-spread.conf; fi"
lxc exec "${VM_NAME}" -- bash -c "echo root:${SPREAD_PASSWORD} | sudo chpasswd || true"

# Print the instance address to stdout
ADDR=""
while [ -z "$ADDR" ]; do ADDR=$(lxc ls -f csv | grep "^${VM_NAME}" | cut -d"," -f3 | cut -d" " -f1); done
echo "$ADDR" 1>&3
}

discard_lxdvm() {
instance_name="$(lxc ls -f csv | sed ':a;N;$!ba;s/(docker0)\n/(docker0) /' | grep "$SPREAD_SYSTEM_ADDRESS " | cut -f1 -d",")"
lxc delete -f "$instance_name"
}

allocate_ci() {
if [ -z "$CI" ]; then
echo "This backend is intended to be used only in CI systems."
exit 1
fi
sudo sed -i 's/^\s*#\?\s*\(PermitRootLogin\|PasswordAuthentication\)\>.*/\1 yes/' /etc/ssh/sshd_config
if [ -d /etc/ssh/sshd_config.d ]; then echo -e 'PermitRootLogin yes\nPasswordAuthentication yes' | sudo tee /etc/ssh/sshd_config.d/00-spread.conf; fi
sudo systemctl daemon-reload
sudo systemctl restart ssh

echo "root:${SPREAD_PASSWORD}" | sudo chpasswd || true

# Print the instance address to stdout
echo localhost >&3
}

discard_ci() {
true
}

allocate() {
exec 3>&1
exec 1>&2

case "$1" in
lxd-vm)
allocate_lxdvm
;;
ci)
allocate_ci
;;
*)
echo "unsupported backend $1" >&2
;;
esac
}

discard() {
case "$1" in
lxd-vm)
discard_lxdvm
;;
ci)
discard_ci
;;
*)
echo "unsupported backend $1" >&2
;;
esac
}

set -e

while getopts "" o; do
case "${o}" in
*)
usage
exit 1
;;
esac
done
shift $((OPTIND - 1))

CMD="$1"
PARM="$2"

if [ -z "$CMD" ]; then
usage
exit 0
fi

case "$CMD" in
allocate)
allocate "$PARM"
;;
discard)
discard "$PARM"
;;
backend-prepare)
prepare
;;
backend-restore)
restore
;;
backend-prepare-each)
prepare_each
;;
backend-restore-each)
restore_each
;;
*)
echo "unknown command $CMD" >&2
;;
esac
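
One subtlety in the `allocate` function above: spread reads the new instance's address from the allocator's stdout, so the script saves the original stdout as fd 3 (`exec 3>&1`), diverts all subsequent normal output to stderr (`exec 1>&2`), and writes only the address to fd 3. A minimal sketch of the same pattern, with a made-up address:

```shell
get_address() {
  exec 3>&1   # save the original stdout as fd 3
  exec 1>&2   # from here on, plain echoes go to stderr
  echo "launching instance..."   # diagnostic output: stderr
  echo "10.0.0.42" 1>&3          # the result: original stdout only
}

# The caller captures only the address; diagnostics don't pollute it.
addr=$(get_address 2>/dev/null)
echo "$addr"
# → 10.0.0.42
```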
5 changes: 5 additions & 0 deletions examples/httpbin-demo/spread/integration/test_charm/task.yaml
@@ -0,0 +1,5 @@
summary: Run test_charm integration tests

execute: |
cd "${SPREAD_PATH}"
CHARM_PATH="${CRAFT_ARTIFACT}" tox run -e integration -- tests/integration/test_charm.py