# ci: use charmcraft test for example charm (#2440)
`.github/workflows/example-charm-charmcraft-test.yaml` (new file):

```yaml
name: Example Charm charmcraft test

on:
  workflow_dispatch:
  schedule:
    - cron: '50 16 * * 2'  # 16:50 UTC Tuesdays

permissions: {}

jobs:
  # Discover one spread task per Python integration test module across every
  # example charm. Each `spread/integration/<module>/task.yaml` becomes a
  # separate matrix entry, so adding a new task.yaml automatically adds a job.
  collect-spread-jobs:
    runs-on: ubuntu-latest
    outputs:
      jobs: ${{ steps.collect.outputs.jobs }}
    steps:
      - uses: actions/checkout@v6
        with:
          persist-credentials: false
      - name: Collect spread jobs
        id: collect
        run: |
          jobs=$(find examples -path 'examples/*/spread/integration/*/task.yaml' -print \
            | sort \
            | jq -Rnc '[inputs | capture("examples/(?<charm>[^/]+)/spread/integration/(?<task>[^/]+)/task\\.yaml")]')
          echo "jobs=${jobs}" >> "$GITHUB_OUTPUT"
          echo "${jobs}" | jq .

  integration:
    name: ${{ matrix.charm }} / ${{ matrix.task }}
    needs: collect-spread-jobs
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        include: ${{ fromJson(needs.collect-spread-jobs.outputs.jobs) }}
    steps:
      - uses: actions/checkout@v6
        with:
          persist-credentials: false
      - name: Set up LXD
        # `charmcraft test` packs the charm in a managed LXD VM. The Docker
        # preinstalled on GHA runners drops LXD bridge traffic in the FORWARD
        # chain; canonical/setup-lxd adds the LXD bridge to DOCKER-USER so the
        # build VM has network access. It also sets the LXD daemon group to
        # `adm` (already the runner user's group), so no `sg` wrapper is needed.
        uses: canonical/setup-lxd@8c6a87bfb56aa48f3fb9b830baa18562d8bfd4ee  # v1
        with:
          channel: 5.21/stable
      - name: Install charmcraft
        run: sudo snap install charmcraft --classic
      - name: Fetch any charmlibs
        working-directory: examples/${{ matrix.charm }}
        run: |
          if grep -Eq "^charm-libs:" charmcraft.yaml; then
            charmcraft fetch-libs
          fi
      - name: Run spread test
        working-directory: examples/${{ matrix.charm }}
        # GitHub Actions sets CI=true, so charmcraft test uses the `ci`
        # allocator (runs against the runner itself) instead of launching
        # a nested LXD VM for the spread target.
        run: charmcraft test "craft:ubuntu-24.04:spread/integration/${{ matrix.task }}"
```
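The discovery pipeline in the collect step can be checked locally. The following is a sketch only: it builds a throwaway mock `examples/` tree with hypothetical module names, and prints plain `charm/task` pairs with `sed` where the workflow's `jq` capture emits JSON objects.

```shell
set -euo pipefail

# Mock layout: one example charm with two integration test modules.
root=$(mktemp -d)
mkdir -p "$root/examples/httpbin-demo/spread/integration/test_charm" \
         "$root/examples/httpbin-demo/spread/integration/test_scaling"
touch "$root/examples/httpbin-demo/spread/integration/test_charm/task.yaml" \
      "$root/examples/httpbin-demo/spread/integration/test_scaling/task.yaml"

# One matrix entry per task.yaml, printed here as "<charm>/<task>" pairs.
pairs=$(find "$root/examples" -path '*/spread/integration/*/task.yaml' -print \
  | sort \
  | sed -E 's|.*/examples/([^/]+)/spread/integration/([^/]+)/task\.yaml|\1/\2|')
echo "$pairs"
```

Adding a third `task.yaml` directory to the mock tree adds a third pair, which is exactly how a new test module becomes a new matrix job.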
The option `-p k8s` tells Concierge that we want a cloud managed by Canonical Kubernetes. If your charm is a machine charm, use `-p machine` instead.
The "Upload logs" step assumes that your integration tests use Jubilant together with `pytest-jubilant`. See [How to write integration tests for a charm](#write-integration-tests-for-a-charm-view-juju-logs).

This single job runs every integration test module sequentially. As your suite grows, split tests across modules and run each module in its own CI job; see {ref}`write-integration-tests-for-a-charm-split-across-modules`.
(set-up-ci-charmcraft-test)=
## Run integration tests in parallel with `charmcraft test`

If you initialised your charm with `charmcraft init --profile test-machine` or `--profile test-kubernetes` (both currently experimental), your charm includes a `spread.yaml` and one `spread/integration/<module>/task.yaml` per test module. You can use `charmcraft test` in CI to run each module as its own matrix job, so total wall-clock time is bounded by the slowest module rather than the sum of all modules. Adding a new `test_*.py` module, along with its `task.yaml`, automatically adds a new CI job.
> **Contributor:** This is quite dense. How about expanding it (visually) and with a bit more context on spread?
>
> **Contributor:** I'd also consider moving the sentence about automatically adding CI jobs. I'll make a separate suggestion about that.
|
|
||||||||||||||||
| A minimal workflow looks like: | ||||||||||||||||
|
|
||||||||||||||||
```yaml
integration:
  name: Integration / ${{ matrix.task }}
  runs-on: ubuntu-latest
  needs:
    - unit
  strategy:
    fail-fast: false
    matrix:
      task:
        - test_charm
        # Add one entry per spread/integration/<module>/task.yaml.
  steps:
    - uses: actions/checkout@v6
      with:
        persist-credentials: false
    - name: Set up LXD
      uses: canonical/setup-lxd@8c6a87bfb56aa48f3fb9b830baa18562d8bfd4ee  # v1
      with:
        channel: 5.21/stable
    - name: Install charmcraft
      run: sudo snap install charmcraft --classic
    - name: Run spread test
      # On GitHub Actions (CI=true) charmcraft test runs spread against the
      # runner itself, instead of launching a nested LXD VM.
      run: charmcraft test "craft:ubuntu-24.04:spread/integration/${{ matrix.task }}"
```
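A hard-coded matrix like the one above has to be kept in step with the `spread/integration/` directory by hand. As a sketch (throwaway mock charm root, hypothetical module names), the names that belong under `matrix.task` can be listed from the filesystem:

```shell
set -euo pipefail

# Mock charm root with two integration test modules.
charm=$(mktemp -d)
mkdir -p "$charm/spread/integration/test_charm" "$charm/spread/integration/test_tls"
touch "$charm/spread/integration/test_charm/task.yaml" \
      "$charm/spread/integration/test_tls/task.yaml"

# Each directory holding a task.yaml is one `matrix.task` entry.
tasks=$(cd "$charm" && ls -d spread/integration/*/ | xargs -n1 basename | sort)
echo "$tasks"
```

Running this against a real charm root (instead of the mock directory) prints the list to paste into the workflow.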
|
|
||||||||||||||||
| For a complete workflow that discovers modules dynamically (no hard-coded matrix), see the Ops repository's [example-charm-charmcraft-test.yaml](https://github.com/canonical/operator/blob/main/.github/workflows/example-charm-charmcraft-test.yaml). For the matching charm-side files, see the [httpbin-demo](https://github.com/canonical/operator/tree/main/examples/httpbin-demo) example charm. | ||||||||||||||||
|
> **Contributor:** Since the example in the doc doesn't show how to discover modules, I'd move the earlier sentence here.
By convention, integration tests are kept in the charm's source tree, in a directory called `tests/integration`.

If you initialised the charm with `charmcraft init`, your charm directory should already contain a `tests/integration/test_charm.py` file. Otherwise, manually create this directory structure and a test file. You can call the test file anything you like, as long as the name starts with `test_`.
(write-integration-tests-for-a-charm-split-across-modules)=
### Split tests across modules

As your suite grows, split your integration tests across several `test_*.py` modules, grouped by feature. Tests within a module share a single `juju` fixture (and therefore a single Juju model), so keep tests that need to build on each other together. Across modules, each gets a fresh model, which makes modules independent of each other.
|
|
||||||
| A common split: | ||||||
|
|
||||||
| - `test_charm.py` — smoke tests: pack, deploy, and check the charm reaches active status. | ||||||
| - `test_<feature>.py` — one module per feature area (for example, `test_backups.py`, `test_tls.py`, `test_upgrade.py`, `test_scaling.py`). | ||||||
|
> **Contributor:** I think the meaning is clear enough from the examples.
|
|
The main reason to split is **parallel CI execution**: each module can run as a separate job, so total wall-clock time is governed by the slowest module rather than the sum of all modules. `tox -e integration` still runs every module sequentially on a single machine; it's the CI matrix (see {ref}`set-up-ci-integration`) that turns module boundaries into parallel jobs. Adding a new `test_*.py` file then automatically adds a new CI job, with no workflow changes needed.
|
> **Contributor:** I found this a bit confusing. The first example in our CI doc doesn't have a matrix, so new jobs wouldn't automatically be created for each module. My interpretation is more like this: use separate modules so that you have the option of using a matrix in CI. If you're going to do that, we recommend using the ...
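The wall-clock arithmetic behind the parallel-CI claim is easy to make concrete. A toy sketch with made-up per-module durations (all numbers hypothetical):

```shell
set -euo pipefail

# Hypothetical integration modules and their run times in minutes.
declare -A minutes=( [test_charm]=6 [test_backups]=14 [test_tls]=9 )

sum=0
max=0
for module in "${!minutes[@]}"; do
    d=${minutes[$module]}
    sum=$((sum + d))              # sequential: durations add up
    if (( d > max )); then max=$d; fi  # parallel: bounded by the slowest
done

echo "sequential (tox -e integration): ${sum}m"
echo "parallel CI matrix (slowest module): ${max}m"
```

With these toy numbers a sequential run takes the 29-minute sum, while a per-module matrix finishes when the 14-minute module does.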
|
|
For real-world examples of module-per-feature splits, see:
|
> **Contributor:** I think this would be easier to read.
|
|
- [postgresql-operator](https://github.com/canonical/postgresql-operator/tree/main/tests/integration): machine charm, split across many feature modules (backups, TLS, upgrades, HA, and so on).
- [postgresql-k8s-operator](https://github.com/canonical/postgresql-k8s-operator/tree/main/tests/integration): the Kubernetes equivalent.
- [opensearch-operator](https://github.com/canonical/opensearch-operator/tree/main/tests/integration): larger suite with per-cloud backup modules.
|
|
For a minimal scaffold that you can lift into your own charm, see the [httpbin-demo](https://github.com/canonical/operator/tree/main/examples/httpbin-demo) example charm in the Ops repository.
|
> **Contributor:** Personal preference perhaps 🙂
|
|
### Deploy your charm

Add this test in your integration test file:
The example charm's `spread.yaml` (new file):

```yaml
project: httpbin-demo

backends:
  craft:
    type: craft
    systems:
      - ubuntu-24.04:

prepare: |
  # Juju needs the charm etc. to be owned by the running user.
  chown -R "${USER}" "${PROJECT_PATH}"

suites:
  spread/integration/:
    summary: Integration tests

    environment:
      # `uv tool install` places binaries under the invoking user's ~/.local/bin.
      PATH: $PATH:/root/.local/bin

    prepare: |
      sudo snap install --classic concierge
      sudo concierge prepare --trace -p k8s --extra-snaps astral-uv
      uv tool install tox --with tox-uv

exclude:
  - .git
  - .tox
  - .venv
  - .ruff_cache
  - .pytest_cache
  - .coverage

kill-timeout: 90m
```
The spread backend helper script (new file):

```bash
#!/bin/bash

usage() {
    echo "usage: $(basename "$0") [command]"
    echo "valid commands:"
    echo "  allocate              Create a backend instance to run tests on"
    echo "  discard               Destroy a backend instance used to run tests"
    echo "  backend-prepare       Set up the system to run tests"
    echo "  backend-restore       Restore the system after the tests ran"
    echo "  backend-prepare-each  Prepare the system before each test"
    echo "  backend-restore-each  Restore the system after each test run"
}

prepare() {
    case "$SPREAD_SYSTEM" in
        fedora*)
            dnf update -y
            dnf install -y snapd
            while ! snap install snapd; do
                echo "waiting for snapd..."
                sleep 2
            done
            ;;
        debian*)
            apt update
            apt install -y snapd
            while ! snap install snapd; do
                echo "waiting for snapd..."
                sleep 2
            done
            ;;
        ubuntu*)
            apt update
            ;;
    esac

    snap wait system seed.loaded
    snap refresh --hold

    if systemctl is-enabled unattended-upgrades.service; then
        systemctl stop unattended-upgrades.service
        systemctl mask unattended-upgrades.service
    fi
}

restore() {
    case "$SPREAD_SYSTEM" in
        ubuntu* | debian*)
            apt autoremove -y --purge
            ;;
    esac

    rm -Rf "$PROJECT_PATH"
    mkdir -p "$PROJECT_PATH"
}

prepare_each() {
    true
}

restore_each() {
    true
}

allocate_lxdvm() {
    name=$(echo "$SPREAD_SYSTEM" | tr '[:punct:]' -)
    system=$(echo "$SPREAD_SYSTEM" | tr / -)
    if [[ "$system" =~ ^ubuntu- ]]; then
        image="ubuntu:${system#ubuntu-}"
    else
        image="images:$(echo "$system" | tr - /)"
    fi

    VM_NAME="${VM_NAME:-spread-${name}-${RANDOM}}"
    DISK="${DISK:-40}"
    CPU="${CPU:-4}"
    MEM="${MEM:-12}"

    lxc launch --vm \
        "${image}" \
        "${VM_NAME}" \
        -c limits.cpu="${CPU}" \
        -c limits.memory="${MEM}GiB" \
        -d root,size="${DISK}GiB"

    while ! lxc exec "${VM_NAME}" -- true &>/dev/null; do sleep 0.5; done
    lxc exec "${VM_NAME}" -- sed -i 's/^\s*#\?\s*\(PermitRootLogin\|PasswordAuthentication\)\>.*/\1 yes/' /etc/ssh/sshd_config
    lxc exec "${VM_NAME}" -- bash -c "if [ -d /etc/ssh/sshd_config.d ]; then echo -e 'PermitRootLogin yes\nPasswordAuthentication yes' > /etc/ssh/sshd_config.d/00-spread.conf; fi"
    lxc exec "${VM_NAME}" -- bash -c "echo root:${SPREAD_PASSWORD} | sudo chpasswd || true"

    # Print the instance address to stdout (fd 3, saved in allocate()).
    ADDR=""
    while [ -z "$ADDR" ]; do ADDR=$(lxc ls -f csv | grep "^${VM_NAME}" | cut -d"," -f3 | cut -d" " -f1); done
    echo "$ADDR" 1>&3
}

discard_lxdvm() {
    instance_name="$(lxc ls -f csv | sed ':a;N;$!ba;s/(docker0)\n/(docker0) /' | grep "$SPREAD_SYSTEM_ADDRESS " | cut -f1 -d",")"
    lxc delete -f "$instance_name"
}

allocate_ci() {
    if [ -z "$CI" ]; then
        echo "This backend is intended to be used only in CI systems."
        exit 1
    fi
    sudo sed -i 's/^\s*#\?\s*\(PermitRootLogin\|PasswordAuthentication\)\>.*/\1 yes/' /etc/ssh/sshd_config
    if [ -d /etc/ssh/sshd_config.d ]; then echo -e 'PermitRootLogin yes\nPasswordAuthentication yes' | sudo tee /etc/ssh/sshd_config.d/00-spread.conf; fi
    sudo systemctl daemon-reload
    sudo systemctl restart ssh

    echo "root:${SPREAD_PASSWORD}" | sudo chpasswd || true

    # Print the instance address to stdout (fd 3, saved in allocate()).
    echo localhost >&3
}

discard_ci() {
    true
}

allocate() {
    # Save the real stdout as fd 3, then send everything else to stderr,
    # so that only the instance address reaches spread on stdout.
    exec 3>&1
    exec 1>&2

    case "$1" in
        lxd-vm)
            allocate_lxdvm
            ;;
        ci)
            allocate_ci
            ;;
        *)
            echo "unsupported backend $1" >&2
            ;;
    esac
}

discard() {
    case "$1" in
        lxd-vm)
            discard_lxdvm
            ;;
        ci)
            discard_ci
            ;;
        *)
            echo "unsupported backend $1" >&2
            ;;
    esac
}

set -e

while getopts "" o; do
    case "${o}" in
        *)
            usage
            exit 1
            ;;
    esac
done
shift $((OPTIND - 1))

CMD="$1"
PARM="$2"

if [ -z "$CMD" ]; then
    usage
    exit 0
fi

case "$CMD" in
    allocate)
        allocate "$PARM"
        ;;
    discard)
        discard "$PARM"
        ;;
    backend-prepare)
        prepare
        ;;
    backend-restore)
        restore
        ;;
    backend-prepare-each)
        prepare_each
        ;;
    backend-restore-each)
        restore_each
        ;;
    *)
        echo "unknown command $CMD" >&2
        ;;
esac
```
`spread/integration/test_charm/task.yaml` (new file):

```yaml
summary: Run test_charm integration tests

execute: |
  cd "${SPREAD_PATH}"
  CHARM_PATH="${CRAFT_ARTIFACT}" tox run -e integration -- tests/integration/test_charm.py
```