CVM base image deployment toolkit - Build, package, and deploy secure workloads to Confidential Virtual Machines.
Atakit is a command-line tool for deploying containerized workloads to Automata Linux CVMs across major cloud providers. It handles:
- Building workload packages from Docker Compose definitions
- Managing CVM base images
- Deploying to GCP, Azure, or local QEMU
- Registering workloads on-chain via smart contracts
```bash
git clone https://github.com/automata-network/atakit
cd atakit
just install
```

The binary will be available at `atakit`.
- Rust: 2024 edition or later
- just: command runner
- Cloud CLI tools: `gcloud` or the Azure CLI, depending on the target platform
- QEMU: for local development (optional)
Cloud account permissions required:
- Create/delete VMs and disks
- Manage network and firewall rules
- Access cloud storage (for disk images)
```bash
# List available images
atakit image ls

# Download an image
atakit image pull automata-linux:v0.1.0
```

Create an `atakit.json` configuration file in your project directory:
```json
{
  "workloads": [
    {
      "name": "my-workload",
      "version": "v0.0.1",
      "image": "automata-linux:v0.1.1",
      "docker_compose": "./docker-compose.yml"
    }
  ],
  "disks": [
    {
      "name": "my-data",
      "size": "10GB"
    }
  ],
  "deployment": {
    "my-deployment": {
      "workload": "my-workload-tdx",
      "platforms": {
        "gcp": { "vmtype": "c3-standard-4" }
      }
    }
  }
}
```

Create a `docker-compose.yml` for your workload:
```yaml
services:
  app:
    build: .
    image: my-workload:v0.0.1
    ports:
      - "8080:8080"
    volumes:
      - ./config:/app/config:ro
      - ./cvm-agent.sock:/app/cvm-agent.sock
      - my-data:/data

volumes:
  my-data:
```

Create a Dockerfile:
```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]
```

💡 See `workload_examples/` for complete working examples.
```bash
atakit workload build my-workload
```

This creates a `.tar.gz` package containing:
- Docker Compose definitions
- Measured files for attestation
- Docker images
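If you want to inspect what went into the archive, any tar tool can list it; below is a minimal Python sketch (the package path in the comment is hypothetical — check atakit's output for the actual file name):

```python
import tarfile

def list_package(path: str) -> list[str]:
    """Return the member names of a .tar.gz workload package."""
    with tarfile.open(path, mode="r:gz") as tar:
        return [m.name for m in tar.getmembers()]

# e.g. list_package("my-workload-v0.0.1.tar.gz")  # hypothetical file name
```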
```bash
atakit workload publish my-workload \
  --rpc-url $RPC_URL \
  --owner-private-key $PRIVATE_KEY
```

```bash
# Deploy to GCP
atakit deploy my-deployment --platform gcp

# Or deploy locally with QEMU
atakit deploy my-deployment --qemu
```

To view the serial console output of a GCP instance:

```bash
gcloud compute instances get-serial-port-output ${instance_name} --zone=${zone}
```

The main project configuration file, `atakit.json`:
```jsonc
{
  "workloads": [
    {
      "name": "workload-name",           // Workload identifier
      "version": "v0.0.1",               // Version (must start with 'v')
      "image": "automata-linux:v0.1.1",  // Base image reference
      "docker_compose": "./path/to/docker-compose.yml"
    }
  ],
  "disks": [
    {
      "name": "disk-name",
      "size": "10GB",
      "encrypted": false                 // Optional
    }
  ],
  "deployment": {
    "deployment-name": {
      "workload": "workload-name",
      "platforms": {
        "gcp": {
          "vmtype": "c3-standard-4",
          "zone": "us-central1-a"        // Optional
        },
        "azure": {
          "vmtype": "Standard_DC4s_v3",
          "region": "eastus"             // Optional
        }
      }
    }
  }
}
```

Atakit analyzes your `docker-compose.yml` to extract services, volumes, and configurations. Key requirements:
- Image references: use full registry paths (e.g., `docker.io/library/nginx`)
- Bind mounts: must be read-only (`:ro`), except for the CVM agent socket
- Named volumes: each volume must be owned by exactly one service
Example:
```yaml
services:
  app:
    image: my-app:v0.0.1
    ports:
      - "8080:8080"
    volumes:
      - ./config:/app/config:ro                # Measured config
      - ./additional-data/key:/app/key:ro      # Runtime data
      - app-data:/data                         # Persistent volume
      - ./cvm-agent.sock:/app/cvm-agent.sock   # Agent socket

volumes:
  app-data:
```

Inside the CVM, workloads can access the CVM agent via a Unix socket at `/app/cvm-agent.sock`. The agent provides cryptographic signing and key management APIs.
Socket Access with curl:
```bash
curl --unix-socket /app/cvm-agent.sock http://localhost/<endpoint>
```

Sign an arbitrary message using the session key. Returns a secp256k1 signature along with session metadata.
Request:
```json
{
  "message": "0x48656c6c6f"
}
```

| Field | Type | Description |
|---|---|---|
| `message` | hex string | Message bytes to sign (hex-encoded with `0x` prefix) |
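The `0x`-prefixed hex encoding is straightforward to produce; for example, in Python:

```python
def to_hex_message(data: bytes) -> str:
    """Encode raw bytes as the 0x-prefixed hex string the API expects."""
    return "0x" + data.hex()

print(to_hex_message(b"Hello"))  # -> 0x48656c6c6f
```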
Response:
```json
{
  "signature": "0x...",
  "sessionId": "0x...",
  "sessionKeyPublic": {
    "typeId": 3,
    "key": "0x..."
  },
  "sessionKeyFingerprint": "0x...",
  "ownerKeyPublic": {
    "typeId": 3,
    "key": "0x..."
  },
  "ownerFingerprint": "0x...",
  "workloadId": "0x...",
  "baseImageId": "0x..."
}
```

| Field | Type | Description |
|---|---|---|
| `signature` | hex string | secp256k1 signature (65 bytes: r \|\| s \|\| v) |
| `sessionId` | bytes32 | Current session ID |
| `sessionKeyPublic.typeId` | uint8 | Key type: 2 = P-256, 3 = secp256k1 |
| `sessionKeyPublic.key` | hex string | Public key bytes |
| `sessionKeyFingerprint` | bytes32 | Session key fingerprint |
| `ownerKeyPublic` | object | Owner key public identity |
| `ownerFingerprint` | bytes32 | Owner identity fingerprint |
| `workloadId` | bytes32 | Workload ID |
| `baseImageId` | bytes32 | Base image ID |
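A workload can also call the endpoint without curl. Below is a stdlib-only Python sketch; `UnixHTTPConnection` and `sign_message` are our own helpers (not part of atakit), and the default socket path assumes the compose mount shown earlier:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client connection that speaks HTTP over an AF_UNIX socket."""

    def __init__(self, socket_path: str):
        super().__init__("localhost")  # host is only used for the Host header
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

def sign_message(message: bytes, socket_path: str = "/app/cvm-agent.sock") -> dict:
    """POST /sign-message to the CVM agent and return the decoded JSON response."""
    conn = UnixHTTPConnection(socket_path)
    body = json.dumps({"message": "0x" + message.hex()})
    conn.request("POST", "/sign-message", body,
                 headers={"Content-Type": "application/json"})
    resp = conn.getresponse()
    data = json.loads(resp.read())
    conn.close()
    return data

# e.g. sign_message(b"Hello")["signature"]
```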
Example:
```bash
# Sign a message (hex-encoded "Hello")
curl --unix-socket /app/cvm-agent.sock \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"message": "0x48656c6c6f"}' \
  http://localhost/sign-message
```

Rotate the session key and register the new key on-chain. This generates a new session keypair and submits a transaction to update the session registry.
Request:
```json
{}
```

Response:
```json
{
  "sessionId": "0x...",
  "sessionKeyFingerprint": "0x...",
  "sessionKeyPublic": {
    "typeId": 3,
    "key": "0x..."
  },
  "txHash": "0x..."
}
```

| Field | Type | Description |
|---|---|---|
| `sessionId` | bytes32 | New session ID after rotation |
| `sessionKeyFingerprint` | bytes32 | New session key fingerprint |
| `sessionKeyPublic` | object | New session key public identity |
| `txHash` | bytes32 | On-chain transaction hash |
Example:
```bash
# Rotate the session key
curl --unix-socket /app/cvm-agent.sock \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{}' \
  http://localhost/rotate-key
```

Workloads use a standard directory layout:
```
my-workload/
├── docker-compose.yml     # Service definitions
├── config/                # Measured files (included in attestation)
│   └── app.conf
└── additional-data/       # Runtime data (not measured)
    └── secrets.json
```
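A small convenience sketch for creating that skeleton (this is not an atakit command; the file names simply mirror the tree above):

```python
from pathlib import Path

def scaffold_workload(root: str = "my-workload") -> None:
    """Create the standard workload directory skeleton with empty files."""
    base = Path(root)
    (base / "config").mkdir(parents=True, exist_ok=True)   # measured files
    (base / "additional-data").mkdir(exist_ok=True)        # runtime data (not measured)
    (base / "docker-compose.yml").touch()
    (base / "config" / "app.conf").touch()
    (base / "additional-data" / "secrets.json").touch()
```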
The sim-agent command provides a complete local development environment by simulating the CVM agent. It:
- Starts an embedded Anvil node that forks from a remote chain
- Registers a temporary workload with a dev version (default: `dev-YYYYMMDD`) in the on-chain WorkloadRegistry
- Serves mock `/sign-message` and `/rotate-key` endpoints over Unix sockets
We recommend forking from a remote chain so that the Automata contracts (SessionRegistry, BaseImageRegistry, etc.) are already available:
```bash
anvil --fork-url https://rpc.example.com --hardfork osaka
```

This gives you a local chain at `http://localhost:8545` with pre-funded test accounts.
The sim-agent requires a SessionRegistryMock contract on chain. You can check available contract addresses with:
```bash
atakit registry ls
```

Then start the sim-agent:

```bash
atakit sim-agent --rpc-url http://localhost:8545 my-workload
```

The output will show the temporary workload reference and its ID:

```
Workload: my-workload:dev-20260226 (workload_id: 0xabcd...)
```
The `--rpc-url` points to your local Anvil. The sim-agent starts a second embedded Anvil (default port 14345, configurable with `--anvil-port`) that forks from it. This second Anvil is accessible at `http://0.0.0.0:14345` for external tools like `cast` or your own scripts.
By default the dev version changes once per day (`dev-YYYYMMDD`). You can pin it with `--dev-version dev`, but make sure that version is not already registered in the WorkloadRegistry.
With the first Anvil running at localhost:8545, you can deploy your contracts and add the dev workload_id from step 2 to your contract's whitelist:
```bash
# Deploy your contract to the local Anvil
forge script script/Deploy.s.sol --rpc-url http://localhost:8545 --broadcast

# Whitelist the dev workload ID
cast send <YOUR_CONTRACT> "addWorkload(bytes32)" 0xabcd... --rpc-url http://localhost:8545
```

Build the workload and start the sim-agent:

```bash
atakit workload build my-workload
atakit sim-agent --rpc-url http://localhost:8545 my-workload
```

The sim-agent will register the temporary workload on its embedded Anvil (which forks from localhost:8545, inheriting your deployed contracts and whitelist) and start serving the CVM agent API on Unix sockets.
```bash
docker compose -f docker-compose.yml up
```

Your services can now call the simulated CVM agent via the Unix socket (e.g., `./cvm-agent.sock`) just like they would in a real CVM.
```
atakit sim-agent [OPTIONS] --rpc-url <RPC_URL> [WORKLOAD]...
```
| Option | Description |
|---|---|
| `[WORKLOAD]...` | Workload names from `atakit.json`. If omitted, all workloads are started |
| `--rpc-url <URL>` | Remote RPC endpoint (used as the Anvil fork URL) |
| `--dev-version <VER>` | Dev workload version (default: `dev-YYYYMMDD`) |
| `--anvil-port <PORT>` | Anvil listen port (default: 14345) |
| `--session-registry <ADDR>` | SessionRegistry address (auto-detected if omitted) |
Query on-chain registry data.
```bash
atakit registry query image automata-linux:v0.1.0 --rpc-url <RPC_URL>
```

Shows the full base image hierarchy: spec, platform profiles, invariant PCRs, and measurement variants.

```bash
atakit registry query workload guardian:v0.1.0 --rpc-url <RPC_URL>
```

Queries the WorkloadRegistry contract and prints the workload spec.
Atakit supports both Docker and Podman. The container engine is resolved in this order:
1. `CONTAINER_ENGINE` environment variable (`docker` or `podman`)
2. Global config preference (`~/.atakit/config.json`)
3. Auto-detect (tries docker first, then podman)
```bash
# Set the default container engine
atakit config default-container-engine podman

# Show the current default
atakit config default-container-engine

# Override per-invocation via env var
CONTAINER_ENGINE=docker atakit workload build
```

Note: Podman does not support cross-platform builds. If the target platform (e.g., `linux/amd64`) does not match your host architecture, switch to Docker.
| Variable | Description |
|---|---|
| `RUST_LOG` | Logging level (e.g., `info`, `debug`) |
| `ATAKIT_HOME` | Override the default data directory |
| `CONTAINER_ENGINE` | Container engine override (`docker` or `podman`) |
```bash
# Debug build
cargo build

# Release build
cargo build --release
```

For local development without cloud resources:

```bash
# Deploy with QEMU
atakit deploy my-deployment --qemu

# Instance files are stored in ~/.atakit/qemu/<instance-name>/
```

Apache-2.0