"Ducktape" is my personal infrastructure repository: "duct tape" for personal infrastructure needs. It collects projects that don't yet warrant separate repositories.
Manages configuration for: `agentydragon` (ThinkPad), `gpd` (GPD Win Max 2), `vps`, and `atlas` (Proxmox/k3s).
| Directory | Purpose |
|---|---|
| `agent_cli/` | Agent REPL CLI |
| `agent_server/` | FastAPI backend, runtime, policy |
| `cluster/` | k8s cluster |
| `mcp_infra/` | MCP compositor and utilities |
| `agent_pkg/` | Agent package infrastructure |
| `tana/` | Tana export toolkit |
| `wt/` | Worktree management |
| `ansible/` | System configuration |
| `docker/` | Container images |
| `dotfiles/` | Shell configs, scripts |
| `props/` | LLM critic eval system |
| Directory | Purpose |
|---|---|
| `finance/` | Portfolio tracking (Rust) |
| `trilium/` | Trilium Notes extensions |
| `inventree_utils/` | InvenTree plugins |
| `website/` | Personal website (Hakyll) |
| `k8s_old/` | Legacy k3s cluster |
These components exist but see minimal recent changes:

- Worthy: Rust-based portfolio tracker (uses Cargo/Bazel)
- Reconciliation utilities for various financial systems
- Trilium Notes (`trilium/`): extensions and widgets
- Tana Export (`tana/`): export utilities
- InvenTree (`inventree_utils/`): inventory management plugins
- Website (`website/`): personal website (Hakyll/Haskell)
This repository uses Bazel as the unified build system for all Python packages and most other components.

- `requirements_bazel.txt`: single source of truth for Python dependencies (the lockfile)
- All Python packages have `BUILD.bazel` files defining targets
- Linting runs via Bazel aspects (on by default; use `--config=nolint` to skip)
- Python 3.13+ is the target runtime version
Adding dependencies:

- Add the constraint to `pyproject.toml` (the single source of truth for Python dependency constraints)
- Run `bazel run //:requirements.update` to regenerate the lockfile (`requirements_bazel.txt`; never edit it manually)
- Use `@pypi//package_name` in `BUILD.bazel` deps
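As a sketch of the last step, a `BUILD.bazel` target depending on a PyPI package (the module `fetch.py` and the package `requests` are illustrative, not necessarily present in this repo's lockfile):

```starlark
py_library(
    name = "fetch",
    srcs = ["fetch.py"],
    imports = [".."],
    deps = [
        "@pypi//requests",  # illustrative PyPI dependency from the lockfile
    ],
)
```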
This repository uses Gazelle-compatible patterns for Python BUILD files. This enables automatic BUILD file generation and maintenance via `bazel run //tools:gazelle`.

Key pattern: one `py_library` per `.py` file (no aggregators).
```starlark
# CORRECT - per-file targets
py_library(
    name = "client",
    srcs = ["client.py"],
    deps = ["//other_pkg:specific_target"],
)

py_library(
    name = "server",
    srcs = ["server.py"],
    deps = [":client"],
)

# WRONG - aggregator bundling multiple files
py_library(
    name = "my_package",  # Don't do this
    srcs = ["client.py", "server.py"],
    deps = [...],
)
```

Rules:
- No aggregator targets: each `.py` file gets its own `py_library` with `name` matching the file stem
- Reference specific targets: use `//pkg:module`, not `//pkg` (e.g., `//openai_utils:model`, not `//openai_utils`)
- Use `imports = [".."]`: Bazel auto-generates `__init__.py` stubs; don't create real `__init__.py` files
Running Gazelle:

```shell
bazel run //tools:gazelle                 # Update BUILD files
bazel run //tools:gazelle -- --mode=diff  # Preview changes
```

Building and testing the Worthy portfolio tracker:

```shell
bazel build //finance/worthy:rust_main
bazel test //finance/worthy/...
```

Adding dependencies:
- Add to the root `Cargo.toml`
- Run `CARGO_BAZEL_REPIN=1 bazel build @crates//:all` to update `Cargo.Bazel.lock`
- Use `@crates//crate_name` in `BUILD.bazel` deps
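A sketch of the last step as a `BUILD.bazel` target (the crate `serde` and the target names are illustrative, and the exact `@crates//` label form should follow this repo's convention):

```starlark
load("@rules_rust//rust:defs.bzl", "rust_library")

rust_library(
    name = "portfolio",
    srcs = ["portfolio.rs"],
    deps = [
        "@crates//:serde",  # illustrative crate, pinned via the root Cargo.toml / Cargo.Bazel.lock
    ],
)
```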
To enable BuildBuddy remote caching and build event streaming, create `~/.config/bazel/buildbuddy.bazelrc`:

```
# BuildBuddy configuration
build --bes_results_url=https://app.buildbuddy.io/invocation/
build --bes_backend=grpcs://remote.buildbuddy.io
common --remote_cache=grpcs://remote.buildbuddy.io
common --remote_timeout=10m
common --remote_header=x-buildbuddy-api-key=YOUR_API_KEY_HERE
```
This file is loaded via `try-import` in `~/.bazelrc` and is silently ignored if missing. Alternatively, run `tools/setup_buildbuddy.sh` to generate the file interactively.
When BuildBuddy is configured via `tools/setup_buildbuddy.sh`, remote execution is enabled automatically. Build and test actions run on BuildBuddy workers (falling back to local execution), using the `//:rbe_linux_x64` platform.
The RBE worker image (ghcr.io/agentydragon/rbe-worker) is built from <tools/rbe_image/Dockerfile>, based on BuildBuddy's rbe-ubuntu24-04 image (which provides Docker CE, iptables-legacy for Firecracker compatibility, build-essential, python3, git, etc.). We layer on Rust toolchain deps, GHC's libtinfo5, and Chromium shared libraries. The image is built and pushed by the rbe-image.yml CI workflow.
Most configuration has migrated to Nix home-manager (see `nix/home/home.nix`):

- Shell configs: `programs.{bash, zsh, atuin, direnv, zoxide, eza}`
- Shell init scripts: `nix/home/shell/` (`bash-init.sh`, `zsh-init.zsh`, `common-init.sh`)
- Aliases: `home.shellAliases`
- Environment variables: `home.sessionVariables`
- Powerlevel10k: `nix/home/p10k.zsh` → `~/.p10k.zsh`
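A minimal sketch of how these options fit together in a home-manager module (all values are illustrative, not this repo's actual settings):

```nix
{
  programs.bash.enable = true;
  programs.zsh.enable = true;
  programs.direnv.enable = true;

  # Aliases shared across the managed shells
  home.shellAliases = {
    ll = "eza -l";  # illustrative
  };

  # Session-wide environment variables
  home.sessionVariables = {
    EDITOR = "vim";  # illustrative
  };
}
```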
Still managed outside Nix:

- `~/.profile`: complex conditional PATH management and legacy integrations (CUDA, lesspipe, dotnet, pnpm, machine-specific config)
- `~/.secret_env`: secret environment variables (not tracked in git)
- `~/.config/*`: application configs not yet migrated
- `~/.local/bin/*`: utility scripts (theme switchers, backup utilities)
- rcm config: `rcrc` controls symlink behavior for remaining dotfiles
- DO NOT modify dotfiles directly in `~/`: edit the source files in `dotfiles/` or `nix/home/`
- Shell configs are Nix-managed: do not edit `~/.bashrc`, `~/.zshrc`, or `~/.shellrc` directly
- Nix config: `home-manager switch --flake ~/code/ducktape/nix/home#<hostname>`
- Remaining dotfiles: via rcm (managed by the Ansible role `cli/tasks/dotfiles.yml`)

See `dotfiles/docs/shell_configuration.md` for the detailed loading order and migration status.
The ansible/ directory contains system configuration.
See <ansible/README.md> for details.
- `agentydragon.yaml`: main laptop configuration
- `vps.yaml`: VPS server deployment
- `gpd.yaml`: GPD laptop setup
- `wyrm.yaml`: Wyrm desktop provisioning
- System Base: `cli/`, `gui/`, `common/`
- Development: `golang/`, `dev_env/`, `dev_clojure/`
- Services: `trilium_server/`, `headscale_server/`, `syncthing_server/`
- Networking: `tailscale_client/`
- Headscale: Self-hosted Tailscale controller (100.64.0.0/10)
- Syncthing: Cross-device file synchronization
Install the git pre-commit hook:

```shell
pre-commit install
```

This installs the pre-commit framework, which runs ruff, buildifier, rustfmt, prettier, and other linters on staged files, checks for conflict markers, validates syntax, and more (see `.pre-commit-config.yaml`). ESLint and mypy run automatically on every `bazel build` (use `--config=nolint` to skip).
Files excluded from linting and formatting are controlled in two places:

| File | Purpose | Read by |
|---|---|---|
| `.gitattributes` | Source of truth for all exclusions | pre-commit hooks |
| `ruff.toml` `exclude` | Must mirror the Python patterns | `ruff check` (lint aspect) |

Why two files? Pre-commit hooks check `.gitattributes` for exclusions, but ruff's linter only reads `ruff.toml`. For Python files, patterns must exist in both.
To exclude a file/directory:

- Add `path/** rules-lint-ignored=true` to `.gitattributes`
- If it contains Python, also add it to the `exclude` list in `ruff.toml`
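For example, excluding a hypothetical `generated/` directory (the path is illustrative):

```
# .gitattributes
generated/** rules-lint-ignored=true

# ruff.toml (only needed because the directory contains Python)
exclude = ["generated"]
```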
BuildBuddy Workflows provides fast, Bazel-native CI with remote execution. The `buildbuddy.yaml` configuration runs:

- `bazel test //...`: run all tests (excluding `live_openai_api`)
- `bazel build //...`: unified quality checks (ruff, ESLint, mypy, clippy, rustfmt; run by default)
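The shape of that configuration might look like the following sketch (the action name, trigger branches, and the tag-filter flag are assumptions, not this repo's actual `buildbuddy.yaml`):

```yaml
actions:
  - name: "Test and lint"
    triggers:
      push:
        branches: ["main"]
      pull_request:
        branches: ["*"]
    bazel_commands:
      - "test //... --test_tag_filters=-live_openai_api"
      - "build //..."
```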
Setup:

- Enable BuildBuddy Workflows for this repository at https://app.buildbuddy.io
- BuildBuddy automatically uses your RBE configuration from `tools/setup_buildbuddy.sh`
- Workflow runs appear on GitHub PRs as status checks
Architecture:
- BuildBuddy: Bazel build/test/lint (fast RBE, dependency-aware parallelization)
- GitHub Actions: Non-Bazel tasks (ansible-lint, nix-flake-check, pre-commit) and artifact publishing
Artifacts (wheels, container images) are built by BuildBuddy, then published by GitHub Actions:
- Wheels → GitHub Releases
- Container images → GitHub Container Registry (GHCR)
GitHub Actions handles:
- Non-Bazel workflows (ansible, nix, pre-commit)
- Release publishing (wheels, Docker images)
- RBE image builds
See .github/workflows/ for workflow definitions.
AGPL 3.0
To update the Python requirements lock:

```shell
bazel run //:requirements.update
```

To format Bazel configuration files:

```shell
bazel run //tools/lint:buildifier
```

Use `act` to dry-run `.github/workflows/ci.yml`. With Nix:

```shell
# From repo root
nix run nixpkgs#act -- -W .github/workflows/ci.yml \
  -P ubuntu-latest=catthehacker/ubuntu:act-latest
```

Tips:

- `act` needs Docker. Make sure `docker pull catthehacker/ubuntu:act-latest` works first.
- Use `act -j <job-name>` to run a single job.