This guide will help you set up a local development environment for the RustFS Kubernetes Operator.
- Code quality and PR gates (format, clippy, tests, console lint) are defined in `CONTRIBUTING.md` and enforced by the `Makefile`. Run `make pre-commit` from the repo root before opening a PR.
- `just` vs `make`: the `Justfile` provides optional tasks (`just pre-commit` runs `fmt` + `clippy` + `cargo check` + `cargo nextest`; it does not run `console-web` checks). For parity with `CONTRIBUTING.md` and the `Makefile`, prefer `make pre-commit`.
- This guide focuses on toolchain setup, clusters, and day-to-day workflows, not on duplicating the full command matrix from CONTRIBUTING.
- `DEVELOPMENT-NOTES.md` records past analysis sessions; it is not a substitute for CONTRIBUTING or this file.
Rust Toolchain (1.91+)
- The project uses Rust Edition 2024
- Required components: `rustfmt`, `clippy`, `rust-src`, `rust-analyzer`

Kubernetes Cluster

kubectl
- For interacting with Kubernetes clusters

Optional Tools
- `just` - Task runner (the project includes a Justfile)
- `cargo-nextest` - Faster test runner
- `docker` - For building container images
- `OpenLens` - Kubernetes cluster management GUI
The project uses rust-toolchain.toml to automatically manage the Rust version:
# If Rust is not installed yet
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Navigate to project directory (Rust will auto-install correct toolchain version)
cd ~/operator
# Verify installation
rustc --version
cargo --version

The toolchain will automatically install:
- `rustfmt` - Code formatter
- `clippy` - Code linter
- `rust-src` - Rust source code
- `rust-analyzer` - IDE support
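Based on the requirements above, the `rust-toolchain.toml` pinning typically looks something like the following. This is an illustrative sketch only; the exact channel and component list live in the repo's actual file:

```toml
[toolchain]
# Assumption: the pin matches the documented 1.91+ requirement
channel = "1.91"
components = ["rustfmt", "clippy", "rust-src", "rust-analyzer"]
```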
# Install cargo-nextest (faster test runner)
cargo install cargo-nextest
# Install just (task runner)
# macOS
brew install just
# Linux
# Download from https://github.com/casey/just/releases
# Or use package manager

git clone https://github.com/rustfs/operator.git
cd operator

# Check Rust toolchain
rustc --version # Should be 1.91+
# Check project dependencies
cargo check
# Run formatting check
cargo fmt --all --check
# Run clippy check
cargo clippy --all-targets --all-features -- -D warnings

The operator can be built using Cargo (the standard Rust build tool) or the Justfile task runner.
# Debug build (faster compilation, larger binary, slower runtime)
cargo build
# Release build (slower compilation, smaller binary, faster runtime)
cargo build --release
# Binary locations:
# Debug: target/debug/operator
# Release: target/release/operator

# Build Debug binary
just build
# Build Release binary
just build MODE=release

After building, the operator binary will be located at:
- Debug: `target/debug/operator`
- Release: `target/release/operator`
You can run it directly:
# Run debug binary
./target/debug/operator --help
# Run release binary
./target/release/operator --help

# Format code before building
just fmt && just build
# Run all checks before building (use make for full gate including console-web)
make pre-commit && just build MODE=release
# Clean and rebuild
cargo clean && cargo build --release

kind (Kubernetes in Docker) is the recommended tool for local Kubernetes development.
# Using Homebrew (recommended)
brew install kind
# Verify installation
kind --version

# Download binary from releases
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
# Or using package manager (if available)
# Verify installation
kind --version

# Using Chocolatey
choco install kind
# Or download from: https://kind.sigs.k8s.io/docs/user/quick-start/

# Create a cluster named 'rustfs-dev'
kind create cluster --name rustfs-dev
# Verify cluster is running
kubectl cluster-info --context kind-rustfs-dev
# List clusters
kind get clusters
# Check cluster nodes
kubectl get nodes

# If the cluster exists but is stopped, restart it
# Note: kind clusters run in Docker containers, so they persist until deleted
# To "restart", you may need to recreate if Docker was restarted
# Check if cluster containers are running
docker ps | grep rustfs-dev
# If containers are stopped, restart Docker or recreate cluster
kind create cluster --name rustfs-dev

# kind clusters run in Docker containers
# To stop, you can stop Docker or delete the cluster
# Stop Docker Desktop (macOS/Windows)
# Or stop Docker daemon (Linux)
sudo systemctl stop docker
# Note: Stopping Docker will stop all kind clusters

# If Docker was restarted, kind clusters may need to be recreated
# Check cluster status
kind get clusters
# If cluster exists but kubectl can't connect, recreate it
kind delete cluster --name rustfs-dev
kind create cluster --name rustfs-dev
# Restore kubectl context
kubectl cluster-info --context kind-rustfs-dev

# Delete a specific cluster
kind delete cluster --name rustfs-dev
# Delete all kind clusters
kind delete cluster --all
# Verify deletion
kind get clusters

Create a custom kind configuration file `kind-config.yaml`:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP

Create the cluster with the custom config:

kind create cluster --name rustfs-dev --config kind-config.yaml

OpenLens is a powerful Kubernetes IDE for managing clusters visually.
# Using Homebrew
brew install --cask openlens
# Or download from: https://github.com/MuhammedKalkan/OpenLens/releases

# Download AppImage from releases
wget https://github.com/MuhammedKalkan/OpenLens/releases/latest/download/OpenLens-<version>.AppImage
chmod +x OpenLens-<version>.AppImage
./OpenLens-<version>.AppImage
# Or install via Snap
snap install openlens

# Using Chocolatey
choco install openlens
# Or download installer from: https://github.com/MuhammedKalkan/OpenLens/releases

1. Get the kubeconfig path:
   # kind stores kubeconfig in ~/.kube/config
   # Or get a specific context
   kubectl config view --minify --context kind-rustfs-dev
2. Open OpenLens:
   - Click "Add Cluster" or the "+" button
   - Select "Add from kubeconfig"
   - Navigate to `~/.kube/config` (or paste the kubeconfig content)
   - Select context: `kind-rustfs-dev`
   - Click "Add"
3. Verify the connection:
   - You should see your kind cluster in the cluster list
   - Click on it to view nodes, pods, services, etc.
- View Resources: Browse Tenants, Pods, StatefulSets, Services
- View Logs: Click on any Pod to see logs
- Terminal Access: Open terminal in Pods directly
- Resource Editor: Edit YAML files directly
- Event Viewer: Monitor Kubernetes events in real-time
The operator requires the Tenant CRD to be installed in your cluster:
# Generate CRD YAML
cargo run -- crd > tenant-crd.yaml
# Or output directly to file
cargo run -- crd -f tenant-crd.yaml
# Install CRD
kubectl apply -f tenant-crd.yaml
# Verify CRD is installed
kubectl get crd tenants.rustfs.com
# View CRD details
kubectl describe crd tenants.rustfs.com

Ensure kubectl can access your cluster:
# Check current context
kubectl config current-context
# List all contexts
kubectl config get-contexts
# Switch to correct context (if needed)
kubectl config use-context kind-rustfs-dev
# Verify cluster connection
kubectl cluster-info
kubectl get nodes

# Set log level (optional)
export RUST_LOG=debug
export RUST_LOG=operator=debug,kube=info
# Run operator in debug mode
cargo run -- server
# Or run in release mode (faster)
cargo run --release -- server

The operator will:
- Connect to your Kubernetes cluster
- Watch for Tenant CRD changes
- Reconcile resources (StatefulSets, Services, RBAC)
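The watch-and-reconcile loop described above can be sketched in plain Rust. This is an illustrative model only, not the project's actual kube-rs controller; `TenantSpec` and the in-memory `ClusterState` are stand-ins for the real CRD and cluster API:

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
struct TenantSpec {
    replicas: u32,
}

// "Cluster state": which StatefulSets exist and how many replicas each has.
type ClusterState = HashMap<String, u32>;

// Reconcile is idempotent: it compares desired vs. observed state and only
// changes what differs, so re-running it on duplicate events is always safe.
// Returns true if it had to change anything.
fn reconcile(name: &str, spec: &TenantSpec, state: &mut ClusterState) -> bool {
    match state.get(name) {
        Some(&replicas) if replicas == spec.replicas => false, // already converged
        _ => {
            state.insert(name.to_string(), spec.replicas); // create or update
            true
        }
    }
}

fn main() {
    let mut state = ClusterState::new();
    let spec = TenantSpec { replicas: 4 };

    // First watch event: the Tenant is created, resources are materialized.
    assert!(reconcile("dev-minimal", &spec, &mut state));
    // Duplicate event: nothing differs, reconcile is a no-op.
    assert!(!reconcile("dev-minimal", &spec, &mut state));
    println!("state after reconcile: {:?}", state);
}
```

The no-op second pass is the property that makes level-triggered controllers robust: the operator can crash and restart at any point and simply reconcile again.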
# Build the binary first
cargo build --release
# Run the binary
./target/release/operator server

# Build Docker image
docker build -t rustfs/operator:dev .
# Load image into kind cluster
kind load docker-image rustfs/operator:dev --name rustfs-dev
# Deploy using Helm (see deploy/README.md)
helm install rustfs-operator deploy/rustfs-operator/ \
--namespace rustfs-system \
--create-namespace \
--set image.tag=dev \
--set image.pullPolicy=Never

In another terminal:
# Create a test Tenant
kubectl apply -f examples/minimal-dev-tenant.yaml
# Watch Tenant status
kubectl get tenant dev-minimal -w
# View created resources
kubectl get pods -l rustfs.tenant=dev-minimal
kubectl get statefulset -l rustfs.tenant=dev-minimal
kubectl get svc -l rustfs.tenant=dev-minimal
kubectl get pvc -l rustfs.tenant=dev-minimal

Run with verbose logging:
# Set detailed log levels
export RUST_LOG=debug
export RUST_LOG=operator=debug,kube=info,tracing=debug
# Run operator
cargo run -- server

Use a debugger (VS Code):
- Install "CodeLLDB" extension
- Create `.vscode/launch.json`:
{
"version": "0.2.0",
"configurations": [
{
"type": "lldb",
"request": "launch",
"name": "Debug Operator",
"cargo": {
"args": ["build", "--bin", "operator"],
"filter": {
"name": "operator",
"kind": "bin"
}
},
"args": ["server"],
"cwd": "${workspaceFolder}",
"env": {
"RUST_LOG": "debug"
}
}
]
}

- Set breakpoints and press F5
View operator logs (if deployed in cluster):
# Get operator pod name
kubectl get pods -n rustfs-system
# View logs
kubectl logs -f -n rustfs-system -l app.kubernetes.io/name=rustfs-operator
# View logs with timestamps
kubectl logs -f -n rustfs-system -l app.kubernetes.io/name=rustfs-operator --timestamps
# View previous logs (if pod restarted)
kubectl logs --previous -n rustfs-system -l app.kubernetes.io/name=rustfs-operator

Debug the operator pod:
# Exec into operator pod
kubectl exec -it -n rustfs-system <operator-pod-name> -- /bin/sh
# Check environment variables
kubectl exec -n rustfs-system <operator-pod-name> -- env

Check reconciliation status:
# View Tenant status
kubectl get tenant <tenant-name> -o yaml
# View Tenant events
kubectl describe tenant <tenant-name>
# View all events
kubectl get events --sort-by='.lastTimestamp' --all-namespaces
# Watch events in real-time
kubectl get events --watch --all-namespaces

Check created resources:
# View StatefulSet details
kubectl get statefulset -l rustfs.tenant=<tenant-name> -o yaml
# View Pod status
kubectl get pods -l rustfs.tenant=<tenant-name> -o wide
# View Pod logs
kubectl logs -f -l rustfs.tenant=<tenant-name>

The operator uses the tracing crate for structured logging. Log levels:
- `ERROR` - Errors that need attention
- `WARN` - Warnings about potential issues
- `INFO` - General informational messages
- `DEBUG` - Detailed debugging information
- `TRACE` - Very detailed tracing (very verbose)
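The per-module filtering that `RUST_LOG` directives such as `operator=debug,kube=info` express can be modeled in a few lines of Rust. This is a simplified sketch for intuition only; real parsing is done by `tracing-subscriber`'s `EnvFilter`, and this toy version ignores bare global levels like `RUST_LOG=debug`:

```rust
// Levels are ordered by verbosity: Error < Warn < Info < Debug < Trace.
#[derive(Clone, Copy, Debug, PartialEq, PartialOrd)]
enum Level {
    Error,
    Warn,
    Info,
    Debug,
    Trace,
}

fn parse_level(s: &str) -> Level {
    match s {
        "error" => Level::Error,
        "warn" => Level::Warn,
        "info" => Level::Info,
        "debug" => Level::Debug,
        _ => Level::Trace,
    }
}

// Returns true if a record at `level` from `module` passes the directive string.
fn enabled(directive: &str, module: &str, level: Level) -> bool {
    directive
        .split(',')
        .filter_map(|d| d.split_once('='))          // "operator=debug" -> ("operator", "debug")
        .find(|(m, _)| module.starts_with(m))       // first matching module prefix wins
        .map(|(_, l)| level <= parse_level(l))      // record level must not exceed the cap
        .unwrap_or(false)                           // no directive for this module: drop
}

fn main() {
    let rust_log = "operator=debug,kube=info";
    assert!(enabled(rust_log, "operator", Level::Debug));
    assert!(enabled(rust_log, "kube", Level::Info));
    assert!(!enabled(rust_log, "kube", Level::Debug)); // kube is capped at info
    println!("directive filtering behaves as expected");
}
```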
# Set global log level
export RUST_LOG=debug
# Set per-module log levels
export RUST_LOG=operator=debug,kube=info,tracing=warn
# Common configurations:
# Development
export RUST_LOG=operator=debug,kube=info
# Production
export RUST_LOG=operator=info,kube=warn
# Troubleshooting
export RUST_LOG=operator=trace,kube=debug

When running locally:
- Logs are output to stdout/stderr
- View in terminal where operator is running
- Can redirect to a file: `cargo run -- server 2>&1 | tee operator.log`
When deployed in cluster:
- Logs are stored in Pod logs
- View with: `kubectl logs -f <operator-pod-name> -n rustfs-system`
- Logs persist until the Pod is deleted
- Use log aggregation tools (e.g., Loki, Fluentd) for long-term storage
# Terminal 1: Run operator with logging
export RUST_LOG=debug
cargo run -- server
# Terminal 2: View logs in real-time (if redirected to file)
tail -f operator.log
# Or use system log viewer (macOS)
log stream --predicate 'process == "operator"'

# View current logs
kubectl logs -f -n rustfs-system -l app.kubernetes.io/name=rustfs-operator
# View logs with timestamps
kubectl logs -f -n rustfs-system -l app.kubernetes.io/name=rustfs-operator --timestamps
# View last 100 lines
kubectl logs --tail=100 -n rustfs-system -l app.kubernetes.io/name=rustfs-operator
# View logs since specific time
kubectl logs --since=10m -n rustfs-system -l app.kubernetes.io/name=rustfs-operator
# View logs from previous container (if pod restarted)
kubectl logs --previous -n rustfs-system -l app.kubernetes.io/name=rustfs-operator
# Export logs to file
kubectl logs -n rustfs-system -l app.kubernetes.io/name=rustfs-operator > operator.log

- Open OpenLens
- Select your cluster
- Navigate to Workloads → Pods
- Find the operator pod in the `rustfs-system` namespace
- Click on the pod → Logs tab
- View real-time logs with filtering options
Successful reconciliation:
INFO reconcile: reconciled successful, object: <tenant-name>
Reconciliation errors:
ERROR reconcile: reconcile failed: <error-message>
WARN error_policy: <error-details>
Resource creation:
DEBUG Creating StatefulSet <name>
INFO StatefulSet <name> created successfully
Status updates:
DEBUG Updating tenant status: <status-details>
# Run all tests
cargo test
# Use nextest (faster)
cargo nextest run
# Or use just
just test
# Run specific test
cargo test test_statefulset_no_update_needed
# Run ignored tests (includes TLS tests)
cargo test -- --ignored
# Run tests with output
cargo test -- --nocapture
# Run tests in single thread (for debugging)
cargo test -- --test-threads=1

1. Create a feature branch
   git checkout -b feature/your-feature-name
2. Write code
3. Format code
   cargo fmt --all  # or: just fmt
4. Run checks
   make pre-commit  # For optional Just tasks instead, see "Documentation map"; `just pre-commit` differs (no console-web)
5. Run tests
   cargo test  # or: just test
6. Test the operator locally
   # Terminal 1: Run the operator
   cargo run -- server
   # Terminal 2: Create test resources
   kubectl apply -f examples/minimal-dev-tenant.yaml
   kubectl get tenant -w
7. Commit code
   git add .
   git commit -m "feat: your feature description"
The project enforces strict code quality standards:
# Run all checks (Rust + console-web; matches CONTRIBUTING / Makefile)
make pre-commit
# Optional: Justfile tasks (no console-web in `just pre-commit`)
just fmt-check # Check formatting
just clippy # Code linting
just check # Compilation check
just test        # Tests (cargo nextest)

Note: The project has deny-level clippy rules:
- `unwrap_used = "deny"` - Prohibits `unwrap()`
- `expect_used = "deny"` - Prohibits `expect()`
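For example, code that would be rejected under these lints can usually be rewritten to propagate the error with `?`. This is a small self-contained illustration, not code from the operator itself:

```rust
use std::num::ParseIntError;

// Rejected by clippy under unwrap_used:
//   fn port(s: &str) -> u16 { s.parse().unwrap() }

// Accepted: the error is propagated to the caller instead of panicking.
fn port(s: &str) -> Result<u16, ParseIntError> {
    let p: u16 = s.parse()?;
    Ok(p)
}

fn main() {
    match port("9000") {
        Ok(p) => println!("port = {p}"),
        Err(e) => eprintln!("bad port: {e}"),
    }
    assert_eq!(port("9000"), Ok(9000));
    assert!(port("not-a-port").is_err());
}
```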
# Delete test Tenant (automatically deletes all related resources)
kubectl delete tenant dev-minimal
# Delete all Tenants
kubectl delete tenant --all

# Delete kind cluster
kind delete cluster --name rustfs-dev
# Delete all kind clusters
kind delete cluster --all
# minikube
minikube delete

# Clean target directory
cargo clean
# Clean and rebuild
cargo clean && cargo build

Problem: error: toolchain 'stable' is not installed
Solution:
# Navigate to project directory, rustup will auto-install correct toolchain
cd /path/to/operator
rustup show

Problem: Failed to connect to Kubernetes API
Solution:
# Check kubectl configuration
kubectl config current-context
kubectl cluster-info
# Ensure cluster is running
kubectl get nodes
# For kind: check if cluster containers are running
docker ps | grep rustfs-dev

Problem: the server could not find the requested resource
Solution:
# Reinstall CRD
cargo run -- crd | kubectl apply -f -
# Verify CRD is installed
kubectl get crd tenants.rustfs.com

Problem: Clippy reports unwrap_used or expect_used errors
Solution:
- Use `Result` and the `?` operator
- Use `match` or `if let` to handle `Option`
- Use `snafu` for error handling
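To illustrate the `Option` case, a lookup that might otherwise call `expect()` can fall back explicitly. The helper below is hypothetical, for illustration only:

```rust
use std::collections::HashMap;

// Hypothetical lookup: returns the replica count for a tenant, defaulting
// to 0 instead of panicking when the tenant is unknown.
fn tenant_replicas(replicas: &HashMap<String, u32>, name: &str) -> u32 {
    // Instead of replicas.get(name).expect("missing tenant"):
    match replicas.get(name) {
        Some(&n) => n,
        None => 0, // explicit fallback, no panic
    }
}

fn main() {
    let mut replicas = HashMap::new();
    replicas.insert("dev-minimal".to_string(), 4);
    assert_eq!(tenant_replicas(&replicas, "dev-minimal"), 4);
    assert_eq!(tenant_replicas(&replicas, "unknown"), 0);
    println!("lookups handled without unwrap/expect");
}
```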
Problem: Tests cannot run or fail
Solution:
# Run single test with detailed output
cargo test -- --nocapture test_name
# Run all tests (including ignored)
cargo test -- --include-ignored

Problem: Cannot connect to the kind cluster after a Docker restart
Solution:
# Recreate cluster
kind delete cluster --name rustfs-dev
kind create cluster --name rustfs-dev
# Restore kubectl context
kubectl cluster-info --context kind-rustfs-dev

# Build
cargo build # Debug build
cargo build --release # Release build
# Check
cargo check # Quick compilation check
cargo clippy # Code linting
# Test
cargo test # Run tests
cargo test -- --ignored # Run ignored tests
cargo nextest run # Use nextest
# Format
cargo fmt # Format code
cargo fmt --all --check # Check formatting
# Documentation
cargo doc --open # Generate and open docs

# CRD operations
kubectl get crd # List all CRDs
kubectl get tenant # List all Tenants
kubectl describe tenant <name> # View Tenant details
# Resource operations
kubectl get pods -l rustfs.tenant=<name>
kubectl get statefulset -l rustfs.tenant=<name>
kubectl get svc -l rustfs.tenant=<name>
# Logs
kubectl logs -f <pod-name>
kubectl logs -f -l rustfs.tenant=<name>
# Events
kubectl get events --sort-by='.lastTimestamp'

# Cluster management
kind create cluster --name <name> # Create cluster
kind delete cluster --name <name> # Delete cluster
kind get clusters # List clusters
kind get nodes --name <name> # List nodes
# Image management
kind load docker-image <image> --name <cluster> # Load image

- View CONTRIBUTING.md for contribution guidelines
- View DEVELOPMENT-NOTES.md for development notes
- View architecture-decisions.md for architecture decisions
- View ../examples/ for usage examples
Happy coding!