27 specialized AI agents for game development, AI consulting, avatar creation, and business outreach — running privately on your own hardware via Picoclaw and Qwen3.5 4B.
- README.md — setup and installation
- WORKFLOW.md — how to brief agents and run a game project
- AGENTS.md — quick slash command reference
A complete Picoclaw skill pack that turns your private hardware into a full AI game studio. Core workflows run entirely locally — no cloud subscription required for day-to-day use. Optional paid APIs (Nano Banana 2, Meshy) are gated behind a human approval step so you stay in control of spend.
Works on:
- 🍊 Orange Pi 6 Plus (ARM64) — the original YetiClaw setup
- 🍎 macOS (Apple Silicon or Intel)
- 🪟 Windows 10/11 (x64) — via Ollama + OpenClaw
- 🐧 Any Linux x86_64 or ARM64 machine
The producer is the entry point for any new game project. Give it your full creative vision — story, art style, platform, mechanics, everything — and it writes a structured brief directly, then offers to expand any section with a specialist.
Just tell the producer what you want. No need to simplify or break it into steps:
/producer write a brief for a Pac-Man style arcade game for VIVERSE.
Browser-based, mobile and desktop controls. Classic maze gameplay with modern
twists, power-ups, and online leaderboards. Pixel art style, all ages.
The producer reads your full vision and writes the brief immediately — no spawning, no waiting. You get back a structured document with game name, concept, mechanics, visual style, platform, timeline, and budget.
save
Saves to projects/[slug]/brief.md in workspace and syncs to Google Drive.
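The slug is derived from the game name. A minimal sketch of one plausible slugification rule (illustrative only; the producer skill's actual rule may differ):

```shell
# Illustrative sketch: one plausible way a game name could become the
# [slug] in projects/[slug]/brief.md. Not the skill's actual code.
slugify() {
  # lowercase, collapse runs of non-alphanumerics to a hyphen, trim edges
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' |
    sed -e 's/[^a-z0-9]\{1,\}/-/g' -e 's/^-*//' -e 's/-*$//'
}

slugify "Pac-Man Style Arcade Game!"   # → pac-man-style-arcade-game
```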
Once the brief exists, expand any section one at a time:
expand mechanics
expand narrative
expand three.js
expand levels
expand art
Each expansion spawns ONE specialist agent focused on that section only.
With the brief saved, any agent can read it directly:
/threejs-dev scaffold the project structure for the Pac-Man game
/game-designer design the power-up system
/narrative-director write the game's story and character bios
| Command | Role |
|---|---|
| /creative-director | Creative vision, tone, aesthetic, MDA review |
| /technical-director | Unity/Three.js architecture decisions (code-gen) |
| /producer | Game brief creation + sprint planning, milestones, blockers |
| Command | Role |
|---|---|
| /game-designer | Mechanics, GDDs, MDA framework, Bartle player types |
| /level-designer | Level layouts, flow, pacing, encounters |
| /systems-designer | Economy, progression, data-driven systems |
| Command | Role |
|---|---|
| /gameplay-programmer | Unity C# player mechanics, movement, combat |
| /engine-programmer | Unity C# core: save systems, scene management, Addressables |
| /ai-programmer | Unity C# enemy AI, behavior trees, NavMesh |
| /ui-programmer | Unity UI Toolkit, HUD, menus, accessibility |
| /unity-specialist | Package conflicts, editor scripting, render pipeline |
| Command | Role |
|---|---|
| /art-director | Visual style guide, art briefs, concept generation (Nano Banana 2) |
| /sound-designer | Audio identity, FMOD events, adaptive music |
| /technical-artist | Shaders, VFX Graph, asset import pipeline (code-gen) |
| Command | Role |
|---|---|
| /narrative-director | Story structure, character arcs, narrative design |
| /writer | Dialogue, UI copy, item descriptions, voice guide |
| /world-builder | Lore, factions, geography, world bible |
| /qa-tester | Test plans, bug reports (Critical / High / Medium / Low) |
| Command | Role |
|---|---|
| /threejs-dev | TypeScript Three.js games for VIVERSE (WebGL/WebXR) |
| Command | Role |
|---|---|
| /email-writer | Outreach for game dev · AI integration · AI app dev |
| /ai-consultant | AI strategy and consulting for studios and SMBs |
| Command | Role |
|---|---|
| /asset-approver | Budget gate — shows cost, waits for /approve or /deny |
| /meshy | Meshy.ai text-to-3D and image-to-3D (GLB output) |
/threejs-dev or /art-director
→ produces Asset Brief
→ /asset-approver [brief]
→ shows cost estimate in Telegram
→ you reply /approve [name] or /deny [name]
✓ approve → Nano Banana 2 or /meshy executes
✗ deny → placeholder used, no spend
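The gate's decision logic can be sketched as a tiny function. This is illustrative only: the real skill drives the decision through Telegram messages, and the function and message texts here are made up.

```shell
# Illustrative sketch of the /asset-approver decision gate.
# The actual skill runs this flow over Telegram; names here are invented.
asset_gate() {
  name="$1" decision="$2"
  case "$decision" in
    approve) echo "approved: $name, sending to paid API (Nano Banana 2 / Meshy)" ;;
    deny)    echo "denied: $name, using placeholder, no spend" ;;
    *)       echo "pending: $name, waiting for /approve $name or /deny $name" ;;
  esac
}

asset_gate "ghost-sprite" deny   # → denied: ghost-sprite, using placeholder, no spend
```

The key property is the default branch: with no explicit decision, nothing paid ever executes.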
Nano Banana 2 (gemini-3.1-flash-image-preview) handles image generation and understanding.
Meshy — text-to-3D and image-to-3D, GLB output saved to Google Drive.
- A Telegram bot token
- A Gemini API key (for Nano Banana 2)
- A Meshy API key (for 3D generation, Pro plan required)
- rclone configured with a `gdrive` remote
- Node.js 20+ (for Google Drive and Gmail MCP servers)
- Ubuntu 24.04, 16GB RAM minimum
- Custom Picoclaw binary with 600s timeout fix (see below)
- Ollama — serves models locally
- Apple Silicon recommended
- Ollama for Windows — `OllamaSetup.exe`, no admin rights required
- NVIDIA GPU recommended (8GB+ VRAM) or modern CPU with AVX2
Note: The base installer (llama.cpp, model download, Picoclaw, systemd services) is not included in this repo. Contact me for installation services.
```
tar -xzf yeticlaw-studio.tar.gz
scp -r yeticlaw-studio/ orangepi@[your_ip]:/tmp/
ssh orangepi@[your_ip]
sudo bash /tmp/yeticlaw-studio/deploy.sh
```

```
nano ~/.picoclaw/.security.yml
```

```yaml
channels:
  telegram:
    token: YOUR_TELEGRAM_BOT_TOKEN
model_list:
  nano-banana-2:
    api_key: YOUR_GEMINI_API_KEY
skills: {}
web: {}
```

```
chmod 600 ~/.picoclaw/.security.yml
```

Also set `channels.telegram.token` in `config.json` for compatibility.
```
sudo nano /etc/environment
# Add: MESHY_API_KEY=your_meshy_key_here
sudo systemctl restart yeticlaw-gateway
```

```
su - orangepi
rclone config   # n → gdrive → drive → follow OAuth
npx @piotr-agier/google-drive-mcp auth
npx @gongrzhe/server-gmail-autoauth-mcp auth
```

```
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Install other tools
brew install rclone jq node

# Launch OpenClaw with gemma4:e4b
ollama launch openclaw --model gemma4:e4b --yes

# Deploy agents
tar -xzf yeticlaw-studio.tar.gz && cd yeticlaw-studio
bash deploy-mac.sh
```

Download and run `OllamaSetup.exe` from ollama.com/download/windows.
No administrator rights required. Or use PowerShell:
```
irm https://ollama.com/install.ps1 | iex
```

Ollama runs in the background automatically and serves on http://localhost:11434.
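Before launching OpenClaw you can confirm the server is actually up. A sketch using Ollama's version endpoint (`/api/version` is part of Ollama's standard REST API):

```shell
# Sketch: check that Ollama is listening before launching OpenClaw.
# curl exits nonzero if nothing answers on the default port.
curl -fsS http://localhost:11434/api/version 2>/dev/null \
  || echo "Ollama is not running, start it and retry"
```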
```
ollama launch openclaw --model gemma4:e4b --yes
```

Ollama will download gemma4:e4b (~9.6GB) on first run and launch the OpenClaw setup wizard to configure your Telegram bot token.
Open PowerShell in the yeticlaw-studio folder:
```powershell
# Create workspace folders
$workspace = "$env:USERPROFILE\.openclaw\workspace\skills"
New-Item -ItemType Directory -Force -Path $workspace

# Copy SOUL.md + AGENTS.md
Copy-Item SOUL.md "$env:USERPROFILE\.openclaw\workspace\SOUL.md"
Copy-Item AGENTS.md "$env:USERPROFILE\.openclaw\workspace\AGENTS.md"

# Install all skills from skills-mac/ (same skills, works on Windows)
Get-ChildItem skills-mac -Directory | ForEach-Object {
    $dst = "$workspace\$($_.Name)"
    New-Item -ItemType Directory -Force -Path $dst | Out-Null
    Copy-Item "$($_.FullName)\SKILL.md" "$dst\SKILL.md"
    Write-Host "  ✓ $($_.Name)"
}
Write-Host "Done — $((Get-ChildItem $workspace -Directory).Count) skills installed"
```

Edit the OpenClaw config (created during `ollama launch openclaw` setup):
%USERPROFILE%\.openclaw\openclaw.json
Set your Telegram bot token and Gemini API key (for Nano Banana 2 / image generation).
```
winget install Rclone.Rclone
rclone config   # n → gdrive → drive → follow OAuth
```

- GPU: NVIDIA cards use CUDA automatically. AMD cards use DirectML. CPU-only works but is slower.
- Model storage: Ollama stores models in `%USERPROFILE%\.ollama\models` by default. Set the `OLLAMA_MODELS` env var to change this.
- Service install: For always-on use, install as a Windows service with NSSM: `nssm install ollama "C:\Users\you\AppData\Local\Programs\Ollama\ollama.exe" serve`
- No `deploy.sh`: The bash deploy script won't run on Windows. Use the PowerShell commands above instead.
The Orange Pi setup requires a custom-built Picoclaw binary with the HTTP timeout increased from 120s to 600s. The official binary cancels responses before the model finishes generating on slow hardware.
Build it on your Mac:
```
git clone --depth=1 https://github.com/sipeed/picoclaw.git
cd picoclaw
sed -i '' 's/const DefaultRequestTimeout = 120 \* time.Second/const DefaultRequestTimeout = 600 * time.Second/' pkg/providers/common/common.go
GOOS=linux GOARCH=arm64 go build -tags "goolm,stdjson" -ldflags "-s -w" -o picoclaw-linux-arm64 ./cmd/picoclaw
scp picoclaw-linux-arm64 orangepi@[your_ip]:/tmp/
ssh orangepi@[your_ip]
sudo cp /tmp/picoclaw-linux-arm64 /usr/local/bin/picoclaw && sudo chmod +x /usr/local/bin/picoclaw
```

On macOS with Ollama, the official binary works fine.
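It is worth confirming the sed substitution actually landed before spending time on the cross-compile. A sketch of the check, demonstrated on a stand-in copy of the line (the real `pkg/providers/common/common.go` only exists inside the clone):

```shell
# Sketch: verify the timeout rewrite on a stand-in copy of the line
# from pkg/providers/common/common.go (the real file is in the clone).
line='const DefaultRequestTimeout = 120 * time.Second'
patched=$(printf '%s\n' "$line" | sed 's/120 \* time.Second/600 * time.Second/')
echo "$patched"   # → const DefaultRequestTimeout = 600 * time.Second
```

In the repo itself, the same idea is a one-liner: grep for `600 * time.Second` in the file after running sed, and rebuild only if it matches.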
Gateway exits silently after loading
Check .security.yml exists with correct token. Verify config.json has "version": 1 (integer) and channels.telegram.enabled: true. Clean corrupt cron store:
```
echo '{"version":1,"jobs":[]}' > /opt/yeticlaw/openclaw/workspace/cron/jobs.json
sudo systemctl restart yeticlaw-gateway
```

Responses timing out (Orange Pi)
Install the custom binary with the 600s timeout — see section above.

Skill not loading / description too long
Skill descriptions must be under 1024 characters. Check the frontmatter:

```
head -4 /opt/yeticlaw/openclaw/workspace/skills/[skill-name]/SKILL.md
```

llama-server SEGV on startup
Binary is corrupted. Rebuild from source:
```
sudo rm -rf /opt/llama.cpp
sudo git clone --depth=1 https://github.com/ggml-org/llama.cpp /opt/llama.cpp
sudo cmake -B /opt/llama.cpp/build /opt/llama.cpp -DGGML_NATIVE=OFF -DGGML_CPU_ARM_ARCH="armv9-a+sve+i8mm+dotprod" -DGGML_CPU_KLEIDIAI=ON -DCMAKE_BUILD_TYPE=Release
sudo cmake --build /opt/llama.cpp/build --config Release -j$(nproc)
sudo install -m 755 /opt/llama.cpp/build/bin/llama-server /usr/local/bin/
sudo systemctl start llama-server
```

rclone not syncing

```
rclone ls gdrive:YetiClaw
rclone config reconnect gdrive:
```

| Feature | Orange Pi 6 Plus | macOS | Windows |
|---|---|---|---|
| Model server | llama.cpp (KleidiAI optimized) | Ollama | Ollama |
| Model | Qwen3.5 4B | gemma4:e4b | gemma4:e4b |
| Speed | ~12 tok/s gen | ~57 tok/s (M-series) | varies (GPU recommended) |
| Picoclaw binary | Custom build (600s timeout) | Official release | Official release |
| Gateway service | systemd | LaunchAgent or foreground | Foreground or NSSM service |
| Deploy script | deploy.sh (sudo required) | deploy-mac.sh (no sudo) | PowerShell (manual) |
| Workspace path | /opt/yeticlaw/openclaw/workspace | ~/.openclaw/workspace | %USERPROFILE%\.openclaw\workspace |
yeticlaw-studio/
├── README.md
├── AGENTS.md ← Quick reference for all slash commands
├── SOUL.md ← Workspace routing rules (loaded every message)
├── config.json ← Picoclaw config
├── deploy.sh ← Orange Pi deploy script
├── deploy-mac.sh ← macOS deploy script
└── skills/
├── producer/ ← Brief creation + sprint planning
├── creative-director/
├── technical-director/
├── game-designer/
├── level-designer/
├── systems-designer/
├── gameplay-programmer/
├── engine-programmer/
├── ai-programmer/
├── ui-programmer/
├── unity-specialist/
├── art-director/
├── sound-designer/
├── technical-artist/
├── narrative-director/
├── writer/
├── world-builder/
├── qa-tester/
├── threejs-dev/
├── ai-consultant/
├── email-writer/
├── asset-approver/
├── meshy/
├── game-namer/ ← Atomic: generates game name options
├── concept-writer/ ← Atomic: writes 2-sentence concept
├── mechanics-designer/ ← Atomic: lists 5 core mechanics
└── style-writer/ ← Atomic: describes visual style
MIT — see LICENSE
Skills are compatible with both Picoclaw and OpenClaw SKILL.md format.
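A minimal frontmatter sketch for a skill, assuming the usual `name`/`description` YAML fields of the SKILL.md format (field names are an assumption here; keep the description under the 1024-character limit noted in troubleshooting):

```markdown
---
name: game-namer
description: Atomic skill. Generates five candidate names for a game from a one-line concept.
---

# game-namer

Instructions the model follows when /game-namer is invoked.
```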
On a Mac Mini with Unity Editor installed, you can connect Picoclaw directly to Unity via MCP. This enables agents to write C# scripts directly into your Unity project, trigger compilation, read console errors, and fix them automatically.
- Install Unity 6+ with the MCP package from the Package Manager
- Enable the Unity MCP server in `~/.picoclaw/config.json`:

```json
"unity": {
  "enabled": true,
  "command": "npx",
  "args": ["-y", "@unity/mcp-server"]
}
```

- Open your Unity project before starting the gateway
/gameplay-programmer implement the ghost AI behaviour
→ writes GhostAI.cs directly into Unity via MCP
→ triggers compile
→ reads console errors
→ fixes errors automatically
→ "✅ GhostAI.cs compiled successfully, no errors"
No copy-paste required. The agent writes, compiles, and debugs in a live Unity session.
| Directory | Target | Model | Writing style |
|---|---|---|---|
| skills/ | Orange Pi | Qwen3.5 4B | Chunked — one file at a time |
| skills-mac/ | Mac / Windows | gemma4:e4b | Full context — complete files in one shot |
The Pi deploy (deploy.sh) uses skills/.
The Mac deploy (deploy-mac.sh) uses skills-mac/.