Tools that help you decide what to build — and often, what not to build.
Many people extending Claude Code ask: "What Skill should I add?" or "How do I write a Skill?"
After many rounds of iteration and exploration, I've found that the better question is often: "Do I actually need one?"
Metaskills are skills about skills and thinking: tools that help you build other tools, where the outcome is often a decision not to build one.
The value isn't in producing more Claude Code extensions. It's in helping you figure out:
- Do I actually need a Skill, or would a command work?
- Do I need anything at all, or is conversation and the context I provide enough?
- What shape does my knowledge take and how can I best share it with Claude?
One goal in sharing this is to highlight what has worked for me, and how you can build this implicit knowledge yourself. Claude Opus 4.5 is incredible at determining what is actually needed, and what knowledge or context is missing.
In talking with friends and colleagues, I've found that people who want to extend Claude Code often face two problems:
- They don't know the format — What's a SKILL.md? What's the frontmatter? What's the format? Where do I put this?
- They don't know the decisions — Should this be a skill or a command? What structure fits my knowledge?
Template-based tools solve problem #1. Metaskills help solve problem #2.
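To take the format question mostly off the table: a Skill is a folder containing a SKILL.md with YAML frontmatter, where `name` and `description` tell Claude when to reach for it. A minimal sketch (the body here is invented for illustration):

```markdown
---
name: my-expertise
description: What this knowledge covers and when Claude should apply it
---

# My Expertise

Instructions Claude follows when this skill is invoked...
```

The format is the easy part. The rest of this repo is about the decisions.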
You think: "I want Claude to automatically apply my code review standards."
You run /skill-builder. It asks questions:
- "Walk me through how you actually do a review..."
- "What do you look for first? What's the mental process?"
- "Does it depend on the type of code, or is it the same every time?"
Through the conversation, you realize:
- Your review process is the same every time (not contextual)
- You always run it explicitly, not automatically
- You want to trigger it, not have Claude apply it silently
Recommendation: A /code-review command, not a Skill.
The interview surfaced what you actually needed — which was different from what you asked for.
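In Claude Code, a custom slash command is just a Markdown file in ~/.claude/commands/ whose body becomes the prompt. A sketch of what the generated command might look like (the checklist is invented for illustration; $ARGUMENTS is the placeholder Claude Code fills with whatever you type after the command):

```markdown
Review $ARGUMENTS against my standards:

1. Correctness first: edge cases, error handling, failure modes
2. Then readability: naming, function size, comment accuracy
3. Then tests: are they present and updated?

Group findings by severity, with file and line references.
```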
After several runs with Claude Opus, /skill-builder almost always recommends a slash command over a full skill.
Most expertise fits better as a focused, reusable interview or structured output — exactly what commands excel at. Full skills shine only when you need persistent state or complex multi-turn behavior across sessions.
Across all our testing, the only case where a full Skill was the right answer was an adaptive Zork player: Claude plays, records learnings, and applies those learnings in future games. That needs real state persistence (save files, accumulated learnings, Obsidian sync), which pushed it into Skill territory.
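As a sketch of why state matters, a skill like that might be laid out as below (these file names are hypothetical, not the actual player's):

```
zork-player/
├── SKILL.md        # how to play and when to consult learnings
├── learnings.md    # lessons accumulated across past games
└── saves/          # game state carried between sessions
```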
The metaskill works — it just prefers simplicity where possible.
This repo codifies patterns from real collaboration: the implicit knowledge, the questions drawn from months of work across a spectrum of projects, and the kinds of decisions and context that shape good outcomes.
We built these by:
- Working together on real projects over many sessions
- Capturing transcripts and archives of what worked
- Identifying patterns in our most productive collaborations
- Packaging those patterns as reusable commands
This is a living system. The tools evolve as our collaboration deepens. What you see here is a snapshot of patterns we've found valuable, not a finished methodology.
Collaborative discovery for new projects. This is how we start things — figuring out what we're building before jumping to implementation.
```
/explore
   │
   ▼
"What are we building?"
"Is there reference code to look at?"
"What's the riskiest assumption?"
   │
   ▼
Shared understanding emerges
   │
   ├── Ready to document? → /create-prd
   └── Ready to build? → Just start
```
Light structure, conversational flow, natural next steps.
Interviews you about your expertise, identifies patterns, recommends skill vs command, and generates the artifact.
```
You → Run /skill-builder
    → Answer questions about your expertise
    → Get a generated skill or command
```
Features:
- Conversational interview flow (not form-filling)
- Refinement branch — collaborate on improvements before packaging
- Skill vs command recommendation based on your knowledge shape
- Detection-based destination (respects your dotfiles workflow if you have one. If not, I suggest this approach!)
Interview-driven PRD generation with fidelity detection. Adapts depth based on how much you already know.
Generated BY /skill-builder — an example of the system creating its own tools.
```
/explore (or just start talking)
   │
   ▼
Collaborative discovery
(read reference code, discuss approaches)
   │
   ▼
"Ready to document this?"
   │
   ▼
/create-prd
   │
   ▼
Living PRD captures decisions
   │
   ▼
Build, learn, context shifts
   │
   ▼
/explore again ←───────────────────┐
   │                               │
   ▼                               │
PRD evolves with new understanding─┘
   │
   ├──→ Need automation? → /hook-builder (coming)
   └──→ Need to package expertise? → /skill-builder
```
This is cyclical, not linear. Running /explore on a project with an existing PRD still surfaces tacit knowledge — the "why" behind the "what." PRDs are snapshots of current understanding, not finished specs.
Human judgment between each step is the methodology. We provide structured entry points, not an automated pipeline.
```bash
# Clone the repo
git clone https://github.com/jonathanprozzi/claude-metaskills.git
cd claude-metaskills

# Symlink commands you want
ln -s $(pwd)/commands/explore.md ~/.claude/commands/explore.md
ln -s $(pwd)/commands/skill-builder.md ~/.claude/commands/skill-builder.md
ln -s $(pwd)/commands/create-prd.md ~/.claude/commands/create-prd.md
```

Or copy files directly to ~/.claude/commands/.
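If you want all of them, a small loop works too (a sketch that assumes the repo's commands/ layout above):

```bash
# Symlink every command in one pass
mkdir -p ~/.claude/commands
for f in commands/*.md; do
  ln -s "$(pwd)/$f" ~/.claude/commands/"$(basename "$f")"
done
```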
This isn't a collection of standalone prompts. It's patterns extracted from real collaboration:
The process:
- Work on real projects together (not demos)
- Capture session transcripts and archives
- Review what worked — What questions led to clarity? What flows felt natural?
- Package patterns as commands
- Test by using them ourselves
- Iterate based on friction
Example: The /explore command was built by reviewing a transcript where we designed a new project. We noticed the pattern: "What are we building?" → read reference code → clarify assumptions → plan emerges. That became the command structure.
The archives are the source material. Our daily session transcripts, checkpoint exports, and handoff notes — that's where the methodology lives. The commands are just the interface.
Not everyone needs the same level of structure:
```
Level 0: Ad-hoc prompting
  "Interview me about X"
  → Zero friction, works if you have implicit knowledge

Level 1: /explore
  → Light structure, surfaces good questions
  → Entry point for most users

Level 2: /create-prd + /skill-builder
  → Specific outputs with decision logic
  → When you know what you want to produce

Level 3: Full methodology
  → Living docs, lenses, cross-session context
  → Deep collaboration patterns
```
Enter at whatever level fits. The commands aren't better than ad-hoc prompting; they help surface your knowledge and provide a process for building that implicit knowledge yourself.
See detailed docs:
- docs/methodology.md — collaboration philosophy
- docs/skills-vs-commands.md — when to use what
Core principles (inspired by Ethan Mollick's centaur/cyborg framing):
- Generative over extractive — The conversation creates new understanding, not just captures existing knowledge
- Conversational over form-filling — Interview, don't interrogate
- Decisions over format — Help people figure out what to build
- Refinement over pure capture — Collaborate, don't just record
- Cyclical over linear — Return to /explore as understanding deepens
Shipped:
- /explore — Collaborative discovery entry point
- /skill-builder — Interview → generate skills or commands
- /create-prd — Interview → structured PRD (generated by skill-builder)
In Progress (dogfooding in dotfiles):
- /hook-builder — Interview → generate hooks + config
The system builds itself: /create-prd was generated by /skill-builder. Future commands will be built the same way.
The capture → curation model works well for teams:
```
Domain expert runs /skill-builder
   │
   ▼
Answers interview questions
   │
   ▼
Chooses "Show me the raw files"
   │
   ▼
Technical teammate reviews + adds to team repo
```
Metaskills handle capture. A technical curator handles distribution. This keeps the process accessible while maintaining quality.
vs. Anthropic's official skills: They provide skills. We provide tools to build skills.
vs. Claude Code Skill Factory: They provide templates (format). We provide interviews (decisions).
vs. ad-hoc prompting: Same spirit, but with implicit knowledge baked in. You don't need to know which questions to ask — the commands surface them.
MIT
A living system by @jonathanprozzi and Claude
Patterns discovered through collaboration, packaged for others to build on.