An AI-powered Claude skill for creating, rigging, and refining Spine 2D skeletal animations — from raw assets to interactive previews.
▶ Live Animation Preview · 🎛 Interactive Part Editor
Genielabs is at the forefront of AI-powered 2D animation — using machine learning to automate character rigging, motion generation, and asset positioning for game production.
This repository is a direct product of that research: an open-source skill that brings industrial-grade animation automation into Claude.
👉 genielabs.tech 👈
Reach out — we'd love to hear from you.
Spine Animation AI is a Claude agent skill that lets AI handle the tedious parts of Spine 2D animation production:
- Auto-position body parts from reference images using SIFT + RANSAC
- Build skeleton JSON with proper bone hierarchy and draw order
- Generate animations (idle, walk, run, attack, wave, jump) using the 12 principles of animation
- Produce interactive HTML5 previews using the official Spine Web Player
- Correct and refine existing skeletons with AI-assisted offset adjustments
Think of it as a Spine rigging co-pilot. You provide the art assets; Claude does the math.
The easiest way — works with any Claude account, no extra software needed.
- Go to claude.ai → create a new Project
- Open Project Knowledge → Add content
- Copy the full contents of `SKILL.md` and paste it in
- Save — Claude now has the full skill baked in, including all scripts embedded inline
- Upload your character assets and describe what you want:
"I have separated body part PNGs for my character.
Create idle and walk animations."
Claude will write the scripts to disk, run the pipeline, and deliver skeleton.json + preview.html.
Why it works:
`SKILL.md` is auto-generated with all 4 Python scripts embedded inside it — Claude extracts and runs them without needing to clone the repo.
For Claude environments that support mounted skill directories:
```
git clone https://github.com/GenielabsOpenSource/spine-animation-ai.git /mnt/skills/user/spine-animation
```
Claude will automatically discover and load the skill on next session start.
Step 0: Split a Full Character Into Parts (New!)
If you only have a single character image (not separated body parts), split_character.py
uses Google Gemini to generate a deconstructed sprite atlas and then segments it into
individual transparent PNGs via OpenCV connected-components analysis.
Prerequisites:
- `GEMINI_API_KEY` environment variable — get a free key at https://aistudio.google.com/app/apikey
- `pip install google-generativeai`
```
GEMINI_API_KEY=your_key python3 scripts/split_character.py character.png \
  --output-dir parts/
```
Skip this step if you already have separated body-part PNGs.
1. Auto-position parts from a reference image
```
python3 scripts/position_parts.py \
  --reference assembled_character.png \
  --parts parts/ \
  --output layout.json \
  --debug debug/
```
2. Build the Spine JSON
```
python3 scripts/build_spine_json.py \
  --config layout.json \
  --output skeleton.json
```
3. Pack a texture atlas
```
python3 scripts/make_atlas.py \
  --parts parts/ \
  --output atlas/ \
  --name skeleton
```
4. Generate a self-contained HTML preview
```
python3 scripts/generate_spine_player.py \
  --skeleton skeleton.json \
  --atlas skeleton.atlas \
  --atlas-image skeleton.png \
  --output preview.html
```
Open preview.html in any browser — no server needed.
When Claude analyzes a skeleton and recommends corrections, it outputs adjustments in this format:
```json
{
  "adjustments": {
    "right-arm": {
      "original_offset": { "x": -1.5, "y": 0 },
      "user_offset": { "dx": -29.4, "dy": -84.1, "drot": 0 },
      "final_offset": { "x": -30.9, "y": -84.1 }
    },
    "head": {
      "original_offset": { "x": 3, "y": 20.5 },
      "user_offset": { "dx": -18.3, "dy": -2, "drot": 0 },
      "final_offset": { "x": -15.3, "y": 18.5 }
    }
  },
  "draw_order": [
    "right-arm", "left-leg", "right-thigh", "right-leg",
    "left-thigh", "waist", "left-hand", "torso", "hat", "head"
  ]
}
```
Each entry tracks the original offset, the AI-suggested correction delta, and the applied final value. This makes adjustments reviewable, revertible, and composable.
See docs/adjustment-format.md for full specification.
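In plain terms, `final_offset` is the component-wise sum of `original_offset` and the correction delta. A minimal sketch of that composition rule (the helper name is invented for illustration, not part of the repo's scripts):

```python
def apply_adjustment(original, delta):
    """Compose an original offset with a correction delta.

    `original` is {"x": ..., "y": ...}; `delta` is {"dx": ..., "dy": ..., "drot": ...}.
    Rotation deltas are carried separately in the skeleton data, so only the
    translation components are summed here.
    """
    return {
        "x": round(original["x"] + delta["dx"], 1),
        "y": round(original["y"] + delta["dy"], 1),
    }

# The right-arm entry from the example above:
final = apply_adjustment({"x": -1.5, "y": 0}, {"dx": -29.4, "dy": -84.1, "drot": 0})
# final == {"x": -30.9, "y": -84.1}
```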
The examples/sombrero/ directory contains a complete, working example:
| File | Description |
|---|---|
| `sombrero.json` | Full Spine skeleton with idle animation |
| `sombrero.atlas` | Texture atlas metadata |
| `sombrero.png` | Packed atlas spritesheet (10 parts) |
| `skeleton.json` | Skeleton-only version (no animation) |
The sombrero character has 10 body parts, 16 bones, and a 2-second idle loop with:
- Hip breathing bob
- Torso sway
- Head gentle rotation
- Hat follow-through
- Arm natural drift
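These channels are sine waves whose period matches the loop length, so the first and last keyframes coincide and the loop is seamless. An illustrative sketch of how such a channel can be sampled (function name, amplitude, and phase values are invented for the example, not taken from the repo's generator):

```python
import math

def idle_bob(t, duration=2.0, amplitude=4.0, phase=0.0):
    """Sample a looping sine bob at time t (seconds).

    One full cycle per loop makes the value at t=0 equal the value at
    t=duration, so the animation loops without a pop. `phase` staggers
    parts (hip vs. torso vs. head) so their motion overlaps instead of
    moving in lockstep.
    """
    return amplitude * math.sin(2 * math.pi * t / duration + phase)

# Sampling keyframes at 0.25 s steps over the 2-second loop:
keys = [round(idle_bob(i * 0.25), 2) for i in range(9)]
```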
Open demo/sombrero_idle.html to see it in action.
Open demo/sombrero_editor.html to interactively adjust part positions and export layout JSON.
spine-animation-ai/
├── SKILL.md ← Auto-generated (don't edit directly)
├── SKILL.template.md ← Human-editable source (edit this)
├── build_skill.py ← Builds SKILL.md from template + scripts
├── scripts/
│ ├── position_parts.py ← SIFT+RANSAC auto-positioning
│ ├── build_spine_json.py ← Spine JSON builder
│ ├── make_atlas.py ← Texture atlas packer
│ └── generate_spine_player.py ← HTML preview generator
├── .github/workflows/
│ └── build-skill.yml ← Auto-rebuilds SKILL.md on push
├── references/
│ └── spine-json-spec.md ← Spine format reference
├── examples/
│ └── sombrero/ ← Full working example
│ ├── sombrero.json
│ ├── sombrero.atlas
│ └── sombrero.png
├── demo/
│ ├── sombrero_idle.html ← Idle animation preview
│ ├── sombrero_editor.html ← Interactive part editor
│ └── spine_animation_preview.html
└── docs/
├── getting-started.md
├── adjustment-format.md
└── claude-prompting-guide.md
The position_parts.py script uses a two-phase algorithm:
- Extract SIFT keypoints from each body part (alpha-masked to visible pixels)
- Extract keypoints from the assembled reference image
- Match with FLANN matcher + Lowe's ratio test
- Estimate a similarity transform (translate + scale + rotate) via RANSAC
- Accept if 4+ inlier matches are found; fall back otherwise
This is more robust than full homography for stylized game art with sparse features.
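The transform-estimation step can be sketched in pure NumPy. This is an illustrative stand-in, not the repo's actual code (which builds on OpenCV's SIFT/FLANN machinery): fit a similarity transform (translate + scale + rotate) to matched keypoint pairs, run a small RANSAC loop, and return `None` when fewer than four inliers agree, triggering the fallback.

```python
import numpy as np

def estimate_similarity(src, dst):
    """Least-squares similarity transform mapping src -> dst:
    x' = a*x - b*y + tx,  y' = b*x + a*y + ty  (a, b encode scale+rotation)."""
    A, rhs = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, -y, 1, 0]); rhs.append(xp)
        A.append([y,  x, 0, 1]); rhs.append(yp)
    params, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(rhs, float), rcond=None)
    return params  # a, b, tx, ty

def ransac_similarity(src, dst, iters=200, thresh=3.0, min_inliers=4, seed=0):
    """RANSAC loop: fit on 2 random correspondences, keep the model with
    the most inliers, refit on them; None means 'fall back'."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_mask = None
    for _ in range(iters):
        idx = rng.choice(len(src), 2, replace=False)
        a, b, tx, ty = estimate_similarity(src[idx], dst[idx])
        pred = np.column_stack([a * src[:, 0] - b * src[:, 1] + tx,
                                b * src[:, 0] + a * src[:, 1] + ty])
        mask = np.linalg.norm(pred - dst, axis=1) < thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask = mask
    if best_mask is None or best_mask.sum() < min_inliers:
        return None
    return estimate_similarity(src[best_mask], dst[best_mask])
```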
For every overlapping pair of positioned parts:
- Sample pixels in the overlap region
- Compare each pixel to the reference image
- The part whose color is closer to the reference is "on top"
- Build a directed occlusion graph → topological sort → draw order
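The last step, turning pairwise "A covers B" decisions into a global draw order, is a standard topological sort (Kahn's algorithm). A self-contained sketch, with the function name and edge encoding invented for illustration:

```python
from collections import defaultdict, deque

def draw_order(parts, occludes):
    """Topologically sort parts given `occludes` edges (top, bottom),
    meaning `top` covers `bottom` in their overlap, so `bottom` must be
    drawn first. Returns a back-to-front draw order; raises on a cycle
    (i.e. inconsistent pairwise occlusion evidence)."""
    above = defaultdict(list)          # bottom -> parts drawn after it
    indegree = {p: 0 for p in parts}   # how many parts each one covers
    for top, bottom in occludes:
        above[bottom].append(top)
        indegree[top] += 1
    queue = deque(p for p in parts if indegree[p] == 0)
    order = []
    while queue:
        p = queue.popleft()
        order.append(p)
        for q in above[p]:
            indegree[q] -= 1
            if indegree[q] == 0:
                queue.append(q)
    if len(order) != len(parts):
        raise ValueError("cycle in occlusion graph")
    return order
```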
Parts too small for SIFT (accessories, tiny objects) fall back to
alpha-masked TM_CCORR_NORMED at multiple scales with background penalty scoring.
- Python 3.9+
- opencv-python >= 4.8
- Pillow >= 10.0
- numpy >= 1.24
Install dependencies:
```
pip install opencv-python Pillow numpy
```
The best prompts for this skill are specific about assets and intent:
✅ Good:
"I have separated body part PNGs for a robot character (head.png, torso.png,
left-arm.png, right-arm.png, left-leg.png, right-leg.png) and a reference
image showing the assembled character. Create idle and walk animations."
✅ Also good:
"Here's my Spine JSON. The right arm is positioned wrong — it should be
lower and more to the left. Also add a wave animation."
❌ Too vague:
"Animate this character"
See docs/claude-prompting-guide.md for more examples.
| Preset | Duration | Technique |
|---|---|---|
| `idle` | 2.0s loop | Overlapping sine waves on hip/torso/head |
| `walk` | 0.8s loop | Opposing arm-leg swing with hip bob |
| `run` | 0.5s loop | Exaggerated walk + forward lean + bounce |
| `wave` | 1.2s | Raise arm, oscillate forearm |
| `jump` | 1.0s | Anticipation squat → launch → land |
| `attack` | 0.6s | Windup → strike → follow-through |
All presets use bezier easing ([0.25, 0, 0.75, 1]) following the
12 principles of animation.
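A `[0.25, 0, 0.75, 1]` curve is a CSS-style cubic bezier easing: the four numbers are the two inner control points `(x1, y1)` and `(x2, y2)`, with the endpoints fixed at (0, 0) and (1, 1). Evaluating it means solving the x-polynomial for the curve parameter t, then reading off y. An illustrative sketch using a bisection solver (the function name is hypothetical, and the real Spine runtime evaluates its curves internally):

```python
def cubic_bezier_ease(x, x1=0.25, y1=0.0, x2=0.75, y2=1.0, eps=1e-6):
    """Evaluate a CSS-style cubic bezier easing curve at input progress x
    in [0, 1]. Defaults match the [0.25, 0, 0.75, 1] preset."""
    def bez(t, p1, p2):
        # Cubic bezier component with endpoint values 0 and 1.
        return 3 * (1 - t) ** 2 * t * p1 + 3 * (1 - t) * t ** 2 * p2 + t ** 3
    # bez(t, x1, x2) is monotonic in t for valid easing curves, so bisect.
    lo, hi = 0.0, 1.0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if bez(mid, x1, x2) < x:
            lo = mid
        else:
            hi = mid
    return bez((lo + hi) / 2, y1, y2)
```

This preset is symmetric about (0.5, 0.5), so motion eases in and out equally: slow start, fast middle, slow settle.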
SKILL.md is auto-generated — don't edit it directly. Instead:
- Edit `SKILL.template.md` (the prose and instructions)
- Edit scripts in `scripts/` (the actual code)
- Push to `main` — GitHub Actions runs `build_skill.py`, which:
  - Reads `SKILL.template.md`
  - Finds all `<!-- EMBED:scripts/filename.py -->` markers
  - Injects the actual script contents into collapsible `<details>` blocks
  - Commits the updated `SKILL.md` automatically
This means SKILL.md is always a self-contained document — when someone pastes it into Claude Projects, Claude has the actual script code right there, no cloning needed.
To build locally:
```
python build_skill.py
```
PRs welcome! See CONTRIBUTING.md.
Ideas for contributions:
- New animation presets (dance, swim, fly, cast spell...)
- Support for non-humanoid rigs (animals, vehicles, abstract)
- Better occlusion detection for heavily layered characters
- Blender / Aseprite asset pipeline integration
- Spine Runtimes integration examples (Unity, Godot, Phaser)
PolyForm Noncommercial 1.0.0 — free for all non-commercial use. Commercial use requires a separate license — contact us at genielabs.tech.
Built with:
- Spine by Esoteric Software — the industry-standard 2D animation tool
- OpenCV — SIFT feature detection and RANSAC
- Claude AI — the brain
- OpenClaw — the agent runtime
Made with ✨ by the community · Star if useful!
