No pipeline code. No SDK learning curve. Just tag your cells and deploy.
Latest News 🔥
- [2026/04] Kubeflow Kale v2.0 is officially released with support for Kubeflow Pipelines 2.16.0!
- [2026/04] The new Kubeflow Kale docs are now available!
KALE (Kubeflow Automated pipeLines Engine) is a tool designed to simplify the deployment of Kubeflow Pipelines workflows.
You've built an amazing ML model in a Jupyter notebook. Now you need to run it in production, schedule it, or scale it up. Usually that means rewriting everything as a Kubeflow Pipeline — learning the SDK, restructuring your code, and debugging YAML.
Kale eliminates all of that.
Tag your notebook cells with simple labels like imports, step, or skip. Kale analyzes your code, detects dependencies between steps, and generates a production-ready Kubeflow Pipeline. Your notebook stays exactly as it is.
✅ push-button pipeline generation (tag cells, click "compile")
✅ automatic dependency detection
✅ same notebook for dev and production
✅ create pipelines directly in notebook without looking at YAML
✅ requires no direct knowledge of KFP SDK
✅ no rewriting code into pipeline components
📖 Documentation: https://kale.kubeflow.org
Watch how Kale converts a notebook to a pipeline in minutes:
- Python 3.11+
- Kubeflow Pipelines v2.16.0+ (install guide)
- Kubernetes cluster (minikube, kind, or any K8s cluster)
# Clone and set up (v2.0 coming soon to PyPI)
git clone https://github.com/kubeflow-kale/kale.git
cd kale && make dev
# Launch JupyterLab with Kale
make jupyter

- Open any notebook from examples
- Click the Kale icon in the left sidebar
- Toggle Enable to see your notebook as a pipeline graph
- Click Compile and Run — that's it!
# Or use the CLI
kale --nb examples/base/candies_sharing.ipynb --kfp_host http://localhost:8080 --run_pipeline

Tag your notebook cells to define pipeline structure:
| Cell Tag | What It Does |
|---|---|
| `imports` | Libraries needed by all steps (pandas, sklearn, etc.) |
| `functions` | Helper functions available to all steps |
| `pipeline-parameters` | Variables users can tune between runs |
| `pipeline-metrics` | Metrics to track in the KFP UI |
| `step:step_name` | A pipeline step (Kale auto-detects dependencies!) |
| `skip` | Exploratory code to exclude from the pipeline |
Learn more about cell types in Kale documentation.
Pro tip: Kale automatically detects which variables flow between steps. You don't need to specify inputs and outputs — just write natural notebook code.
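As an illustrative sketch, here is what a tagged notebook's cells might contain (the cell boundaries, tag names, and variables below are hypothetical; tags are applied through JupyterLab cell metadata via the Kale panel):

```python
# --- cell tagged `imports`: shared by every step ---
import statistics

# --- cell tagged `pipeline-parameters`: tunable between runs ---
threshold = 0.5

# --- cell tagged `step:preprocess` ---
values = [0.1, 0.4, 0.8, 0.9]
filtered = [v for v in values if v > threshold]

# --- cell tagged `step:train` ---
# Kale sees that `filtered` is produced in `preprocess` and consumed
# here, so it wires the two steps together without explicit I/O specs.
score = statistics.mean(filtered)
```

Note that no inputs or outputs are declared anywhere: the data dependency on `filtered` is what turns two tagged cells into two connected pipeline steps.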
Check out the example notebooks to see these in action.
Make sure that Kubeflow Pipelines is running and accessible to Kale.
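To sanity-check connectivity before compiling, a minimal stdlib probe like the following can help (the host, port, and healthz path here are assumptions; adjust them to your deployment):

```python
import urllib.request

def kfp_reachable(host="http://localhost:8080", timeout=3):
    """Return True if the KFP API answers on its healthz endpoint."""
    try:
        with urllib.request.urlopen(f"{host}/apis/v2beta1/healthz",
                                    timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, HTTP error, etc.
        return False
```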
Install from PyPI:
pip install "jupyterlab>=4.0.0" kubeflow-kale[jupyter]
jupyter lab

You can also install Kale directly from source:
git clone https://github.com/kubeflow-kale/kale.git
make clean && make dev && make jupyter-kfp
See CONTRIBUTING.md for detailed setup instructions.
Run Kale locally with Docker:
make docker-build # Build the image
make docker-run    # JupyterLab at http://localhost:8889

Connect to a real KFP cluster:
make kfp-serve # Serve dev wheel
kubectl port-forward -n kubeflow svc/ml-pipeline 8080:8888 # Forward KFP API
make docker-run    # Start container

We'd love to have you!
- Questions? Join #kubeflow on Slack
- Found a bug? Open an issue
- Want to contribute? See CONTRIBUTING.md
- FAQ — Common questions and known limitations
- Blog: Automating Jupyter Deployments
- KubeCon NA 2019: From Notebook to Pipeline
- KubeCon EU 2020: Notebook to Pipeline with HP Tuning
Ready to simplify your ML workflow?
Get started now · Road to 2.0 · Join Slack
Thanks to everyone who has contributed to Kale!


