Production-ready Python framework for building AI applications fast
RapidAI is designed for one thing: getting from idea to deployed AI application in under an hour. When your boss asks you to POC the latest AI tool, this is the framework you reach for.
A web framework that bridges the gap between Flask's simplicity and Django's batteries-included approach, but optimized specifically for modern AI development. Think of it as "the Rails of AI apps" - convention over configuration, but for LLM-powered applications.
- Zero-config LLM integration - Built-in support for Anthropic Claude, OpenAI, Cohere with unified interface
- Streaming by default - SSE/WebSocket streaming built into routes, not bolted on
- Background jobs - Async task processing with automatic retry and job tracking
- Built-in monitoring - Token usage, cost tracking, and metrics dashboard
- UI components - Pre-built chat interfaces with customizable themes
- RAG system - Document loading, embeddings, vector DB integration for retrieval
- Prompt management - Version control and templating for prompts with Jinja2
- Smart caching - Semantic caching using embedding similarity
- Testing utilities - TestClient, MockLLM, MockMemory for easy testing
- CLI tool - Project templates, dev server, deployment, and more
A complete chat endpoint:

```python
from rapidai import App, LLM

app = App()
llm = LLM("claude-3-haiku-20240307")

@app.route("/chat", methods=["POST"])
async def chat(message: str):
    response = await llm.complete(message)
    return {"response": response}

if __name__ == "__main__":
    app.run()
```

Streaming works the same way - yield chunks from the route handler:

```python
from rapidai import App, LLM
app = App()
llm = LLM("claude-3-haiku-20240307")
@app.route("/chat", methods=["POST"])
async def chat(message: str):
    async for chunk in llm.stream(message):
        yield chunk
if __name__ == "__main__":
    app.run()
```

Background jobs run outside the request cycle, with retries and job tracking:

```python
from rapidai import App, background
app = App()
@background(max_retries=3)
async def process_document(doc_id: str):
    # Long-running task runs in background
    await analyze_document(doc_id)
@app.route("/process", methods=["POST"])
async def start_processing(doc_id: str):
    job = await process_document(doc_id)
    return {"job_id": job.id, "status": job.status}
```

The `@monitor` decorator adds token and cost tracking to any route:

```python
from rapidai import App, LLM, monitor
app = App()
llm = LLM("claude-3-haiku-20240307")
@app.route("/chat", methods=["POST"])
@monitor() # Automatically tracks tokens and costs
async def chat(message: str):
    return await llm.complete(message)
@app.route("/metrics")
async def metrics():
    return app.get_metrics_html()  # Built-in dashboard
```

Install the base package from PyPI:

```bash
pip install rapidai-framework
```

Install with specific features:

```bash
# Anthropic Claude support
pip install "rapidai-framework[anthropic]"
# OpenAI support
pip install "rapidai-framework[openai]"
# RAG (document loading, embeddings, vector DB)
pip install "rapidai-framework[rag]"
# Redis (for caching and memory)
pip install "rapidai-framework[redis]"
# Everything
pip install "rapidai-framework[all]"
# Development tools
pip install "rapidai-framework[dev]"- β App class - Fast async web server with routing
- ✅ LLM clients - Anthropic Claude, OpenAI, Cohere with unified interface
- ✅ Streaming - Built-in SSE support for real-time responses
- ✅ Memory - Conversation history (in-memory and Redis)
- ✅ Caching - Semantic caching with embedding similarity
- ✅ Config - Environment-based configuration with Pydantic
- ✅ Background jobs - `@background` decorator with retry logic and job tracking
- ✅ Monitoring - `@monitor` decorator with token/cost tracking and HTML dashboard
- ✅ RAG system - Document loading (PDF, DOCX, TXT, HTML, MD), embeddings, vector DB
- ✅ Prompt management - Template-based prompts with Jinja2 and versioning
- ✅ UI components - Pre-built chat interfaces with themes and customization
- ✅ Testing utilities - TestClient, MockLLM, MockMemory for easy testing (see the sketch after this list)
- ✅ CLI tool - `rapidai new`, `rapidai dev`, `rapidai deploy`, `rapidai test`
- ✅ Project templates - Chatbot, RAG, Agent, API templates
- ✅ Type hints - Full type coverage for IDE support
- ✅ Documentation - Complete guides and API references at https://shaungehring.github.io/rapidai/
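The testing utilities are meant to stand in for real components so routes can be exercised without live API calls. Below is a minimal sketch of that idea; the `rapidai.testing` import path, the `responses=` argument, and the `TestClient.post` signature are assumptions for illustration, not the documented API:

```python
# Illustrative sketch - exact import paths and signatures are assumptions.
import asyncio

from rapidai import App
from rapidai.testing import TestClient, MockLLM  # names from the feature list; module path assumed

app = App()
llm = MockLLM(responses=["canned answer"])  # returns fixed text instead of calling a provider

@app.route("/chat", methods=["POST"])
async def chat(message: str):
    return {"response": await llm.complete(message)}

async def test_chat():
    client = TestClient(app)
    result = await client.post("/chat", json={"message": "Hi"})
    assert result.json()["response"] == "canned answer"

if __name__ == "__main__":
    asyncio.run(test_chat())
```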
Version: 1.0.0 - Production Ready
See CHANGELOG.md for release notes.
Perfect for building:
- Chat applications - Customer support bots, AI assistants
- RAG systems - Document Q&A, knowledge bases (a rough flow sketch follows this list)
- Internal tools - AI-powered dashboards and workflows
- Data processing - Background jobs for document analysis
- AI APIs - REST endpoints with LLM integration
- Rapid prototypes - POCs and MVPs in under an hour
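Whatever the exact API, a RAG app follows the same shape: load documents, embed them into a vector store, retrieve the most relevant chunks per question, and feed them to the LLM. The sketch below shows that flow only; `DocumentLoader`, `VectorStore`, and their methods are illustrative names, not necessarily RapidAI's:

```python
# Conceptual RAG flow - class and method names below are assumptions, not RapidAI's documented API.
from rapidai import App, LLM
from rapidai.rag import DocumentLoader, VectorStore  # hypothetical import path

app = App()
llm = LLM("claude-3-haiku-20240307")
store = VectorStore()  # embeddings + vector DB, per the feature list

# Index documents once at startup (PDF, DOCX, TXT, HTML, MD are the listed formats)
for doc in DocumentLoader("./docs").load():
    store.add(doc)

@app.route("/ask", methods=["POST"])
async def ask(question: str):
    chunks = store.retrieve(question, top_k=3)   # most similar chunks by embedding distance
    context = "\n\n".join(chunks)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return {"answer": await llm.complete(prompt)}

if __name__ == "__main__":
    app.run()
```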
- Convention over configuration - Sensible defaults, minimal boilerplate
- Provider agnostic - Swap OpenAI for Anthropic with one line (example below)
- Async-first - Built on modern async/await patterns
- Type-safe - Full type hints for excellent IDE support
- Batteries included - Everything you need, nothing you don't
- Production ready - Monitoring, testing, deployment from day one
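"One line" refers to the model identifier passed to the unified `LLM` constructor; the route code does not change. A quick illustration, assuming the constructor accepts any supported provider's model name (the OpenAI model string here is only an example):

```python
from rapidai import LLM

llm = LLM("claude-3-haiku-20240307")   # Anthropic Claude
# llm = LLM("gpt-4o-mini")             # swap providers by changing only this string (example model name)
```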
Complete documentation available at https://shaungehring.github.io/rapidai/
- Getting Started Guide
- LLM Integration
- Background Jobs
- Monitoring & Metrics
- RAG System
- UI Components
- Testing Guide
- Deployment
- API Reference
RapidAI includes a powerful CLI for project scaffolding and management:
```bash
# Create a new project from template
rapidai new my-chatbot --template chatbot

# Start development server with hot reload
rapidai dev

# Run tests
rapidai test

# Deploy to cloud platforms
rapidai deploy --platform vercel

# Generate documentation
rapidai docs
```

Available templates:
- `chatbot` - Simple chat application
- `rag` - RAG system with document Q&A
- `agent` - AI agent with tools
- `api` - REST API with LLM endpoints
To set up a development environment:

```bash
# Clone the repository
git clone https://github.com/shaungehring/rapidai.git
cd rapidai

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install in editable mode with dev dependencies
pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install

# Run tests
pytest

# Run tests with coverage
pytest --cov=rapidai tests/

# Type check
mypy rapidai

# Lint and format
ruff check rapidai
ruff format rapidai
```

- Documentation - https://shaungehring.github.io/rapidai/
- Bug Reports - GitHub Issues
- Feature Requests - GitHub Discussions
- PyPI Package - pypi.org/project/rapidai-framework
- Changelog - CHANGELOG.md
RapidAI is available on PyPI. To publish a new version:
```bash
# Test on TestPyPI first
./scripts/publish.sh test

# Publish to production PyPI
./scripts/publish.sh prod
```

See PUBLISHING.md for the complete publishing guide.
MIT License - see LICENSE file for details.
We welcome contributions! Whether it's:
- Bug fixes
- New features
- Documentation improvements
- Test coverage
- Ideas and suggestions
See CONTRIBUTING.md for guidelines on how to contribute.
If you find RapidAI helpful, please consider:
- Starring the GitHub repository
- Sharing with your network
- Reporting issues you encounter
- Suggesting new features
Built with ❤️ for AI engineers who move fast

Version: 1.0.0 | Status: Production Ready