Context engineering engine for AI agents with Sessions and Memory management
ContextIQ is an open-source, production-ready context engineering platform that provides persistent memory and session management for AI agents. Built as an alternative to Google's Agent Engine Memory Bank, ContextIQ offers both declarative and procedural memory capabilities with a truly framework-agnostic design.
- Sessions Management: Chronological conversation tracking with events and state
- Declarative Memory: Long-term memory for user preferences, facts, and context
- Procedural Memory: Workflow patterns, skills, and agent learning capabilities (planned)
- Multi-Agent Support: Built-in coordination patterns for multi-agent systems
- Framework Agnostic: Direct REST APIs work with any agent framework (ADK, LangGraph, CrewAI, custom)
- Production Ready: Scalable architecture with observability, security, and deployment strategies
- Core Services: Sessions, Memory, and API Gateway all functional
- Database Layer: PostgreSQL with Alembic migrations for schema management
- Message Queue: RabbitMQ integration for asynchronous processing
- Background Workers: Memory generation and consolidation workers
- Observability: Prometheus metrics, OpenTelemetry tracing, structured logging
- Authentication: JWT tokens and API keys with role-based permissions
- Migration Tooling: Comprehensive database migration scripts and commands
ContextIQ uses a microservices architecture:
- Sessions Service: Manages conversation sessions, events, and temporary state
- Memory Service: Handles declarative memory generation, consolidation, and retrieval
- Procedural Memory Service: Stores workflows, skills, and agent trajectories
- Extraction Worker: Background worker for LLM-powered memory extraction
- Consolidation Worker: Background worker for memory merging and conflict resolution
- API Gateway: Unified entry point with routing and rate limiting
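The services communicate through RabbitMQ: for example, when memories should be generated from a session, a job message is queued for the Extraction Worker. The payload below is purely illustrative — the real queue names and wire format live in the `shared/messaging` code — but it shows the shape such a contract can take.

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class MemoryExtractionJob:
    """Hypothetical job message from the Sessions service to the
    Extraction Worker; field names are illustrative, not ContextIQ's
    actual wire format."""
    session_id: str
    user_id: str
    scope: dict

    def to_message(self) -> bytes:
        # RabbitMQ message bodies are bytes; serialize as UTF-8 JSON.
        return json.dumps(asdict(self)).encode("utf-8")

    @classmethod
    def from_message(cls, body: bytes) -> "MemoryExtractionJob":
        return cls(**json.loads(body.decode("utf-8")))

job = MemoryExtractionJob("sess_1", "user_123", {"user_id": "user_123"})
round_tripped = MemoryExtractionJob.from_message(job.to_message())
```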
- Backend: Python 3.11 + FastAPI
- Database: PostgreSQL 15+ (with Alembic migrations)
- Vector Store: Qdrant
- Cache: Redis 7+
- Message Queue: RabbitMQ
- LLM Integration: LiteLLM (supports OpenAI, Anthropic, Google, etc.)
- Deployment: Docker + Kubernetes
- Python 3.11+
- Docker & Docker Compose
- uv (Python package manager)
- Make (for convenience commands)
- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/contextiq.git
  cd contextiq
  ```

- Install dependencies:

  ```bash
  make dev-install
  ```

- Configure environment:

  ```bash
  cp .env.example .env
  # Edit .env with your API keys and configuration
  # IMPORTANT: Set your LLM API keys (OPENAI_API_KEY or ANTHROPIC_API_KEY)
  # Configure authentication settings (AUTH_JWT_SECRET_KEY, AUTH_REQUIRE_AUTH)
  ```

- Start development environment:

  ```bash
  make dev
  ```

  This will:
  - Start all services (PostgreSQL, Redis, Qdrant, RabbitMQ, all microservices)
  - Run database migrations
  - Initialize Qdrant collections
  - Initialize RabbitMQ queues

- Verify services are running:

  ```bash
  make services-health
  ```
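If you prefer to script this check yourself instead of using the Make target, a sketch like the following works, assuming each service exposes a `/health` endpoint on the ports listed under the access points below (the exact path is an assumption — see the Deployment Guide):

```python
from urllib.error import URLError
from urllib.request import urlopen

# Ports taken from the access-points list; the /health path is an assumption.
SERVICES = {
    "gateway": "http://localhost:8000",
    "sessions": "http://localhost:8001",
    "memory": "http://localhost:8002",
    "procedural": "http://localhost:8003",
}

def check_all(services: dict[str, str]) -> dict[str, bool]:
    """Probe each service's health endpoint; unreachable counts as down."""
    results: dict[str, bool] = {}
    for name, base in services.items():
        try:
            with urlopen(f"{base}/health", timeout=2.0) as resp:
                results[name] = resp.status == 200
        except (URLError, OSError):
            results[name] = False
    return results

def summarize(results: dict[str, bool]) -> str:
    """Reduce per-service booleans to a single status line."""
    down = sorted(name for name, ok in results.items() if not ok)
    return "all healthy" if not down else "unhealthy: " + ", ".join(down)

# Example: print(summarize(check_all(SERVICES)))
```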
Once running, you can access:
- API Gateway: http://localhost:8000
- Sessions API: http://localhost:8001
- Memory API: http://localhost:8002
- Procedural Memory API: http://localhost:8003
- RabbitMQ Management UI: http://localhost:15672 (user: contextiq, pass: contextiq_dev_password)
- Qdrant Dashboard: http://localhost:6333/dashboard
```bash
# Format code (runs Black)
make format

# Run linting
make lint

# Run type checking
make type-check

# Run all tests
make test

# Run tests with coverage
make test-cov

# Run all quality checks (format + lint + type-check + test)
make check

# View logs from all services
make docker-logs

# Stop all services
make docker-down

# Clean up (removes volumes)
make docker-clean
```

```bash
# Initialize database and extensions
make db-init

# Create a new migration
make db-create MESSAGE="add new table"

# Upgrade to latest migration
make db-upgrade

# Downgrade one revision
make db-downgrade REV=-1

# Show current revision
make db-current

# Show migration history
make db-history

# Reset database (WARNING: deletes all data)
make db-reset
```

For more details, see the Database Migrations Guide.
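`make db-create` generates an Alembic revision file under `alembic/versions/`. As a reference point, a hand-written revision typically looks like the sketch below (the table and column names are made up for illustration, and the fragment only runs inside Alembic's migration context via `make db-upgrade`):

```python
"""add new table"""
import sqlalchemy as sa
from alembic import op

# Revision identifiers used by Alembic (values here are placeholders).
revision = "abc123"
down_revision = None

def upgrade() -> None:
    # Hypothetical table, for illustration only.
    op.create_table(
        "example_notes",
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("body", sa.Text, nullable=False),
    )

def downgrade() -> None:
    op.drop_table("example_notes")
```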
You can run services individually without Docker:
```bash
# Run Sessions service
make run-sessions

# Run Memory service
make run-memory

# Run Procedural Memory service
make run-procedural

# Run background workers
make run-workers

# Run API Gateway
make run-gateway
```

- System Architecture - Complete architecture overview
- Data Models & Schemas
- Agent Engine Memory Bank Research
- API Usage Guide - API examples and best practices
- Authentication Guide - JWT tokens and API keys
- Database Migrations - Schema management
- Deployment Guide - Local and production deployment
- Development Guide - Development environment setup
```python
import httpx

async with httpx.AsyncClient() as client:
    # 1. Create a session
    response = await client.post(
        "http://localhost:8000/api/v1/sessions",
        json={
            "user_id": "user_123",
            "agent_id": "my_agent",
            "scope": {"user_id": "user_123", "project": "alpha"},
        },
    )
    session = response.json()
    session_id = session["id"]
    print(f"Session created: {session_id}")

    # 2. Append an event to the session
    response = await client.post(
        f"http://localhost:8000/api/v1/sessions/{session_id}/events",
        json={
            "author": "user",
            "invocation_id": "inv_1",
            "content": {
                "role": "user",
                "parts": [{"text": "What's the weather today?"}],
            },
        },
    )

    # 3. Generate memories from the session
    response = await client.post(
        "http://localhost:8000/api/v1/memories/generate",
        json={
            "source_type": "session",
            "source_reference": session_id,
            "scope": {"user_id": "user_123"},
            "config": {
                "wait_for_completion": False  # Async generation
            },
        },
    )
    job = response.json()
    print(f"Memory generation job started: {job['id']}")

    # 4. Search memories
    response = await client.post(
        "http://localhost:8000/api/v1/memories/search",
        json={
            "scope": {"user_id": "user_123"},
            "search_query": "What are the user's preferences?",
            "top_k": 5,
        },
    )
    memories = response.json()
    for memory in memories:
        print(f"- {memory['fact']} (confidence: {memory['confidence']})")
```

```bash
# Run all tests
make test

# Run only unit tests
make test-unit

# Run only integration tests
make test-integration

# Run tests with coverage report
make test-cov
```

We welcome contributions! Please see our Contributing Guide for details.
- Create a feature branch
- Make your changes
- Run `make check` to ensure all quality checks pass
- Commit your changes (pre-commit hooks will run automatically)
- Push and create a pull request
We use:
- Black for code formatting (100 character line length)
- Ruff for linting
- mypy for type checking
- pytest for testing
All checks run automatically via pre-commit hooks and CI/CD.
```
ContextIQ/
├── services/           # Microservices
│   ├── sessions/       # Sessions service
│   ├── memory/         # Memory service
│   ├── procedural/     # Procedural memory service
│   ├── workers/        # Background workers
│   └── api-gateway/    # API gateway
├── shared/             # Shared code across services
│   ├── database/       # Database utilities
│   ├── models/         # Common Pydantic models
│   ├── cache/          # Redis utilities
│   └── messaging/      # RabbitMQ utilities
├── tests/              # Integration tests
├── docs/               # Documentation
├── scripts/            # Utility scripts
├── infrastructure/     # IaC (Terraform, K8s manifests)
└── alembic/            # Database migrations
```
MIT License - see LICENSE file for details.
- GitHub Issues: github.com/yourusername/contextiq/issues
- Documentation: docs/
- Discussions: github.com/yourusername/contextiq/discussions
- Complete Sessions Service implementation
- Complete Memory Service implementation
- Implement background workers (Memory generation, Consolidation)
- API Gateway with routing and health checks
- Database migrations with Alembic
- Authentication (JWT tokens, API keys)
- Observability (Prometheus metrics, OpenTelemetry tracing)
- RabbitMQ message queue integration
- Comprehensive documentation
- Rate limiting implementation
- Enhanced observability dashboards
- Complete Procedural Memory Service implementation
- OpenAPI/Swagger documentation enhancements
- Python SDK
- TypeScript SDK
- ADK integration adapter
- LangGraph integration examples
- CrewAI integration examples
- Kubernetes deployment manifests
- Security audit
- Performance benchmarks
- Cloud-managed offering
ContextIQ is inspired by Google's Context Engineering: Sessions & Memory whitepaper and is designed as an open-source, self-hostable alternative to Vertex AI Agent Engine Memory Bank.
Built with ❤️ by the ContextIQ Team