Dream it, Pixel it — by Pink Pixel
The Deep Research MCP Server is a Model Context Protocol (MCP)-compliant server for comprehensive, up-to-date web research. It leverages Tavily's Search & Crawl APIs to gather, aggregate, and structure information for LLM-powered documentation generation.
- MCP Server (Node.js, TypeScript)
- Stdio Transport for agent/server communication
- Tavily API Integration (Search + Crawl)
- Configurable Documentation Prompt (default, ENV, or per-request)
- Structured JSON Output for LLMs
Architecture Diagram (Text)
[LLM/Agent]
     │
     ▼
[Deep Research MCP Server]
     ├─> Tavily Search API
     └─> Tavily Crawl API
     │
     ▼
[Aggregated JSON Output + Documentation Instructions]
- Multi-Step Research: Combines AI-powered search with deep content crawling
- Structured Output: JSON with query, search summary, findings, and doc instructions
- Configurable Prompts: Override documentation style via ENV or per-request
- Configurable Output Path: Specify where research documents and images should be saved
- Granular Control: Fine-tune search/crawl with many parameters
- MCP Compliant: Plug-and-play for agent ecosystems
- Resource Optimized: Memory tracking, auto-garbage collection, and hardware acceleration support
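The structured output described above can be sketched as a TypeScript shape. The field names here are illustrative assumptions for this sketch, not the server's exact schema — see README.md for the authoritative output format.

```typescript
// Illustrative shape of the aggregated research output.
// Field names are assumptions, not the server's exact schema.
interface ResearchFinding {
  url: string;
  title: string;
  content: string; // crawled page content
}

interface ResearchOutput {
  query: string;                     // the original research query
  searchSummary: string;             // summary from Tavily search
  findings: ResearchFinding[];       // one entry per crawled source
  documentationInstructions: string; // default, ENV, or per-request prompt
}

// A minimal example value an LLM might receive:
const output: ResearchOutput = {
  query: "What is the Model Context Protocol?",
  searchSummary: "MCP is an open protocol for connecting LLMs to tools and data.",
  findings: [
    {
      url: "https://modelcontextprotocol.io",
      title: "Model Context Protocol",
      content: "MCP standardizes how applications provide context to LLMs...",
    },
  ],
  documentationInstructions: "Write a concise technical overview in Markdown.",
};

console.log(output.findings.length); // → 1
```

Because every finding carries its source URL, the downstream LLM can cite sources directly when generating documentation.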
- @modelcontextprotocol/sdk (v1.11.1) — MCP server framework
- @tavily/core (v0.5.2) — Tavily Search & Crawl APIs
- dotenv (v16.5.0) — Environment variable management
deep-research-mcp/
├── dist/ # Compiled JS output
├── src/
│ └── index.ts # Main server logic
├── assets/ # Project assets (logo)
├── README.md # Full documentation
├── OVERVIEW.md # (You are here!)
├── example_config.json # Example MCP config
├── package.json # Project metadata & dependencies
├── tsconfig.json # TypeScript config
└── CHANGELOG.md # Version history and changes
- Install & Run: `npx @pinkpixel/deep-research-mcp` (quickest), or clone the repo and run `npm install`, then `npm start`
- Configure: Set `TAVILY_API_KEY` in your environment (see README.md)
- Integrate: Connect to your LLM/agent via MCP stdio
- Customize: Override the documentation prompt via ENV or a tool argument
- Output: Specify where research documents and images should be saved
- Performance: Enable hardware acceleration with the `hardware_acceleration: true` parameter
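For the Integrate step, an MCP client configuration along these lines is typical. This is a sketch following the common `mcpServers` convention with a placeholder API key — see example_config.json in this repo for the authoritative version.

```json
{
  "mcpServers": {
    "deep-research": {
      "command": "npx",
      "args": ["-y", "@pinkpixel/deep-research-mcp"],
      "env": {
        "TAVILY_API_KEY": "your-tavily-api-key"
      }
    }
  }
}
```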
- Optimized Resource Usage: Reduced default crawl limits to prevent excessive memory consumption
- MCP Protocol Compliance: Fixed response structure to properly follow MCP specifications
- Improved Error Handling: Better error reporting and handling of timeouts
- Performance Optimizations: Added optional hardware acceleration (WebGPU) support
- Smarter Crawling: Added domain validation to focus crawling and prevent overly broad searches
See CHANGELOG.md for complete version history
- See README.md for full usage, parameters, and troubleshooting
- Example config: example_config.json
- License: MIT
- Node.js: >=18.0.0 required
Made with ❤️ by Pink Pixel