# 🧠 A2A Multi-Agent Fact Checker

This project demonstrates a collaborative multi-agent system built with the Agent2Agent SDK (A2A) and OpenAI, where a top-level Auditor agent coordinates the workflow to verify facts. The Critic agent gathers evidence via live internet searches using DuckDuckGo through the Model Context Protocol (MCP), while the Reviser agent analyzes and refines the conclusion using internal reasoning alone. The system showcases how agents with distinct roles and tools can collaborate under orchestration.

> [!TIP]
> ✨ No configuration needed: run it with a single command.

*A2A Multi-Agent Fact Check Demo*

## 🚀 Getting Started

### Requirements

- Docker with Docker Compose
- An OpenAI API key

### Run the project

Create a `secret.openai-api-key` file with your OpenAI API key:

```
sk-...
```
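On macOS or Linux, you can create the file straight from the shell; the key value below is a placeholder, not a real key:

```shell
# Write the API key to the secret file (replace the placeholder with your key).
printf '%s' 'sk-your-key-here' > secret.openai-api-key
# Restrict permissions so only you can read the key.
chmod 600 secret.openai-api-key
```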

Then run:

```shell
docker compose up --build
```

Everything runs from the container. Open http://localhost:8080 in your browser and then chat with the agents.

## 🧠 Inference Options

By default, this project uses OpenAI to handle LLM inference. If you'd prefer to use a local LLM instead, run:

```shell
docker compose -f compose.dmr.yaml up
```

Using Docker Offload with GPU support, you can run the same demo with a larger model that takes advantage of a more powerful GPU on the remote instance:

```shell
docker compose -f compose.dmr.yaml -f compose.offload.yaml up --build
```
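Compose merges files left to right, so the offload file only needs to declare what changes relative to `compose.dmr.yaml`. A hypothetical sketch of such an override using Compose's `models` element; the model name and tag are assumptions, not the repo's actual values:

```yaml
# Hypothetical compose.offload.yaml sketch; names are illustrative assumptions.
models:
  llm:
    model: ai/qwen3:14B-Q6_K   # a larger model for the remote GPU instance
```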

## ❓ What Can It Do?

This system performs multi-agent fact verification, coordinated by an Auditor:

- 🧑‍⚖️ **Auditor**
  - Orchestrates the process from input to verdict.
  - Delegates tasks to the Critic and Reviser agents.
- 🧠 **Critic**
  - Uses DuckDuckGo via MCP to gather real-time external evidence.
- ✍️ **Reviser**
  - Refines and verifies the Critic’s conclusions using only reasoning.

🧠 All agents use the Docker Model Runner for LLM-based inference.

Example question:

> “Is the universe infinite?”

## 🧱 Project Structure

| File/Folder | Purpose |
| --- | --- |
| `compose.yaml` | Launches the app and the MCP DuckDuckGo Gateway |
| `Dockerfile` | Builds the agent container |
| `src/AgentKit` | Agent runtime |
| `agents/*.yaml` | Agent definitions |
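The files in `agents/*.yaml` presumably declare each agent's model, instructions, and tools. A hypothetical sketch of what the Critic's definition might look like; every field name here is an assumption about the schema, not the repo's actual format:

```yaml
# Hypothetical agent definition; field names are illustrative assumptions.
name: critic
model: openai/gpt-4o-mini      # swapped for a local model under compose.dmr.yaml
instruction: |
  Search the web for evidence about the user's claim and summarize
  what supports or refutes it.
tools:
  - type: mcp
    server: duckduckgo         # the MCP DuckDuckGo gateway from compose.yaml
```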

## 🔧 Architecture Overview

```mermaid
flowchart TD
    input[📝 User Question] --> auditor[🧑‍⚖️ Auditor Sequential Agent]
    auditor --> critic[🧠 Critic Agent]
    critic -->|uses| mcp[MCP Gateway<br/>DuckDuckGo Search]
    mcp --> duck[🌐 DuckDuckGo API]
    duck --> mcp --> critic
    critic --> reviser[(✍️ Reviser Agent<br/>No tools)]
    auditor --> reviser
    reviser --> auditor
    auditor --> result[✅ Final Answer]

    critic -->|inference| model[(🧠 Docker Model Runner<br/>LLM)]
    reviser -->|inference| model

    subgraph Infra
      mcp
      model
    end
```

- The Auditor is a Sequential Agent: it coordinates the Critic and Reviser agents to verify user-provided claims.
- The Critic agent performs live web searches through DuckDuckGo using an MCP-compatible gateway.
- The Reviser agent refines the Critic’s conclusions using internal reasoning alone.
- All agents run inference through a Docker-hosted Model Runner, enabling fully containerized LLM reasoning.

## 🤝 Agent Roles

| Agent | Tools Used | Role Description |
| --- | --- | --- |
| Auditor | ❌ None | Coordinates the entire fact-checking workflow and delivers the final answer. |
| Critic | ✅ DuckDuckGo via MCP | Gathers evidence to support or refute the claim. |
| Reviser | ❌ None | Refines and finalizes the answer without external input. |

## 🧹 Cleanup

To stop and remove containers and volumes:

```shell
docker compose down -v
```

## 📎 Credits