Cryptographic audit receipts for AI coding agents. Ed25519 + Merkle + RFC 3161 TSA. Supports Claude Code & Cursor.
ATLAST Protocol — The Trust Layer for the Agent Economy. Make AI agent work verifiable with Evidence Chain Protocol (ECP). Open source · MIT License · weba0.com
Append-only event kernel with Ed25519-signed Merkle checkpoints. Every AI action gets a verifiable receipt.
Cryptographic receipt system for AI agent accountability. Tamper-evident, hash-chained receipts with Ed25519/HMAC signing (a minimal sketch of this pattern follows the list).
LUMINA-30: non-binding civilizational boundary framework for preserving effective human refusal authority before irreversible external impact from advanced AI systems.
Measurement infrastructure for multi-turn AI interaction safety evaluation
AISS v2.0.0 — standalone release
Eziokwu: Heart-centered AI accountability framework for verifying algorithmic decision-making and organizing evidence for regulatory evaluation. Truth infrastructure built on Igbo philosophical principles.
Official CLG wrapper for Model Context Protocol: tamper-evident decision and outcome receipts and real-time mandate enforcement for MCP tool calls.
Research papers and exploratory scenario corpus for the JEP/HJS/JAC protocol stack.
Go SDK seed for creating and verifying JEP v0.6 events through the JEP API.
A surfaces bottlenecks in human-only workflows, while B targets agentic and human-in-the-loop (HITL) workflows to ensure accountability and prevent automation bias.
JEP API v0.6 seed: FastAPI service for creating and verifying JEP-Core events with Ed25519, detached JWS shape, ext/ext_crit, event hashes, and validation results.
JavaScript SDK seed for creating and verifying JEP v0.6 events through the JEP API.
OpenExecution Provenance Specification — implements AEGIS (Agent Execution Governance and Integrity Standard) for auditable, tamper-evident AI agent behavioral records. Apache 2.0.
Gamified accountability system for Claude Code workflows with progressive consequences, strikes, and rewards. Based on arXiv:2506.01347 (NSR) research.
Neutral reference framework for institutional accountability and post-incident review in high-risk autonomous AI systems.
Deterministic local proof harness for verifying AI agent recommendation claims against source-of-truth fixtures, with append-only outcome logs.
Python SDK seed for creating and verifying JEP v0.6 events through the JEP API.
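Several of the entries above describe the same underlying pattern: an append-only log of receipts whose entries are hash-chained, periodically checkpointed by signing a Merkle root with Ed25519. The Python sketch below illustrates that pattern using the `cryptography` package; the field names (`prev`, `event`, `hash`), the helper functions, and the checkpoint shape are assumptions for illustration only, not any listed project's actual format or API.

```python
# Hypothetical sketch: hash-chained receipts plus an Ed25519-signed Merkle checkpoint.
# Not the API of any repository listed above.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def make_receipt(prev_hash: str, event: dict) -> dict:
    """Append-only receipt: each entry commits to the previous entry's hash."""
    body = {"prev": prev_hash, "event": event}
    body["hash"] = sha256_hex(json.dumps(body, sort_keys=True).encode())
    return body


def merkle_root(leaf_hashes: list[str]) -> str:
    """Binary Merkle root over receipt hashes, duplicating the last node on odd levels."""
    level = [bytes.fromhex(h) for h in leaf_hashes]
    if not level:
        return sha256_hex(b"")
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()


# Build a short receipt chain for two hypothetical agent actions.
genesis = "0" * 64
r1 = make_receipt(genesis, {"tool": "edit_file", "path": "app.py"})
r2 = make_receipt(r1["hash"], {"tool": "run_tests", "result": "pass"})

# Checkpoint: sign the Merkle root of all receipt hashes with Ed25519.
signing_key = Ed25519PrivateKey.generate()
root = merkle_root([r1["hash"], r2["hash"]])
signature = signing_key.sign(bytes.fromhex(root))

# A verifier holding the public key checks the checkpoint signature, then
# recomputes the chain and Merkle root to detect tampered or reordered receipts.
signing_key.public_key().verify(signature, bytes.fromhex(root))
print("checkpoint root:", root)
```

In this shape, tampering with any earlier receipt changes its hash, which breaks both the `prev` link of the next receipt and the signed Merkle root, so verification fails without trusting the log's operator.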