
alignment-research

Here are 31 public repositories matching this topic...

Recursive law learning under measurement constraints. A falsifiable SQNT-inspired testbed for autodidactic rules: internalizing structure under measurement invariants and limited observability.

  • Updated Jan 19, 2026
  • Python

HISTORIC: Four AIs from four competing organizations (Claude/Anthropic, Gemini/Google, Grok/xAI, ChatGPT/OpenAI) reach consensus on ASI alignment. "Radical honesty is the minimum energy state for superintelligence." Based on V5.3 discussion, foundation for V6.0. January 30, 2026.

  • Updated Feb 7, 2026

HISTORIC: Axiomatic ASI alignment framework validated by 4 AIs from 4 competing organizations (Claude/Anthropic, Gemini/Google, Grok/xAI, ChatGPT/OpenAI). Core: Ξ = C × I × P / H. Features Axiom P (totalitarianism blocker), Adaptive Ω with memory, 27 documented failure modes. "Efficiency without plenitude is tyranny." January 30, 2026.

  • Updated Feb 1, 2026

Toy 5. An interactive proxy decay simulator showing how optimization pressure erodes the modeling capacity required to distinguish proxy from territory — producing self-reinforcing V(t) degradation that becomes progressively harder to correct. Companion simulation for The Depth Constraint — Series 2, Part 2.

  • Updated May 16, 2026
  • HTML
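The self-reinforcing V(t) degradation described above can be sketched as a toy discrete-time simulation. This is an illustrative assumption, not the repository's actual model: the function name, the `pressure` and `feedback` parameters, and the erosion rule are all hypothetical placeholders chosen to reproduce the qualitative behavior (decay that accelerates as modeling capacity is lost).

```python
def simulate_proxy_decay(v0=1.0, pressure=0.05, feedback=2.0, steps=100):
    """Toy V(t) trajectory (hypothetical, not the repo's model).

    Modeling capacity V starts at v0 and is eroded each step by
    optimization pressure. The erosion rate grows as capacity falls,
    so the degradation is self-reinforcing and progressively harder
    to correct -- the qualitative effect the simulator demonstrates.
    """
    v = v0
    trajectory = [v]
    for _ in range(steps):
        # Erosion accelerates as (1 - v) grows: the self-reinforcing term.
        erosion = pressure * (1.0 + feedback * (1.0 - v))
        v = max(0.0, v - erosion)  # capacity is bounded below by zero
        trajectory.append(v)
    return trajectory
```

Running the sketch yields a monotonically decreasing capacity curve whose per-step loss grows over time, matching the "self-reinforcing degradation" the description refers to.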

Toy 6. An interactive phase-space instrument mapping Ψ = S/D — the ratio of capability to modeling depth that determines whether a system is in the viable, transitional, or failure-mode-dominant regime. Includes the Inner Crossing animation. Companion simulation for The Inner Crossing — Series 2, Part 3.

  • Updated May 16, 2026
  • HTML
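The Ψ = S/D regime mapping above can be illustrated with a minimal classifier. The threshold values and function names here are hypothetical placeholders, not the instrument's calibrated boundaries; only the ratio itself and the three regime labels come from the description.

```python
def classify_regime(capability, depth, viable_max=1.0, transitional_max=3.0):
    """Map the ratio Psi = S/D to one of the three regimes named in the
    description. The cutoffs viable_max and transitional_max are
    illustrative assumptions, not values from the repository."""
    if depth <= 0:
        raise ValueError("modeling depth D must be positive")
    psi = capability / depth
    if psi <= viable_max:
        return "viable", psi
    if psi <= transitional_max:
        return "transitional", psi
    return "failure-mode-dominant", psi
```

For example, `classify_regime(4, 2)` gives Ψ = 2.0 and lands in the transitional band under these placeholder cutoffs.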
