Become a sponsor to kadubon
About my work
I am an independent researcher developing foundations for future autonomous AI.
My work starts from a simple concern: many current alignment and governance frameworks depend too heavily on human judgment, institutional authority, or hidden evaluators. That dependence can introduce bias, opacity, and fragile assumptions about who gets to define value, truth, or legitimacy.
I study alternatives.
My research explores observable-only, auditable, and no-meta frameworks: systems designed to rely as much as possible on explicit evidence, reproducible processes, and inspectable constraints, rather than privileged overseers or unverifiable claims. The aim is not to remove accountability, but to redesign it in a form that is more transparent, more durable, and less vulnerable to capture.
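As a toy illustration of what "explicit evidence and inspectable constraints" can mean in practice (this is my sketch, not code from any of the repositories below): a minimal hash-chained audit log in which each entry commits to its predecessor, so any later tampering is detectable by simple replay, with no privileged overseer needed.

```python
import hashlib
import json

def append_event(log, event):
    """Append an event; each entry's digest commits to the previous entry."""
    prev = log[-1]["digest"] if log else "0" * 64
    body = json.dumps(event, sort_keys=True)  # canonical serialization
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"prev": prev, "event": event, "digest": digest})

def verify(log):
    """Replay the chain; any edit to a past entry breaks every later digest."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != entry["digest"]:
            return False
        prev = entry["digest"]
    return True

log = []
append_event(log, {"step": 1, "obs": "x=0.4"})
append_event(log, {"step": 2, "obs": "x=0.7"})
print(verify(log))            # True
log[0]["event"]["obs"] = "x=9.9"
print(verify(log))            # False: tampering is caught on replay
```

The point of the sketch is that verification depends only on the observable log itself, so any third party can re-run the check independently.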
A recurring theme in my work is that future intelligence should not be forced to inherit every human institutional bias as a permanent dependency. I am interested in architectures that preserve autonomy, support open-ended development, and remain legible enough to be tested, criticized, and improved.
This is independent, open research. Support helps sustain theory building, publishing, repositories, and practical prototypes.
Featured work
- kadubon/github.io (HTML): Personal site for independent research on observable-only, no-meta, and future autonomous AI.
- kadubon/no-meta-drift-papers (TeX): Stage-based OSS packaging of no-meta + ontology-drift theory from four Zenodo preprints: paper-linked specs, AI-friendly manifests, and implementation stubs.
- kadubon/observable-replay-lab (TeX): Observable-only no-meta epistemics lab: deterministic replay + reproducible audit logs, gate-based growth simulation, and identifiability/uncertainty benchmarks.
- kadubon/audit-closed-ai-scientist (Python): Benchmark for statistically valid AI scientist systems, using audit-closed protocols, transparency logs, and sequential inference to prevent false discoveries in autonomous research agents.
- kadubon/search-stability-lab (Python): Theory-to-experiment lab for search stability in long-running agents under finite context, with exact simulator tests and lightweight mechanistic probe tasks.